
Business Research Methods

Meaning of research
Research in common parlance refers to a search for knowledge. One can also define research as a scientific
and systematic search for pertinent information on a specific topic. In fact, research is an art of scientific
investigation. The Advanced Learner's Dictionary of Current English lays down the meaning of research as
"a careful investigation or inquiry especially through search for new facts in any branch of knowledge."1
Redman and Mory define research as "a systematized effort to gain new knowledge."2 Some people consider
research as a movement, a movement from the known to the unknown. It is actually a voyage of discovery.
We all possess the vital instinct of inquisitiveness, for when the unknown confronts us, we wonder, and our
inquisitiveness makes us probe and attain fuller and fuller understanding of the unknown. This inquisitiveness
is the mother of all knowledge, and the method which man employs for obtaining knowledge of whatever
is unknown can be termed research.
Research is an academic activity and as such the term should be used in a technical sense.
According to Clifford Woody, research comprises "defining and redefining problems, formulating hypotheses
or suggested solutions; collecting, organizing and evaluating data; making deductions and reaching
conclusions; and at last carefully testing the conclusions to determine whether they fit the formulated
hypothesis." D. Slesinger and M. Stephenson in the Encyclopedia of Social Sciences define research as "the
manipulation of things, concepts or symbols for the purpose of generalizing to extend, correct or verify
knowledge, whether that knowledge aids in construction of theory or in the practice of an art."3 Research is,
thus, an original contribution to the existing stock of knowledge making for its advancement. It is the pursuit
of truth with the help of study, observation, comparison and experiment. In short, the search for knowledge
through objective and systematic method of finding solution to a problem is research. The systematic approach
concerning generalization and the formulation of a theory is also research. As such the term research refers
to the systematic method consisting of enunciating the problem, formulating a hypothesis, collecting the facts
or data, analyzing the facts and reaching certain conclusions either in the form of solution(s) towards the
concerned problem or in certain generalizations for some theoretical formulation.
Objectives of research
The purpose of research is to discover answers to questions through the application of scientific procedures.
The main aim of research is to find out the truth which is hidden and which has not been discovered as yet.
Though each research study has its own specific purpose, we may think of research objectives as falling into
a number of following broad groupings:
1. To gain familiarity with a phenomenon or to achieve new insights into it (studies with this object in view
are termed as exploratory or formulative research studies);
2. To portray accurately the characteristics of a particular individual, situation or a group (studies with this
object in view are known as descriptive research studies);
3. To determine the frequency with which something occurs or with which it is associated with something else
(studies with this object in view are known as diagnostic research studies);
4. To test a hypothesis of a causal relationship between variables (such studies are known as hypothesis-testing
research studies).
Motivation in research
What makes people undertake research? This is a question of fundamental importance. The possible motives
for doing research may be either one or more of the following:

1. Desire to get a research degree along with its consequential benefits;
2. Desire to face the challenge in solving unsolved problems, i.e., concern over practical problems initiates
research;
3. Desire to get intellectual joy of doing some creative work;
4. Desire to be of service to society;
5. Desire to get respectability.
However, this is not an exhaustive list of factors motivating people to undertake research studies.
Many more factors such as directives of government, employment conditions, curiosity about new things,
desire to understand causal relationships, social thinking and awakening, and the like may as well motivate
(or at times compel) people to perform research operations.
Significance of Research
"All progress is born of inquiry. Doubt is often better than overconfidence, for it leads to inquiry, and
inquiry leads to invention" is a famous remark of Hudson Maxim, in the context of which the significance of
research can well be understood. Increased amounts of research make progress possible. Research inculcates scientific
and inductive thinking and it promotes the development of logical habits of thinking and organization.
The role of research in several fields of applied economics, whether related to business or to the economy as
a whole, has greatly increased in modern times. The increasingly complex nature of business and
government has focused attention on the use of research in solving operational problems. Research, as an aid
to economic policy, has gained added importance, both for government and business.
Research provides the basis for nearly all government policies in our economic system.
For instance, a government's budget rests in part on an analysis of the needs and desires of the people and on
the availability of revenues to meet these needs. The cost of needs has to be equated to probable revenues
and this is a field where research is most needed. Through research we can devise alternative policies and
can as well examine the consequences of each of these alternatives.
Pure and applied research
There are different types of research from pure research to applied research. But what does this mean?
Pure research or curiosity-driven research involves seeking systematically and methodically for knowledge
without having any particular application in mind. A distinction is often made between pure basic research
and focused basic research, where the latter can be viewed as providing a platform for applications. Pure
research is not necessarily economically profitable in itself but may offer conditions for future innovations
and scientific breakthroughs.

Applied research: Applied research involves the systematic and methodical search for knowledge with a
specific application in mind.

Development work: In development work research findings are used to create a new product.

Sectorial research: Sectorial research includes all the concepts described above, restricted to a specific social
sector.

Two more terms are used in Sweden as well: contract research and research with individual responsibility.

Contract research: In contract research the focus of the project, its extent and the level of ambition are
determined by whoever commissions it.

Research with individual responsibility: Research with individual responsibility can be funded either by a
research council or through the budget of a higher education institution and refers to research that is justified
on sectorial or industrial grounds in which the aim is long-term development of knowledge. The researchers
themselves initiate this research and are responsible for its results.

Pure and applied research: Pure research (also known as basic or fundamental research) is exploratory
in nature and is conducted without any practical end-use in mind. It is driven by gut instinct, interest, curiosity
or intuition, and simply aims to advance knowledge and to identify/explain relationships between variables.
However, as the term "fundamental" suggests, pure research may provide a foundation for further, sometimes
applied research. In general, applied research is not carried out for its own sake but in order to solve specific,
practical questions or problems. It tends to be descriptive, rather than exploratory and is often based upon pure
research. However, the distinction between applied and pure research may sometimes be unclear; for example,
is research into the genetic codes of plants being carried out simply to advance knowledge or for possible
future commercial exploitation? It could be argued that the only real difference between these two categories
of research is the length of time between research and reasonably foreseeable practical applications, either in
the public or private sectors.

The terms quantitative research and qualitative research are commonly used within the research
community and implicitly indicate the nature of research being undertaken and the types of assumptions being
made. In reality, many research activities do not fall neatly into one or other category, as we shall discuss
later. However, as a staging post in our exploration of research, it is useful to discuss each term. The terms
will be explored in the next section of this theme.

Research and Scientific Method

For a clear perception of the term research, one should know the meaning of scientific method. The two
terms, research and scientific method, are closely related. Research, as we have already stated, can be
termed as an inquiry into the nature of, the reasons for, and the consequences of any particular set of
circumstances, whether these circumstances are experimentally controlled or recorded just as they occur.
Further, research implies the researcher is interested in more than particular results; he is interested in the
repeatability of the results and in their extension to more complicated and general situations. On the other
hand, the philosophy common to all research methods and techniques, although they may vary considerably
from one science to another, is usually given the name of scientific method. In this context, Karl Pearson
writes, "The scientific method is one and the same in the branches (of science) and that method is the method
of all logically trained minds … the unity of all sciences consists alone in its methods, not its material; the man
who classifies facts of any kind whatever, who sees their mutual relation and describes their sequences, is
applying the Scientific Method and is a man of science." Scientific method is the pursuit of truth as
determined by logical considerations. The ideal of science is to achieve a systematic interrelation of facts.
Scientific method attempts to achieve this ideal by experimentation, observation, logical arguments from
accepted postulates and a combination of these three in varying proportions. In scientific method, logic aids
in formulating propositions explicitly and accurately so that their possible alternatives become clear.
Further, logic develops the consequences of such alternatives, and when these are compared with observable
phenomena, it becomes possible for the researcher or the scientist to state which alternative is most in
harmony with the observed facts. All this is done through experimentation and survey investigations which
constitute the integral parts of scientific method.
Experimentation is done to test hypotheses and to discover new relationships, if any, among variables. But
the conclusions drawn on the basis of experimental data are generally criticized for faulty assumptions,

poorly designed experiments, badly executed experiments or faulty interpretations. As such the researcher
must pay all possible attention while developing the experimental design and must state only probable
inferences. The purpose of survey investigations may also be to provide scientifically gathered information
to work as a basis for the researchers for their conclusions.
The scientific method is, thus, based on certain basic postulates which can be stated as under:
1. It relies on empirical evidence;
2. It utilizes relevant concepts;
3. It is committed to only objective considerations;
4. It presupposes ethical neutrality, i.e., it aims at nothing but making only adequate and correct statements
about population objects;
5. It results in probabilistic predictions;
6. Its methodology is made known to all concerned for critical scrutiny and for use in testing the conclusions
through replication;
7. It aims at formulating most general axioms or what can be termed as scientific theories.

Thus, the scientific method encourages a rigorous, impersonal mode of procedure dictated by the demands
of logic and objective procedure. Accordingly, scientific method implies an objective, logical and
systematic method, i.e., a method free from personal bias or prejudice, a method to ascertain demonstrable
qualities of a phenomenon capable of being verified, a method wherein the researcher is guided by the rules
of logical reasoning, a method wherein the investigation proceeds in an orderly manner and a method that
implies internal consistency.
Two Research Fallacies
A fallacy is an error in reasoning, usually based on mistaken assumptions. Researchers should be familiar with
all the ways they could go wrong and the fallacies they are susceptible to. Here, I discuss two of the most
important.
important. The ecological fallacy occurs when you make conclusions about individuals based only on
analyses of group data. For instance, assume that you measured the math scores of a particular classroom
and found that they had the highest average score in the district. Later (probably at the mall) you run into
one of the kids from that class and you think to yourself, 'She must be a math whiz.' Aha! Fallacy! Just
because she comes from the class with the highest average doesn't mean that she is automatically a high-
scorer in math. She could be the lowest math scorer in a class that otherwise consists of math geniuses.
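The arithmetic behind the ecological fallacy can be sketched in a few lines of Python. The scores below are invented purely for illustration: the class with the highest average still contains the lowest individual scorer.

```python
# Hypothetical maths scores for two classes (invented data).
scores = {
    "class_a": [70, 72, 74, 76, 78],   # mean 74.0, no extreme scores
    "class_b": [40, 95, 96, 97, 98],   # mean 85.2, but one very low scorer
}

# Group-level analysis: which class has the highest average?
means = {name: sum(s) / len(s) for name, s in scores.items()}
top_class = max(means, key=means.get)
print(top_class)                    # class_b wins on the group average

# Individual-level reality: the top class's weakest pupil scores far
# below every pupil in the "worse" class.
print(min(scores[top_class]))       # 40
print(min(scores["class_a"]))       # 70
```

A conclusion about the group (class_b is best on average) simply does not transfer to an arbitrary individual drawn from that group.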
An exception fallacy is sort of the reverse of the ecological fallacy. It occurs when you reach a group
conclusion on the basis of exceptional cases. This kind of fallacious reasoning is at the core of a lot of
sexism and racism. The stereotype is of the guy who sees a woman make a driving error and concludes that
women are terrible drivers. Wrong! Fallacy!
Both of these fallacies point to some of the traps that exist in research and in everyday reasoning. They also
point out how important it is to do research. It is important to determine empirically how individuals
perform, rather than simply rely on group averages. Similarly, it is important to look at whether there are
correlations between certain behaviors and certain groups.

In logic, a distinction is often made between two broad methods of reasoning known as the deductive and
inductive approaches. Deductive reasoning works from the more general to the more specific. Sometimes
this is informally called a top-down approach. You might begin with thinking up a theory about your topic
of interest. You then narrow that down into more specific hypotheses that you can test. You narrow down
even further when you collect observations to address the hypotheses. This ultimately leads you to be able to
test the hypotheses with specific data: a confirmation (or not) of your original theories.
Inductive reasoning works the other way, moving from specific observations to broader generalizations and
theories. Informally, this is sometimes called a "bottom-up" approach. (Please note that it's "bottom up" and not
"bottoms up," which is the kind of thing the bartender says to customers when he's trying to close for the
night!) In inductive reasoning, you begin with specific observations and measures, begin detecting patterns
and regularities, formulate some tentative hypotheses that you can explore, and finally end up developing
some general conclusions or theories. These two methods of reasoning have a different feel to them when
you're conducting research. Inductive reasoning, by its nature, is more open-ended and exploratory,
especially at the beginning. Deductive reasoning is narrower in nature and is concerned with testing or
confirming hypotheses. Even though a particular study may look like it's purely deductive (for example, an
experiment designed to test the hypothesized effects of some treatment on some outcome), most social
research involves both inductive and deductive reasoning processes at some time in the project. Even in the
most constrained experiment, the researchers might observe patterns in the data that lead them to develop
new theories.
Observation → Pattern → Tentative Hypothesis → Theory

Ethics in Research
This is a time of profound change in the understanding of the ethics of applied social research. From the
time immediately after World War II until the early 1990s, there was a gradually developing consensus
about the key ethical principles that should underlie the research endeavor. Two marker events stand out
(among many others) as symbolic of this consensus. The Nuremberg War Crimes Trial following World
War II brought to public view the ways German scientists had used captive humans as subjects in
often gruesome experiments. In the 1950s and 1960s, the Tuskegee Syphilis Study involved the withholding
of known effective treatment for syphilis from African-American participants who were infected. Events
like these forced the reexamination of ethical standards and the gradual development of a consensus that
potential human subjects needed to be protected from being used as guinea pigs in scientific research.
By the 1990s, the dynamics of the situation changed. Cancer patients and persons with AIDS fought publicly
with the medical research establishment about the length of time needed to get approval for and complete
research into potential cures for fatal diseases. In many cases, it is the ethical assumptions of the previous
thirty years that drive this go-slow mentality. According to previous thinking, it is better to risk denying
treatment for a while until there is enough confidence in a treatment, than risk harming innocent people (as
in the Nuremberg and Tuskegee events). Recently, however, people threatened with fatal illness have been
saying to the research establishment that they want to be test subjects, even under experimental conditions of
considerable risk. Several vocal and articulate patient groups who wanted to be experimented on came up
against an ethical review system designed to protect them from being the subjects of experiments! Although
the last few years in the ethics of research have been tumultuous ones, a new consensus is beginning to
evolve that involves the stakeholder groups most affected by a problem participating more actively in the
formulation of guidelines for research. Although it's not entirely clear, at present, what the new consensus
will be, it is almost certain that it will not fall at either extreme: protecting against human experimentation at
all costs versus allowing anyone who is willing to be the subject of an experiment.
The Language of Ethics: As in every other aspect of research, the area of ethics has its own vocabulary. In
this section, I present some of the most important language regarding ethics in research. The principle of
voluntary participation requires that people not be coerced into participating in research. This is especially
relevant where researchers had previously relied on captive audiences for their subjects: prisons,
universities, and places like that. Closely related to the notion of voluntary participation is the requirement
of informed consent. Essentially, this means that prospective research participants must be fully informed
about the procedures and risks involved in research and must give their consent to participate. Ethical
standards also require that researchers not put participants in a situation where they might be at risk of harm
as a result of their participation. Harm can be defined as both physical and psychological. Two standards are
applied to help protect the privacy of research participants. Almost all research guarantees the participants
confidentiality; they are assured that identifying information will not be made available to anyone who is
not directly involved in the study. The stricter standard is the principle of anonymity, which essentially
means that the participant will remain anonymous throughout the study, even to the researchers themselves.
Clearly, the anonymity standard is a stronger guarantee of privacy, but it is sometimes difficult to
accomplish, especially in situations where participants have to be measured at multiple time points (for
example in a pre-post study). Increasingly, researchers have had to deal with the ethical issue of a person's
right to service. Good research practice often requires the use of a no-treatment control group: a group of
participants who do not get the treatment or program that is being studied. But when that treatment or
program may have beneficial effects, persons assigned to the no-treatment control may feel their rights to
equal access to services are being curtailed. Even when clear ethical standards and principles exist, at times
the need to do accurate research runs up against the rights of potential participants. No set of standards can
possibly anticipate every ethical circumstance. Furthermore, there needs to be a procedure that assures that
researchers will consider all relevant ethical issues in formulating research plans. To address such needs
most institutions and organizations have formulated an Institutional Review Board (IRB), a panel of
persons who reviews grant proposals with respect to ethical implications and decides whether additional
actions need to be taken to assure the safety and rights of participants. By reviewing proposals for research,
IRBs also help protect the organization and the researcher against potential legal implications of neglecting
to address important ethical issues of participants.

Concept: An abstraction encompassing observed events; a word that represents the similarities or common
aspects of objects or events that are otherwise quite different from one another. The purpose of a concept is to
simplify thinking by including a number of events (or the common aspects of otherwise diverse things) under
one general heading. Ex.: chair, dog, tree, liquid, doughnut, etc.

Concepts are abstract ideas which have been "defined" according to particular characteristics or
generalizations (constructs) about them.

Construct: Constructs are the highest-level abstractions of complicated objects and events, created
by combining concepts and less complex constructs. They are used to account for observed regularities and
relationships, and to summarize observations and explanations. A construct is a concept with the added meaning
of having been deliberately and consciously invented or adopted for a special scientific purpose. Two things
characterize a construct:

1) it enters into theoretical schemes and is theoretically related in various ways to other constructs.

2) it is defined and specified so that it may be observed or measured.

Scientists measure things in three classes: direct observables, indirect observables (not experienced or
observed first hand), and constructs. Constructs are theoretical creations based on observations, but they
cannot be observed either directly or indirectly. Ex.: motivation, visual acuity, justice, problem-solving
ability.

A construct is based on concepts, or can be thought of as a conceptual model that has measurable aspects.
This will allow the researcher to "measure" the concept and have a common, acceptable platform when other
researchers do similar research. Constructs are built from the logical combination of a number of more
observable concepts. In the case of source credibility, we could define the construct as the combination of the
concepts of expertise, objectivity, and status. Each of these concepts can be more directly observed in an
individual. We might also consider some of these terms to be constructs themselves, and break them down
into combinations of still more concrete concepts.

Advertising effectiveness is a construct, and related concepts would be brand awareness and consumer
behavior. Pain is a concept, a theoretical model of pain would be a construct, and a pain assessment
tool would give a measurable variable.
Some definitions of constructs:

Oxford def. a) tolerant; liberal; b) giving permission

Experimental def. extending the boundaries of acceptable findings.

Measured def. confining the boundaries of acceptable findings.

Oxford def. Strengthen or support, especially with additional personnel, material etc.

Experimental def. To build credibility by strengthening your research findings.

Measured def. To build structural credibility to strengthen your research findings.

Oxford def. none

Experimental def. To understand through written text the research findings.

Measured def. same as above

Oxford def. a) something achieved; b) act of achieving. achieve: a) attain by effort, acquire, gain, earn;
b) accomplish.

Experimental def. To accomplish what you have set out to prove.

Measured def. To put your research findings into a written format to be used as documentation.

Oxford def. 1) a) curiosity, concern b) quality exciting curiosity c) noteworthiness, importance 2) subject or
hobby in which one is concerned 3) advantage or profit 4) self-interest; to excite the curiosity or attention;
to take a personal interest.

Experimental def. An educational topic that concerns you and is worthy of your research.

Measured def. same as above

Oxford def. (archaic) necessity, requirement

Experimental def. The requirements of research in order for it to be valid.

Measured def. The requirements for charting the research findings in order to give documented results.

Oxford def. none

Experimental def. To give another researcher the information needed so that they can continue
researching from where you left off.

Measured def. same as above

Oxford def. A person that leads or is followed by others.

Experimental def. A person who has the ability to direct others through a research experiment, and
who will set the tone of the research.

Measured def. A person who will design the way in which the research findings will be documented.

Oxford def. none

Experimental def. The tone of a class setting that will allow for similar testing conditions.

Measured def. same as above

Oxford def. Failing in one's duty

Experimental def. Failing to plan out the way you are going to conduct your research so that your
findings are valid.

Measured def. Failing to document your findings in a way that another researcher can duplicate your
research findings.

Oxford def. none

Experimental def. Lack of agreement within a research group.

Measured def. Lack of consistent findings and charting.

Operational Definition: It gives meaning to a concept or construct by specifying the operations that must
be performed in order to measure or manipulate the concept, since the data collected during research are in
terms of observable events. It defines or gives meaning to a variable by spelling out what the investigator
must do to measure it. Operational definitions are essential to research because they permit investigators to
measure abstract concepts and constructs, and permit scientists to move from the level of constructs and
theory to the level of observation. An operational definition translates the verbal meaning provided by the
theoretical definition into a prescription for measurement. Although they may be expressed verbally,
operational definitions are fundamentally statements that describe measurement and mathematical
operations. An operational definition describes the unit of measurement. Examples of units of measurement
are minutes (to measure time), word counts (to measure newspaper coverage of a particular event), and
percent correct responses.

The operational definition must be very closely associated with the theoretical definition. It must state clearly
how observations will be made so they will reflect as fully as possible the meaning associated with the verbal
concept or construct. The operational definition must tell us how to observe and quantify the concept in the
real world. This connection between theoretical and operational definitions is quite critical. This connection
establishes the validity of the measurement.
Two Types of Operational Definitions:

Measured Operational Definition: Operations by which investigators may measure a concept.

Experimental Operational Definition: Steps taken by a researcher to produce certain experimental
conditions.


Examples of an Operational Definition: Measured Operational Definition: An actual (score) value from a test
or questionnaire the researchers would develop to measure hunger.

Experimental Operational Definition: A manipulated scenario to produce the condition of hunger (such as
preventing the subject from consuming anything for x number of hours).
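The difference between a theoretical construct and its measured operational definition can be made concrete as a scoring rule. The sketch below is purely illustrative: the questionnaire items, their 0-4 rating scale, and the simple sum are invented here, not taken from any published instrument.

```python
# Hypothetical measured operational definition of "hunger":
# the construct is operationalized as the sum of three self-report
# items, each rated 0 (not at all) to 4 (extremely).

def hunger_score(stomach_discomfort: int,
                 urge_to_eat: int,
                 preoccupation_with_food: int) -> int:
    """Return a hunger score in the range 0-12 (higher = hungrier)."""
    for item in (stomach_discomfort, urge_to_eat, preoccupation_with_food):
        if not 0 <= item <= 4:
            raise ValueError("each item must be rated 0-4")
    return stomach_discomfort + urge_to_eat + preoccupation_with_food

# The abstract construct is now observable: any two researchers applying
# the same operations to the same responses obtain the same number.
print(hunger_score(3, 4, 2))   # 9
```

This is exactly what an operational definition buys the researcher: a prescription for measurement that others can apply, check, and replicate.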

Without knowing explicitly what a term means, we cannot evaluate the research or determine whether the
researcher has carried out what was proposed in the problem statement. We need not necessarily agree with
such a definition, but as long as we know what the researcher means when using the term, we are able to
understand and appraise it appropriately. A formal definition contains three parts: (a) the term to be defined;
(b) the genera, or the general class to which the concept being defined belongs; and (c) the differentia, the
specific characteristics or traits that distinguish the concept being defined from all other members of the
general class.
Variable: Characteristics or attributes of an object, individual or organization that can be measured or
observed, and that vary among those objects or individuals being studied. They possess values and levels
(the dimensions on which they vary). The concepts that are of interest in a study become the variables for
that study.
Different Kinds of Variables:

Dichotomous: Two valued variables. Example: Sex (male/female)

Polytomous: Multiple values for variables. Example: Religion (Catholicism, Islam, Judaism,
Hinduism, Buddhism, etc)

Continuous: A variable that takes on an infinite number of values within a range. Example: height.

Independent: The variable manipulated by the experimenter (also: Experimental, Predictor,
Manipulated, Antecedent, Treatment).

Active: Any variable that is manipulated by the researcher

Attribute: Any variable that cannot be manipulated by the researcher. For example, all human
characteristics are attribute variables: intelligence, sex, socioeconomic status etc.

Dependent: The dependent variable is the phenomenon that is the object of study and investigation
(also: Outcome, Response, Criterion, Effect).

Categorical: Referred to as nominal measurements. One creates categories, and classifies all
variables that fall under this definition without rank order. All variables under the same category are
considered of equal value, and not differentiated.

Latent: An unobserved entity that stands between the independent variable and the dependent
variable, and mediates the effect of the independent variable on the dependent variable. It is dependent
on the independent variable as well as other constructs, yet still plays a role in determining the outcome
(possibly: Intervening, Mediating, Hypothetical construct).

Control: An independent variable that is measured in a study because it potentially influences the
dependent variable. It is a more clearly defined independent variable, included in an attempt to eliminate
bias in regard to its effects on the dependent variable. (Keeps the study in check.)

Confounding: Variables not actually measured or observed in a study; they exist, yet their influence
cannot be directly detected or understood in the study. One often becomes aware of a confounding
variable at the end of a study, on realizing that there is an effect that was not measured or accounted for
but should be addressed.
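The distinctions above can be illustrated with a small invented data set (all names and values here are hypothetical, chosen only to match the examples in the list):

```python
# Tiny invented data set of study participants.
participants = [
    {"sex": "female", "religion": "Islam",    "height_cm": 162.5, "hours_studied": 3.0, "score": 71},
    {"sex": "male",   "religion": "Catholic", "height_cm": 178.0, "hours_studied": 1.5, "score": 58},
    {"sex": "female", "religion": "Hindu",    "height_cm": 170.2, "hours_studied": 4.0, "score": 83},
]

# sex is dichotomous (two possible values), religion is polytomous
# (many values), and height_cm is continuous (any value in a range).
sex_values = {p["sex"] for p in participants}
religion_values = {p["religion"] for p in participants}
print(len(sex_values))        # 2  -> dichotomous
print(len(religion_values))   # 3  -> polytomous

# In a study of study time and test performance, hours_studied would be
# the independent (manipulated, active) variable and score the dependent
# (outcome) variable; sex and religion are attribute variables, since
# the researcher cannot manipulate them.
```

Note that the same characteristic can play different roles in different studies; "independent" and "dependent" describe a variable's role in a particular design, not a fixed property of the variable itself.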

Every Problem Needs Further Delineation
To comprehend fully the meaning of the problem, the researcher should eliminate any possibility of
misunderstanding by:
Stating the hypotheses and/or research questions: Describing the specific hypotheses being
tested or questions being asked.

Delimiting the research: Fully disclosing what the researcher intends to do and, conversely,
does not intend to do.

Defining the terms: Giving the meanings of all terms in the statements of the problem and
subproblems that have any possibility of being misunderstood.

Stating the assumptions: Presenting a clear statement of all assumptions on which the research
will rest.

These matters facilitate understanding of the research and are collectively called the setting of the problem.

Theoretical conceptualizations are almost as numerous as the researchers conducting research and using
theories. Researchers view theory in different ways and across different research disciplines. The systematic
nature of theory is to provide explanatory leverage on a problem, to describe innovative features of a
phenomenon, or to provide predictive utility; a theory can be defined as a group of logically organized laws
or relationships that constitutes explanation in a discipline. Theory that is driven by research is directly
relevant to practice and beneficial to the field. Although substantive theory is often used as a theoretical
framework and a strategic link in the formulation and generation of grounded formal theory, grounded
theory emphasizes the concept of emergence that inspires new research. It is important to note that the
process of grounded theory and substantive theory produces four primary constructs: (a) heuristics
(expansion of the existing body of knowledge, discovery, and problem solving), (b) description, (c)
delimitation, and (d) parsimony. The relationship between theory, practice, and research is central to the
discussion of theory as a conceptualized cycle of development and facilitation.
However, theory is speculative, and one's theory seems to follow one's chosen philosophical commitment,
even to a degree that advocates of different philosophical stances do not necessarily understand each other's
conceptions of theory. The theoretical process puts boundaries on what is examined or studied.
Stating the Hypotheses and/or Research Questions:
Hypotheses are tentative, intelligent guesses posited for the purpose of directing one's thinking toward the
solution of the problem; they are necessary in searching for relevant data and in establishing a tentative goal.
Hypotheses are neither proved nor disproved. They are nothing more than tentative propositions set forth to
assist in guiding the investigation of a problem or to provide possible explanations for the observations made,
on the basis of which the hypotheses are either accepted or rejected.

Hypotheses have nothing to do with proof. Their acceptance or rejection depends on what the data, and the
data alone, ultimately reveal. Hypotheses may originate in the subproblems, often in a one-to-one
correspondence. A hypothesis provides a position from which the researcher begins to explore the problem
and subproblems, and a checkpoint against which to test the findings that the data reveal in order to accept or
reject the hypothesis.

If the data do not support the research hypothesis, don't be disturbed; it merely means that the educated guess
about the outcome of the investigation was incorrect. Frequently, rejected hypotheses are a source of genuine
and gratifying surprise, a truly unexpected discovery.

Null Hypothesis: It is an indicator only; it reveals whether some influences, forces, or factors have resulted
in a statistical difference, or whether no such difference exists.
Null Hypothesis Dynamics:

If null hypothesis shows the presence of dynamics, then the next logical questions are as follows:

What are these dynamics?

What is their nature?

How can they be isolated and studied?

For example, let's say that a team of social workers believe that one type of after-school programme
for teenagers (we'll call it Programme A) is more effective than another programme (we'll call it
Programme B) in terms of reducing high school dropout rates.

Suppose the null hypothesis, stating that there will be no difference in the high school graduation
rates of teenagers enrolled in Programme A and those enrolled in Programme B, has been rejected.
This is encouraging news, but it is only a mezzanine (intermediate) conclusion.

What specifically were the factors within the programme that caused the null hypothesis to be rejected?

These fundamental questions will uncover facts that may lie very close to the discovery of new
substantive knowledge, the purpose of all research.

Description and explanation

Social researchers ask two fundamental types of research questions:
1 What is going on (descriptive research)?
2 Why is it going on (explanatory research)?
Descriptive research
Although some people dismiss descriptive research as `mere description', good description is
fundamental to the research enterprise and it has added immeasurably to our knowledge of the shape
and nature of our society. Descriptive research encompasses much government sponsored research
including the population census, the collection of a wide range of social indicators and economic
information such as household expenditure patterns, time use studies, employment and crime statistics
and the like.
Descriptions can be concrete or abstract. A relatively concrete description might describe the ethnic
mix of a community, the changing age of a people or the gender mix of a workplace. Alternatively the
description might ask more abstract questions such as `Is the level of social inequality increasing or
declining?', `How secular is society?' or `How much poverty is there in this community?' Accurate
descriptions of the level of unemployment or poverty have historically played a key role in social
policy reforms (Marsh, 1982). By demonstrating the existence of social problems, competent
description can challenge accepted assumptions about the way things are and can provoke action.
Good description provokes the `why' questions of explanatory research. If we detect greater social
polarization over the last 20 years (i.e. the rich are getting richer and the poor are getting poorer) we
are forced to ask `Why is this happening?' But before asking `why?' we must be sure about the fact
and dimensions of the phenomenon of increasing polarization. It is all very well to develop elaborate
theories as to why society might be more polarized now than in the recent past, but if the basic premise
is wrong (i.e. society is not becoming more polarized) then attempts to explain a non-existent
phenomenon are silly. Of course description can degenerate to mindless fact gathering or what C.W.
Mills (1959) called `abstracted empiricism'. There are plenty of examples of unfocused surveys and
case studies that report trivial information and fail to provoke any `why' questions or provide any basis
for generalization. However, this is a function of inconsequential descriptions rather than an indictment
of descriptive research itself.
Explanatory research
Explanatory research focuses on why questions. For example, it is one thing to describe the crime rate
in a country, to examine trends over time or to compare the rates in different countries. It is quite a
different thing to develop explanations about why the crime rate is as high as it is, why some types of
crime are increasing or why the rate is higher in some countries than in others. The way in which
researchers develop research designs is fundamentally affected by whether the research question is
descriptive or explanatory. It affects what information is collected. For example, if we want to explain
why some people are more likely to be apprehended and convicted of crimes we need to have hunches

about why this is so. We may have many possibly incompatible hunches and will need to collect
information that enables us to see which hunches work best empirically.
Answering the `why' questions involves developing causal explanations.
Causal explanations argue that phenomenon Y (e.g. income level) is affected by factor X (e.g. gender).
Some causal explanations will be simple while others will be more complex. For example, we might
argue that there is a direct effect of gender on income (i.e. simple gender discrimination). We might
argue for a causal chain, such as that gender affects choice of field of training, which in turn affects
occupational options, which are linked to opportunities for promotion, which in turn affect income
level. Or we could posit a more complex model involving a number of interrelated causal chains.
Prediction, correlation and causation
People often confuse correlation with causation. Simply because one event follows another, or two
factors co-vary, does not mean that one causes the other. The link between two events may be
coincidental rather than causal.
The divorce rate changed over the twentieth century, and the crime rate increased a few years later. But this
does not mean that divorce causes crime. Rather than divorce causing crime, divorce and crime rates
might both be due to other social factors. Students at fee paying private schools typically perform better in
their final year of schooling than those at government funded schools. But this need not be because
private schools produce better performance. It may be that attending a private school and better final-
year performance are both the outcome of some other cause (see later discussion).
Confusing causation with correlation also confuses prediction with causation and prediction with
explanation. Where two events or characteristics are correlated we can predict one from the other.
Knowing the type of school attended improves our capacity to predict academic achievement. But this
does not mean that the school type affects academic achievement. Predicting performance on the basis
of school type does not tell us why private school students do better. Good prediction does not depend
on causal relationships. Nor does the ability to predict accurately demonstrate anything about causality.
Recognizing that causation is more than correlation highlights a problem. While we can observe
correlation we cannot observe cause. We have to infer cause. These inferences however are
`necessarily fallible . . . [they] are only indirectly linked to observables' (Cook and Campbell, 1979:
10). Because our inferences are fallible we must minimize the chances of incorrectly saying that a
relationship is causal when in fact it is not. One of the fundamental purposes of research design in
explanatory research is to avoid invalid inferences.
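The private-school example can be simulated to show how a common cause produces correlation without causation. A minimal Python sketch under stated assumptions: the data are invented, z (say, family background) drives both x (a school-type score) and y (final-year performance), and neither x nor y causes the other.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

z = [random.gauss(0, 1) for _ in range(5000)]   # common cause (e.g. family background)
x = [zi + random.gauss(0, 1) for zi in z]       # "school type": caused by z, not by y
y = [zi + random.gauss(0, 1) for zi in z]       # "performance": caused by z, not by x
print(round(pearson(x, y), 1))                  # close to 0.5: correlated, yet causally unlinked
```

Knowing x here genuinely improves prediction of y, which illustrates the point above that good prediction demonstrates nothing about causality.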

What is research design?

How is the term `research design' to be used in this book? An analogy might help. When constructing
a building there is no point ordering materials or setting critical dates for completion of project stages
until we know what sort of building is being constructed. The first decision is whether we need a high
rise office building, a factory for manufacturing machinery, a school, a residential home or an
apartment block. Until this is done we cannot sketch a plan, obtain permits, work out a work schedule
or order materials.
Similarly, social research needs a design or a structure before data collection or analysis can
commence. A research design is not just a work plan. A work plan details what has to be done to
complete the project, but the work plan will flow from the project's research design. The function of a
research design is to ensure that the evidence obtained enables us to answer the initial question as
unambiguously as possible. Obtaining relevant evidence entails specifying the type of evidence needed
to answer the research question, to test a theory, to evaluate a programme or to accurately describe

some phenomenon. In other words, when designing research we need to ask: given this research
question (or theory), what type of evidence is needed to answer the question (or test the theory) in a
convincing way?
Research design deals with a logical problem and not a logistical problem (Yin, 1989: 29). Before a
builder or architect can develop a work plan or order materials they must first establish the type of
building required, its uses and the needs of the occupants. The work plan flows from this. Similarly, in social
research the issues of sampling, method of data collection (e.g. questionnaire, observation, and
document analysis), and design of questions are all subsidiary to the matter of `What evidence do I
need to collect?'
Too often researchers design questionnaires or begin interviewing far too early before thinking
through what information they require to answer their research questions. Without attending to these
research design matters at the beginning, the conclusions drawn will normally be weak and
unconvincing and fail to answer the research question.

Criteria of Good Research

Whatever may be the types of research works and studies, one thing that is important is that they all meet on
the common ground of scientific method employed by them. One expects scientific research to satisfy the
following criteria:
1. The purpose of the research should be clearly defined and common concepts be used.
2. The research procedure used should be described in sufficient detail to permit another researcher to repeat
the research for further advancement, keeping the continuity of what has already been attained.
3. The procedural design of the research should be carefully planned to yield results that are as objective as possible.
4. The researcher should report with complete frankness, flaws in procedural design and estimate their effects
upon the findings.
5. The analysis of data should be sufficiently adequate to reveal its significance and the methods of analysis
used should be appropriate. The validity and reliability of the data should be checked carefully.
6. Conclusions should be confined to those justified by the data of the research and limited to those for which
the data provide an adequate basis.
7. Greater confidence in research is warranted if the researcher is experienced, has a good reputation in
research and is a person of integrity.
In other words, we can state the qualities of a good research as under:
1. Good research is systematic: It means that research is structured with specified steps to be taken in a
specified sequence in accordance with a well defined set of rules. The systematic characteristic of research
does not rule out creative thinking, but it certainly does reject the use of guessing and intuition in arriving at
conclusions.
2. Good research is logical: This implies that research is guided by the rules of logical reasoning and the
logical process of induction and deduction are of great value in carrying out research. Induction is the process
of reasoning from a part to the whole whereas deduction is the process of reasoning from some premise to a
conclusion which follows from that very premise. In fact, logical reasoning makes research more meaningful
in the context of decision making.
3. Good research is empirical: It implies that research is related basically to one or more aspects of a real
situation and deals with concrete data that provides a basis for external validity to research results.
4. Good research is replicable: This characteristic allows research results to be verified by replicating the
study and thereby building a sound basis for decisions.
Research approaches:

Research approaches are plans and the procedures for research that span the steps from broad assumptions
to detailed methods of data collection, analysis, and interpretation. This plan involves several decisions, and
they need not be taken in the order in which they make sense to me and the order of their presentation here.
The overall decision involves which approach should be used to study a topic. Informing this decision should
be the philosophical assumptions the researcher brings to the study; procedures of inquiry (called research
designs); and specific research methods of data collection, analysis, and interpretation. The selection of a
research approach is also based on the nature of the research problem or issue being addressed, the
researchers personal experiences, and the audiences for the study. Thus, in this book, research approaches,
research designs, and research methods are three key terms that represent a perspective about research that
presents information in a successive way from broad constructions of research to the narrow procedures of
methods.

Often the distinction between qualitative research and quantitative research is framed in terms of using
words (qualitative) rather than numbers (quantitative), or using closed-ended questions (quantitative hypoth-
eses) rather than open-ended questions (qualitative interview questions). A more complete way to view the
gradations of differences between them is in the basic philosophical assumptions researchers bring to the
study, the types of research strategies used in the research (e.g., quantitative experiments or qualitative case
studies), and the specific methods employed in conducting these strategies (e.g., collecting data quantitatively
on instruments versus collecting qualitative data through observing a setting). Moreover, there is a historical
evolution to both approaches, with the quantitative approaches dominating the forms of research in the social
sciences from the late 19th century up until the mid-20th century. During the latter half of the 20th century,
interest in qualitative research increased and along with it, the development of mixed methods research. With
this background, it should prove helpful to view definitions of these three key terms as used in this book:
Qualitative research is an approach for exploring and understanding the meaning individuals or
groups ascribe to a social or human problem. The process of research involves emerging questions and
procedures, data typically collected in the participants' setting, data analysis inductively building from
particulars to general themes, and the researcher making interpretations of the meaning of the data. The final
written report has a flexible structure. Those who engage in this form of inquiry support a way of looking at
research that honors an inductive style, a focus on individual meaning, and the importance of rendering the
complexity of a situation.
Quantitative research is an approach for testing objective theories by examining the relationship
among variables. These variables, in turn, can be measured, typically on instruments, so that numbered data
can be analyzed using statistical procedures. The final written report has a set structure consisting of
introduction, literature and theory, methods, results, and discussion. Like qualitative researchers, those who
engage in this form of inquiry have assumptions about testing theories deductively, building in protections
against bias, controlling for alternative explanations, and being able to generalize and replicate the findings.

Three components involved in an approach

Two important components in each definition are that the approach to research involves philosophical
assumptions as well as distinct methods or procedures. The broad research approach is the plan or proposal
to conduct research, which involves the intersection of philosophy, research designs, and specific methods. A
framework is used to explain the interaction of these three components. To reiterate, in planning a study,

researchers need to think through the philosophical worldview assumptions that they bring to the study, the
research design that is related to this worldview, and the specific methods or procedures of research that trans-
late the approach into practice.

Philosophical Worldviews

Although philosophical ideas remain largely hidden in research (Slife & Williams, 1995), they still
influence the practice of research.
[Figure: A Framework for Research, showing the interconnection of worldviews, research designs
(quantitative, e.g., experiments; qualitative, e.g., ethnographies), and research methods (questions, data
collection, data analysis, interpretation, validation) in the selection of a research approach.]

Worldviews arise based on discipline orientations, students' advisors'/mentors' inclinations, and past research
experiences. The types of beliefs held by individual researchers based on these factors will often lead to
embracing a qualitative or quantitative approach in their research.

Research Designs

The researcher not only selects a qualitative, quantitative, or mixed methods study to conduct; the inquirer
also decides on a type of study within these three choices. Research designs are types of inquiry within
qualitative, quantitative, and mixed methods approaches that provide specific direction for procedures in a
research design. Others
have called them strategies of inquiry (Denzin & Lincoln, 2011). The designs available to the researcher have
grown over the years as computer technology has advanced our data analysis and ability to analyze complex
models and as individuals have articulated new procedures for conducting social science research.

Quantitative Designs
During the late 19th and throughout the 20th century, strategies of inquiry associated with quantitative
research were those that invoked the postpositivist worldview and that originated mainly in psychology. These
include true experiments and the less rigorous experiments called quasi-experiments (see, an original, early
treatise on this, Campbell & Stanley, 1963). An additional experimental design is applied behavioral analysis
or single-subject experiments in which an experimental treatment is administered over time to a single
individual or a small number of individuals (Cooper, Heron, & Heward, 2007; Neuman & McCormick, 1995).
One type of non-experimental quantitative research is causal-comparative research in which the investigator
compares two or more groups in terms of a cause (or independent variable) that has already happened. Another
non-experimental form of research is the correlational design in which investigators use the correlational
statistic to describe and measure the degree or association (or relationship) between two or more variables or
sets of scores (Creswell, 2012). These designs have been elaborated into more complex relationships among
variables found in techniques of structural equation modeling, hierarchical linear modeling, and logistic
regression. More recently, quantitative strategies have involved complex experiments with many variables
and treatments (e.g., factorial designs and repeated measure designs). They have also included elaborate struc-
tural equation models that incorporate causal paths and the identification of the collective strength of multiple
variables. Rather than discuss all of these quantitative approaches, I will focus on two designs: surveys and
experiments.

Survey research provides a quantitative or numeric description of trends, attitudes, or opinions of a
population by studying a sample of that population. It includes cross-sectional and longitudinal studies using
questionnaires or structured interviews for data collection, with the intent of generalizing from a sample to
a population (Fowler, 2008).
Experimental research seeks to determine if a specific treatment influences an outcome. The researcher
assesses this by providing a specific treatment to one group and withholding it from another and then
determining how both groups scored on an outcome. Experiments include true experiments, with the random
assignment of subjects to treatment conditions, and quasi-experiments that use nonrandomized assignments
(Keppel, 1991). Included within quasi-experiments are single-subject designs.
Qualitative Designs
In qualitative research, the numbers and types of approaches have also become more clearly visible during the
1990s and into the 21st century. The historic origin for qualitative research comes from anthropology,
sociology, the humanities, and evaluation. Books have summarized the various types, and complete
procedures are now available on specific qualitative inquiry approaches. For example, Clandinin and Connelly
(2000) constructed a picture of what narrative researchers do. Moustakas (1994) discussed the philosophical
tenets and the procedures of the phenomenological method; Charmaz (2006), Corbin and Strauss (2007), and
Strauss and Corbin (1990, 1998) identified the procedures of grounded theory. Fetterman (2010) and Wolcott
(2008) summarized ethnographic procedures and the many faces and research strategies of ethnography, and
Stake (1995) and Yin (2009, 2012) suggested processes involved in case study research. In this book,
illustrations are drawn from the following strategies, recognizing that approaches such as participatory action
research (Kemmis & McTaggart, 2000), discourse analysis (Cheek, 2004), and others not mentioned are also
viable ways to conduct qualitative studies:
Narrative research is a design of inquiry from the humanities in which the researcher studies the lives of
individuals and asks one or more individuals to provide stories about their lives (Riessman, 2008). This
information is then often retold or restoried by the researcher into a narrative chronology. Often, in the end,
the narrative combines views from the participant's life with those of the researcher's life in a collaborative
narrative (Clandinin & Connelly, 2000).
Phenomenological research is a design of inquiry coming from philosophy and psychology in which the
researcher describes the lived experiences of individuals about a phenomenon as described by participants.
This description culminates in the essence of the experiences for several individuals who have all
experienced the phenomenon. This design has strong philosophical underpinnings and typically involves
conducting interviews (Giorgi, 2009; Moustakas, 1994).
Grounded theory is a design of inquiry from sociology in which the researcher derives a general, abstract
theory of a process, action, or interaction grounded in the views of participants. This process involves using
multiple stages of data collection and the refinement and interrelationship of categories of information
(Charmaz, 2006; Corbin & Strauss, 2007).
Ethnography is a design of inquiry coming from anthropology and sociology in which the researcher studies
the shared patterns of behaviors, language, and actions of an intact cultural group in a natural setting over a
prolonged period of time. Data collection often involves observations and interviews.
Case studies are a design of inquiry found in many fields, especially evaluation, in which the researcher
develops an in-depth analysis of a case, often a program, event, activity, process, or one or more individuals.
Cases are bounded by time and activity, and researchers collect detailed information using a variety of data
collection procedures over a sustained period of time (Stake, 1995; Yin, 2009, 2012).

Formulating the research problem: There are two types of research problems, viz., those which relate to
states of nature and those which relate to relationships between variables. At the very outset the researcher
must single out the problem he wants to study, i.e., he must decide the general area of interest or aspect of a
subject-matter that he would like to inquire into. Initially the problem may be stated in a broad general way
and then the ambiguities, if any, relating to the problem be resolved. Then, the feasibility of a particular
solution has to be considered before a working formulation of the problem can be set up. The formulation of
a general topic into a specific research problem, thus, constitutes the first step in a scientific enquiry.
Essentially two steps are involved in formulating the research problem, viz., understanding the problem
thoroughly, and rephrasing the same into meaningful terms from an analytical point of view.
The best way of understanding the problem is to discuss it with one's own colleagues or with those having
some expertise in the matter. In an academic institution the researcher can seek the help from a guide who is
usually an experienced man and has several research problems in mind.
Often, the guide puts forth the problem in general terms and it is up to the researcher to narrow it down and
phrase the problem in operational terms. In private business units or in governmental organizations, the
problem is usually earmarked by the administrative agencies with whom the researcher can discuss as to
how the problem originally came about and what considerations are involved in its possible solutions.
The researcher must at the same time examine all available literature to get himself acquainted with the
selected problem. He may review two types of literature: the conceptual literature concerning the concepts
and theories, and the empirical literature consisting of studies made earlier which are similar to the one
proposed. The basic outcome of this review will be the knowledge as to what data and other materials are
available for operational purposes which will enable the researcher to specify his own research problem in a
meaningful context. After this the researcher rephrases the problem into analytical or operational terms i.e.,
to put the problem in as specific terms as possible. This task of formulating, or defining, a research problem
is a step of greatest importance in the entire research process. The problem to be investigated must be
defined unambiguously for that will help discriminating relevant data from irrelevant ones. Care must,
however, be taken to verify the objectivity and validity of the background facts concerning the problem.
Professor W.A. Neiswanger correctly states that the statement of the objective is of basic importance
because it determines the data which are to be collected, the characteristics of the data which are relevant,
relations which are to be explored, the choice of techniques to be used in these explorations and the form of
the final report. If there are certain pertinent terms, the same should be clearly defined along with the task of
formulating the problem. In fact, formulation of the problem often follows a sequential pattern where a
number of formulations are set up, each formulation more specific than the preceding one, each one phrased
in more analytical terms, and each more realistic in terms of the available data and resources.


Sampling is the process of selecting units (e.g., people, organizations) from a population of interest so that by
studying the sample we may fairly generalize our results back to the population from which they were chosen.
Let's begin by covering some of the key terms in sampling like "population" and "sampling frame." Then,
because some types of sampling rely upon quantitative models, we'll talk about some of the statistical terms
used in sampling. Finally, we'll discuss the major distinction between probability and nonprobability sampling
methods and work through the major types in each.

Probability Sampling

A probability sampling method is any method of sampling that utilizes some form of random selection. In
order to have a random selection method, you must set up some process or procedure that assures that the
different units in your population have equal probabilities of being chosen. Humans have long practiced
various forms of random selection, such as picking a name out of a hat, or choosing the short straw. These
days, we tend to use computers as the mechanism for generating random numbers as the basis for random
selection.

Some Definitions

Before I can explain the various probability methods we have to define some basic terms. These are:

N = the number of cases in the sampling frame

n = the number of cases in the sample
NCn = the number of combinations (subsets) of n from N
f = n/N = the sampling fraction

That's it. With those terms defined we can begin to define the different probability sampling methods.
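As a quick illustration of these definitions, the following Python fragment plugs in hypothetical numbers (a frame of N = 1000 and a sample of n = 100 are our own choices, not the source's):

```python
import math

N = 1000                    # number of cases in the sampling frame
n = 100                     # number of cases in the sample
f = n / N                   # the sampling fraction
subsets = math.comb(N, n)   # NCn: how many distinct samples of size n are possible
print(f)                    # 0.1
print(subsets > 10**100)    # True: the number of possible samples is astronomical
```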

Simple Random Sampling

The simplest form of random sampling is called simple random sampling. Pretty tricky, huh? Here's the
quick description of simple random sampling:

Objective: To select n units out of N such that each NCn has an equal chance of being selected.
Procedure: Use a table of random numbers, a computer random number generator, or a mechanical
device to select the sample.

A somewhat stilted, if accurate, definition.

Let's see if we can make it a little more real.
How do we select a simple random sample?
Let's assume that we are doing some research
with a small service agency that wishes to
assess clients' views of quality of service over
the past year. First, we have to get the
sampling frame organized. To accomplish
this, we'll go through agency records to identify every client over the past 12 months. If we're lucky, the
agency has good accurate computerized records and can quickly produce such a list. Then, we have to actually
draw the sample. Decide on the number of clients you would like to have in the final sample. For the sake of
the example, let's say you want to select 100 clients to survey and that there were 1000 clients over the past
12 months. Then, the sampling fraction is f = n/N = 100/1000 = .10 or 10%. Now, to actually draw the sample,
you have several options. You could print off the list of 1000 clients, tear them into separate strips, put the
strips in a hat, mix them up really well, close your eyes and pull out the first 100. But this mechanical procedure
would be tedious and the quality of the sample would depend on how thoroughly you mixed them up and how
randomly you reached in. Perhaps a better procedure would be to use the kind of ball machine that is popular
with many of the state lotteries. You would need three sets of balls numbered 0 to 9, one set for each of the
digits from 000 to 999 (if we select 000 we'll call that 1000). Number the list of names from 1 to 1000 and
then use the ball machine to select the three digits that selects each person. The obvious disadvantage here is
that you need to get the ball machines. (Where do they make those things, anyway? Is there a ball machine industry?)

Neither of these mechanical procedures is very feasible and, with the development of inexpensive computers
there is a much easier way. Here's a simple procedure that's especially useful if you have the names of the
clients already on the computer. Many computer programs can generate a series of random numbers. Let's
assume you can copy and paste the list of client names into a column in an EXCEL spreadsheet. Then, in the
column right next to it paste the function =RAND() which is EXCEL's way of putting a random number
between 0 and 1 in the cells. Then, sort both columns -- the list of names and the random number -- by the
random numbers. This rearranges the list in random order from the lowest to the highest random number.
Then, all you have to do is take the first hundred names in this sorted list -- pretty simple. You could probably
accomplish the whole thing in under a minute.
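In fact, any language with a random number library can collapse the whole sort-and-select procedure into one step. Here's a sketch in Python; the client names are hypothetical stand-ins for your agency list:

```python
import random

# hypothetical sampling frame: one entry per client over the past 12 months
clients = [f"client_{i:04d}" for i in range(1, 1001)]   # N = 1000

# a simple random sample of n = 100, drawn without replacement;
# random.sample gives every subset of size 100 the same chance of selection
sample = random.sample(clients, 100)

f = len(sample) / len(clients)
print(f)   # 0.1 -- the same 10% sampling fraction as in the example
```

This does exactly what the EXCEL =RAND() trick does, just without the spreadsheet.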

Simple random sampling is simple to accomplish and is easy to explain to others. Because simple random
sampling is a fair way to select a sample, it is reasonable to generalize the results from the sample back to the
population. Simple random sampling is not the most statistically efficient method of sampling and you may,
just because of the luck of the draw, not get good representation of subgroups in a population. To deal with
these issues, we have to turn to other sampling methods.

Stratified Random Sampling

Stratified Random Sampling, also sometimes called proportional or quota random sampling, involves
dividing your population into homogeneous subgroups and then taking a simple random sample in each
subgroup. In more formal terms:

Objective: Divide the population into non-overlapping groups (i.e., strata) N1, N2, N3, ... Ni, such that N1 +
N2 + N3 + ... + Ni = N. Then do a simple random sample of f = n/N in each stratum.

There are several major reasons why you might prefer stratified sampling over simple random sampling. First,
it assures that you will be able to represent not only the overall population, but also key subgroups of the
population, especially small minority groups. If you want to be able to talk about subgroups, this may be the
only way to effectively assure you'll be able to. If the subgroup is extremely small, you can use different
sampling fractions (f) within the different strata to randomly over-sample the small group (although you'll
then have to weight the within-group estimates using the sampling fraction whenever you want overall
population estimates). When we use the same sampling fraction within strata we are conducting proportionate
stratified random sampling. When we use different sampling fractions in the strata, we call this
disproportionate stratified random sampling. Second, stratified random sampling will generally have more
statistical precision than simple random sampling. This will only be true if the strata or groups are
homogeneous. If they are, we expect that the variability within-groups is lower than the variability for the
population as a whole. Stratified sampling capitalizes on that fact.

For example, let's say that the
population of clients for our
agency can be divided into
three groups: Caucasian,
African-American and Hispanic-American.
Furthermore, let's assume that
both the African-Americans
and Hispanic-Americans are
relatively small minorities of
the clientele (10% and 5%
respectively). If we just did a
simple random sample of
n=100 with a sampling
fraction of 10%, we would
expect by chance alone that we would only get 10 and 5 persons from each of our two smaller groups. And,
by chance, we could get fewer than that! If we stratify, we can do better. First, let's determine how many
people we want to have in each group. Let's say we still want to take a sample of 100 from the population of
1000 clients over the past year. But we think that in order to say anything about subgroups we will need at
least 25 cases in each group. So, let's sample 50 Caucasians, 25 African-Americans, and 25 Hispanic-
Americans. We know that 10% of the population, or 100 clients, are African-American. If we randomly sample
25 of these, we have a within-stratum sampling fraction of 25/100 = 25%. Similarly, we know that 5% or 50
clients are Hispanic-American. So our within-stratum sampling fraction will be 25/50 = 50%. Finally, by
subtraction we know that there are 850 Caucasian clients. Our within-stratum sampling fraction for them is
50/850 = about 5.88%. Because the groups are more homogeneous within-group than across the population
as a whole, we can expect greater statistical precision (less variance). And, because we stratified, we know we
will have enough cases from each group to make meaningful subgroup inferences.
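To make the arithmetic concrete, here's a sketch of this disproportionate stratified draw in Python; the client identifiers are invented for illustration:

```python
import random

random.seed(42)  # reproducible draw for this illustration

# hypothetical frame mirroring the example: 850 Caucasian, 100 African-
# American and 50 Hispanic-American clients
frame = {
    "Caucasian": [f"c_{i}" for i in range(850)],
    "African-American": [f"a_{i}" for i in range(100)],
    "Hispanic-American": [f"h_{i}" for i in range(50)],
}
wanted = {"Caucasian": 50, "African-American": 25, "Hispanic-American": 25}

# disproportionate stratified sample: a simple random sample in each stratum
sample = {g: random.sample(units, wanted[g]) for g, units in frame.items()}

# the within-stratum sampling fractions differ, so any overall population
# estimate must weight each stratum by the inverse of its fraction
fractions = {g: wanted[g] / len(frame[g]) for g in frame}
print(fractions)   # roughly 0.0588, 0.25 and 0.5 respectively
```

If all three fractions had been equal, this would instead be a proportionate stratified sample and no weighting would be needed.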

Systematic Random Sampling

Here are the steps you need to follow in order to achieve a systematic random sample:

number the units in the population from 1 to N

decide on the n (sample size) that you want or need
k = N/n = the interval size
randomly select an integer between 1 to k
then take every kth unit

All of this will be much clearer
with an example. Let's assume
that we have a population that
only has N=100 people in it
and that you want to take a
sample of n=20. To use
systematic sampling, the
population must be listed in a
random order. The sampling
fraction would be f = 20/100 =
20%. In this case, the interval
size, k, is equal to N/n =
100/20 = 5. Now, select a
random integer from 1 to 5. In
our example, imagine that you
chose 4. Now, to select the
sample, start with the 4th unit
in the list and take every k-th unit (every 5th, because k=5). You would be sampling units 4, 9, 14, 19, and so
on to 100 and you would wind up with 20 units in your sample.
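The steps above are easy to code. Here's a sketch in Python; the helper name systematic_sample is my own, and for simplicity it assumes N is evenly divisible by n:

```python
import random

def systematic_sample(units, n):
    """Take every k-th unit after a random start, where k = N // n.

    Assumes the list is in random order and N is divisible by n.
    """
    N = len(units)
    k = N // n                      # the interval size
    start = random.randint(1, k)    # random integer between 1 and k
    return units[start - 1::k]      # units start, start+k, start+2k, ...

population = list(range(1, 101))    # N = 100 units, numbered 1 to N
sample = systematic_sample(population, 20)
print(len(sample))   # 20
```

With a random start of 4 this returns exactly the units 4, 9, 14, 19, ... from the worked example.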

For this to work, it is essential that the units in the population are randomly ordered, at least with respect to
the characteristics you are measuring. Why would you ever want to use systematic random sampling? For one
thing, it is fairly easy to do. You only have to select a single random number to start things off. It may also be
more precise than simple random sampling. Finally, in some situations there is simply no easier way to do
random sampling. For instance, I once had to do a study that involved sampling from all the books in a library.
Once selected, I would have to go to the shelf, locate the book, and record when it last circulated. I knew that
I had a fairly good sampling frame in the form of the shelf list (which is a card catalog where the entries are
arranged in the order they occur on the shelf). To do a simple random sample, I could have estimated the total
number of books and generated random numbers to draw the sample; but how would I find book #74,329
easily if that is the number I selected? I couldn't very well count the cards until I came to 74,329! Stratifying
wouldn't solve that problem either. For instance, I could have stratified by card catalog drawer and drawn a
simple random sample within each drawer. But I'd still be stuck counting cards. Instead, I did a systematic
random sample. I estimated the number of books in the entire collection. Let's imagine it was 100,000. I
decided that I wanted to take a sample of 1000 for a sampling fraction of 1000/100,000 = 1%. To get the
sampling interval k, I divided N/n = 100,000/1000 = 100. Then I selected a random integer between 1 and
100. Let's say I got 57. Next I did a little side study to determine how thick a thousand cards are in the card
catalog (taking into account the varying ages of the cards). Let's say that on average I found that two cards
that were separated by 100 cards were about .75 inches apart in the catalog drawer. That information gave me
everything I needed to draw the sample. I counted to the 57th by hand and recorded the book information.
Then, I took a compass. (Remember those from your high-school math class? They're the funny little metal
instruments with a sharp pin on one end and a pencil on the other that you used to draw circles in geometry
class.) Then I set the compass at .75", stuck the pin end in at the 57th card and pointed with the pencil end to
the next card (approximately 100 books away). In this way, I approximated selecting the 157th, 257th, 357th,
and so on. I was able to accomplish the entire selection procedure in very little time using this systematic
random sampling approach. I'd probably still be there counting cards if I'd tried another random sampling
method. (Okay, so I have no life. I got compensated nicely, I don't mind saying, for coming up with this sampling scheme.)

Cluster (Area) Random Sampling

The problem with random sampling methods when we have to sample a population that's dispersed across a
wide geographic region is that you will have to cover a lot of ground geographically in order to get to each of
the units you sampled. Imagine taking a simple random sample of all the residents of New York State in order
to conduct personal interviews. By the luck of the draw you will wind up with respondents who come from
all over the state. Your interviewers are going to have a lot of traveling to do. It is for precisely this problem
that cluster or area random sampling was invented.

In cluster sampling, we follow these steps:

divide population into clusters (usually along geographic boundaries)

randomly sample clusters
measure all units within sampled clusters

For instance, in the figure we
see a map of the counties in
New York State. Let's say that
we have to do a survey of town
governments that will require
us going to the towns
personally. If we do a simple
random sample state-wide
we'll have to cover the entire
state geographically. Instead,
we decide to do a cluster
sampling of five counties
(marked in red in the figure).
Once these are selected, we go
to every town government in
the five areas. Clearly this
strategy will help us to
economize on our mileage.
Cluster or area sampling, then,
is useful in situations like this, and is done primarily for efficiency of administration. Note also that we
probably don't have to worry about using this approach if we are conducting a mail or telephone survey
because it doesn't matter as much (or cost more or raise inefficiency) where we call or send letters to.
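The two-step cluster procedure can be sketched in a few lines of Python; the counties and towns here are hypothetical placeholders, not real New York data:

```python
import random

# the population divided into clusters: towns grouped by county
counties = {
    "County A": ["town_1", "town_2", "town_3"],
    "County B": ["town_4", "town_5"],
    "County C": ["town_6", "town_7", "town_8", "town_9"],
    "County D": ["town_10"],
    "County E": ["town_11", "town_12"],
    "County F": ["town_13", "town_14", "town_15"],
}

# step 1: randomly sample clusters (here, 3 of the 6 counties)
chosen = random.sample(list(counties), 3)

# step 2: measure ALL units within each sampled cluster
sample = [town for county in chosen for town in counties[county]]
print(chosen, len(sample))
```

Note that only the clusters are randomly selected; every unit inside a chosen cluster goes into the sample, which is exactly what saves the mileage.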

Multi-Stage Sampling

The four methods we've covered so far -- simple, stratified, systematic and cluster -- are the simplest random
sampling strategies. In most real applied social research, we would use sampling methods that are considerably
more complex than these simple variations. The most important principle here is that we can combine the
simple methods described earlier in a variety of useful ways that help us address our sampling needs in the
most efficient and effective manner possible. When we combine sampling methods, we call this multi-stage sampling.

For example, consider the idea of sampling New York State residents for face-to-face interviews. Clearly we
would want to do some type of cluster sampling as the first stage of the process. We might sample townships
or census tracts throughout the state. But in cluster sampling we would then go on to measure everyone in the
clusters we select. Even if we are sampling census tracts we may not be able to measure everyone who is in
the census tract. So, we might set up a stratified sampling process within the clusters. In this case, we would
have a two-stage sampling process with stratified samples within cluster samples. Or, consider the problem of
sampling students in grade schools. We might begin with a national sample of school districts stratified by
economics and educational level. Within selected districts, we might do a simple random sample of schools.
Within schools, we might do a simple random sample of classes or grades. And, within classes, we might even
do a simple random sample of students. In this case, we have three or four stages in the sampling process and
we use both stratified and simple random sampling. By combining different sampling methods we are able to
achieve a rich variety of probabilistic sampling methods that can be used in a wide range of social research contexts.
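As a sketch, here's what a hypothetical three-stage design (districts, then schools, then students) might look like in Python; all the names and stage sizes are invented for illustration:

```python
import random

random.seed(7)  # reproducible illustration

# hypothetical frame: 10 districts, each with 4 schools of 30 students
districts = {
    f"district_{d}": {
        f"school_{d}_{s}": [f"student_{d}_{s}_{i}" for i in range(30)]
        for s in range(4)
    }
    for d in range(10)
}

chosen_districts = random.sample(list(districts), 3)     # stage 1: districts
sample = []
for d in chosen_districts:
    for s in random.sample(list(districts[d]), 2):       # stage 2: schools
        sample += random.sample(districts[d][s], 5)      # stage 3: students

print(len(sample))   # 3 districts x 2 schools x 5 students = 30
```

Each stage here is a simple random sample, but any stage could just as well be stratified or systematic, which is the point of multi-stage designs.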

Nonprobability Sampling

The difference between nonprobability and probability sampling is that nonprobability sampling does not
involve random selection and probability sampling does. Does that mean that nonprobability samples aren't
representative of the population? Not necessarily. But it does mean that nonprobability samples cannot depend
upon the rationale of probability theory. At least with a probabilistic sample, we know the odds or probability
that we have represented the population well. We are able to estimate confidence intervals for the statistic.
With nonprobability samples, we may or may not represent the population well, and it will often be hard for
us to know how well we've done so. In general, researchers prefer probabilistic or random sampling methods
over non-probabilistic ones, and consider them to be more accurate and rigorous. However, in applied social
research there may be circumstances where it is not feasible, practical or theoretically sensible to do random
sampling. Here, we consider a wide range of non-probabilistic alternatives.

We can divide nonprobability sampling methods into two broad types: accidental or purposive. Most sampling
methods are purposive in nature because we usually approach the sampling problem with a specific plan in
mind. The most important distinctions among these types of sampling methods are the ones between the
different types of purposive sampling approaches.

Accidental, Haphazard or Convenience Sampling

One of the most common methods of sampling goes under the various titles listed here. I would include in this
category the traditional "man on the street" (of course, now it's probably the "person on the street") interviews
conducted frequently by television news programs to get a quick (although non-representative) reading of
public opinion. I would also argue that the typical use of college students in much psychological research is
primarily a matter of convenience. (You don't really believe that psychologists use college students because
they believe they're representative of the population at large, do you?). In clinical practice, we might use
clients who are available to us as our sample. In many research contexts, we sample simply by asking for
volunteers. Clearly, the problem with all of these types of samples is that we have no evidence that they are
representative of the populations we're interested in generalizing to -- and in many cases we would clearly
suspect that they are not.

Purposive Sampling

In purposive sampling, we sample with a purpose in mind. We usually would have one or more specific
predefined groups we are seeking. For instance, have you ever run into people in a mall or on the street who
are carrying a clipboard and who are stopping various people and asking if they could interview them? Most
likely they are conducting a purposive sample (and most likely they are engaged in market research). They
might be looking for Caucasian females between 30-40 years old. They size up the people passing by and
anyone who looks to be in that category they stop to ask if they will participate. One of the first things they're
likely to do is verify that the respondent does in fact meet the criteria for being in the sample. Purposive
sampling can be very useful for situations where you need to reach a targeted sample quickly and where
sampling for proportionality is not the primary concern. With a purposive sample, you are likely to get the
opinions of your target population, but you are also likely to overweight subgroups in your population that are
more readily accessible.

All of the methods that follow can be considered subcategories of purposive sampling methods. We might
sample for specific groups or types of people as in modal instance, expert, or quota sampling. We might
sample for diversity as in heterogeneity sampling. Or, we might capitalize on informal social networks to
identify specific respondents who are hard to locate otherwise, as in snowball sampling. In all of these methods
we know what we want -- we are sampling with a purpose.

Modal Instance Sampling

In statistics, the mode is the most frequently occurring value in a distribution. In sampling, when we do a
modal instance sample, we are sampling the most frequent case, or the "typical" case. In a lot of informal
public opinion polls, for instance, they interview a "typical" voter. There are a number of problems with this
sampling approach. First, how do we know what the "typical" or "modal" case is? We could say that the modal
voter is a person who is of average age, educational level, and income in the population. But, it's not clear that
using the averages of these is the fairest (consider the skewed distribution of income, for instance). And, how
do you know that those three variables -- age, education, income -- are the only or even the most relevant for
classifying the typical voter? What if religion or ethnicity is an important discriminator? Clearly, modal
instance sampling is only sensible for informal sampling contexts.

Expert Sampling

Expert sampling involves the assembling of a sample of persons with known or demonstrable experience and
expertise in some area. Often, we convene such a sample under the auspices of a "panel of experts." There are
actually two reasons you might do expert sampling. First, because it would be the best way to elicit the views
of persons who have specific expertise. In this case, expert sampling is essentially just a specific sub case of
purposive sampling. But the other reason you might use expert sampling is to provide evidence for the validity
of another sampling approach you've chosen. For instance, let's say you do modal instance sampling and are
concerned that the criteria you used for defining the modal instance are subject to criticism. You might
convene an expert panel consisting of persons with acknowledged experience and insight into that field or
topic and ask them to examine your modal definitions and comment on their appropriateness and validity. The
advantage of doing this is that you aren't out on your own trying to defend your decisions -- you have some
acknowledged experts to back you. The disadvantage is that even the experts can be, and often are, wrong.

Quota Sampling

In quota sampling, you select people non-randomly, according to some fixed quota. There are two types of
quota sampling: proportional and non-proportional. In proportional quota sampling you want to represent
the major characteristics of the population by sampling a proportional amount of each. For instance, if you
know the population has 40% women and 60% men, and that you want a total sample size of 100, you will
continue sampling until you get those percentages and then you will stop. So, if you've already got the 40
women for your sample, but not the sixty men, you will continue to sample men but even if legitimate women
respondents come along, you will not sample them because you have already "met your quota." The problem
here (as in much purposive sampling) is that you have to decide the specific characteristics on which you will
base the quota. Will it be by gender, age, education, race, religion, etc.?
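The stopping rule for proportional quota sampling can be sketched in Python; the respondent stream below is simulated, a stand-in for whoever happens to come along next:

```python
import random

random.seed(1)  # reproducible illustration

# proportional quota: 40% women and 60% men for a total sample of 100
quota = {"F": 40, "M": 60}
counts = {"F": 0, "M": 0}
sample = []

def next_respondent():
    """Hypothetical stand-in for whoever happens to walk by next."""
    return {"id": random.randrange(10**6),
            "gender": random.choice(["F", "M"])}

while sum(counts.values()) < sum(quota.values()):
    person = next_respondent()
    g = person["gender"]
    if counts[g] < quota[g]:     # this category still needs respondents
        sample.append(person)
        counts[g] += 1
    # otherwise we skip them: the quota for that category is already met

print(counts)
```

Note the nonrandom element: once a category's quota is met, even perfectly legitimate respondents from that category are turned away.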

Non-proportional quota sampling is a bit less restrictive. In this method, you specify the minimum number
of sampled units you want in each category. Here, you're not concerned with having numbers that match the
proportions in the population. Instead, you simply want to have enough to assure that you will be able to talk
about even small groups in the population. This method is the nonprobabilistic analogue of stratified random
sampling in that it is typically used to assure that smaller groups are adequately represented in your sample.

Heterogeneity Sampling

We sample for heterogeneity when we want to include all opinions or views, and we aren't concerned about
representing these views proportionately. Another term for this is sampling for diversity. In many
brainstorming or nominal group processes (including concept mapping), we would use some form of
heterogeneity sampling because our primary interest is in getting a broad spectrum of ideas, not identifying the
"average" or "modal instance" ones. In effect, what we would like to be sampling is not people, but ideas. We
imagine that there is a universe of all possible ideas relevant to some topic and that we want to sample this
population, not the population of people who have the ideas. Clearly, in order to get all of the ideas, and
especially the "outlier" or unusual ones, we have to include a broad and diverse range of participants.
Heterogeneity sampling is, in this sense, almost the opposite of modal instance sampling.

Snowball Sampling

In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in your study.
You then ask them to recommend others who they may know who also meet the criteria. Although this method
would hardly lead to representative samples, there are times when it may be the best method available.
Snowball sampling is especially useful when you are trying to reach populations that are inaccessible or hard
to find. For instance, if you are studying the homeless, you are not likely to be able to find good lists of
homeless people within a specific geographical area. However, if you go to that area and identify one or two,
you may find that they know very well who the other homeless people in their vicinity are and how you can
find them.


Measurement is the process of observing and recording the observations that are collected as part of a research
effort. There are two major issues that will be considered here.

First, you have to understand the fundamental ideas involved in measuring. Here we consider two major
measurement concepts. In Levels of Measurement, I explain the meaning of the four major levels of
measurement: nominal, ordinal, interval and ratio. Then we move on to the reliability of measurement,
including consideration of true score theory and a variety of reliability estimators.

Second, you have to understand the different types of measures that you might use in social research. We
consider four broad categories of measurements. Survey research includes the design and implementation of
interviews and questionnaires. Scaling involves consideration of the major methods of developing and
implementing a scale. Qualitative research provides an overview of the broad range of non-numerical
measurement approaches. And unobtrusive measures presents a variety of measurement methods that don't
intrude on or interfere with the context of the research.

Levels of Measurement

The level of measurement refers to the relationship
among the values that are
assigned to the attributes for
a variable. What does that
mean? Begin with the idea of
the variable, in this example
"party affiliation." That
variable has a number of
attributes. Let's assume that
in this particular election
context the only relevant
attributes are "republican",
"democrat", and
"independent". For purposes
of analyzing the results of this variable, we arbitrarily assign the values 1, 2 and 3 to the three attributes. The
level of measurement describes the relationship among these three values. In this case, we simply are using
the numbers as shorter placeholders for the lengthier text terms. We don't assume that higher values mean
"more" of something and lower numbers signify "less". We don't assume the the value of 2 means that
democrats are twice something that republicans are. We don't assume that republicans are in first place or have
the highest priority just because they have the value of 1. In this case, we only use the values as a shorter name
for the attribute. Here, we would describe the level of measurement as "nominal".

Why is Level of Measurement Important?

First, knowing the level of measurement helps you decide how to interpret the data from that variable. When
you know that a measure is nominal (like the one just described), then you know that the numerical values are
just short codes for the longer names. Second, knowing the level of measurement helps you decide what
statistical analysis is appropriate on the values that were assigned. If a measure is nominal, then you know that
you would never average the data values or do a t-test on the data.

There are typically four levels of measurement that are defined:


In nominal measurement the numerical values just "name" the attribute uniquely. No ordering of the cases is
implied. For example, jersey numbers in basketball are measured at the nominal level. A player with number
30 is not more of anything than a player with number 15, and is certainly not twice whatever number 15 is.

In ordinal measurement the attributes can be rank-ordered. Here, distances between attributes do not have
any meaning. For example, on a survey you might code Educational Attainment as 0=less than H.S.; 1=some
H.S.; 2=H.S. degree; 3=some college; 4=college degree; 5=post college. In this measure, higher numbers
mean more education. But is the distance from 0 to 1 the same as from 3 to 4? Of course not. The interval between values
is not interpretable in an ordinal measure.

In interval measurement the distance

between attributes does have meaning. For
example, when we measure temperature (in
Fahrenheit), the distance from 30-40 is the same
as the distance from 70-80. The interval between
values is interpretable. Because of this, it
makes sense to compute an average of an
interval variable, where it doesn't make sense
to do so for ordinal scales. But note that in
interval measurement ratios don't make any
sense - 80 degrees is not twice as hot as 40
degrees (although the attribute value is twice
as large).

Finally, in ratio measurement there is always an absolute zero that is meaningful. This means that you can
construct a meaningful fraction (or ratio) with a ratio variable. Weight is a ratio variable. In applied social
research most "count" variables are ratio, for example, the number of clients in past six months. Why? Because
you can have zero clients and because it is meaningful to say that "...we had twice as many clients in the past
six months as we did in the previous six months."

It's important to recognize that there is a hierarchy implied in the level of measurement idea. At lower levels
of measurement, assumptions tend to be less restrictive and data analyses tend to be less sensitive. At each
level up the hierarchy, the current level includes all of the qualities of the one below it and adds something
new. In general, it is desirable to have a higher level of measurement (e.g., interval or ratio) rather than a lower
one (nominal or ordinal).


Scaling is the branch of measurement that involves the construction of an instrument that associates qualitative
constructs with quantitative metric units. Scaling evolved out of efforts in psychology and education to
measure "unmeasurable" constructs like authoritarianism and self esteem. In many ways, scaling remains one
of the most arcane and misunderstood aspects of social research measurement. And, it attempts to do one of
the most difficult of research tasks -- measure abstract concepts.

Most people don't even understand what scaling is. The basic idea of scaling is described in General Issues in
Scaling, including the important distinction between a scale and a response format. Scales are generally
divided into two broad categories: unidimensional and multidimensional. The unidimensional scaling methods
were developed in the first half of the twentieth century and are generally named after their inventor. We'll
look at three types of unidimensional scaling methods here:

Thurstone or Equal-Appearing Interval Scaling

Likert or "Summative" Scaling
Guttman or "Cumulative" Scaling

In the late 1950s and early 1960s, measurement theorists developed more advanced techniques for creating
multidimensional scales. Although these techniques are not considered here, you may want to look at the
method of concept mapping that relies on that approach to see the power of these multivariate methods.

Likert Scaling

Like Thurstone or Guttman Scaling, Likert Scaling is a unidimensional scaling method. Here, I'll explain the
basic steps in developing a Likert or "Summative" scale.

Defining the Focus. As in all scaling methods, the first step is to define what it is you are trying to measure.
Because this is a unidimensional scaling method, it is assumed that the concept you want to measure is one-
dimensional in nature. You might operationalize the definition as an instruction to the people who are going
to create or generate the initial set of candidate items for your scale.

Generating the Items. Next, you
have to create the set of potential
scale items. These should be items
that can be rated on a 1-to-5 or 1-
to-7 Disagree-Agree response
scale. Sometimes you can create
the items by yourself based on your
intimate understanding of the
subject matter. But, more often than
not, it's helpful to engage a number
of people in the item creation step.
For instance, you might use some
form of brainstorming to create the
items. It's desirable to have as large a set of potential items as possible at this stage; about 80-100 would be a reasonable target.

Rating the Items. The next step is to have a group of judges rate the
items. Usually you would use a 1-to-5 rating scale where:

1 = strongly unfavorable to the concept
2 = somewhat unfavorable to the concept
3 = undecided
4 = somewhat favorable to the concept
5 = strongly favorable to the concept

Notice that, as in other scaling methods, the judges are not telling you
what they believe -- they are judging how favorable each item is with
respect to the construct of interest.

Selecting the Items. The next step is to compute the intercorrelations between all pairs of items, based on the
ratings of the judges. In making judgements about which items to retain for the final scale there are several
analyses you can do:

Throw out any items that have a low correlation with the total (summed) score across all items

In most statistics packages it is relatively easy to compute this type of Item-Total correlation. First,
you create a new variable which is the sum of all of the individual items for each respondent. Then,
you include this variable in the correlation matrix computation (if you include it as the last variable in
the list, the resulting Item-Total correlations will all be in the last line of the correlation matrix and will
be easy to spot). How low should the correlation be for you to throw out the item? There is no fixed
rule here -- you might eliminate all items with a correlation with the total score less than .6, for example.
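The Item-Total selection step described above can be sketched in a few lines of Python. This is an illustrative example, not the procedure of any particular statistics package: the judges' ratings are invented, and the .6 cutoff is simply the example threshold mentioned in the text.

```python
from statistics import mean

def pearson(x, y):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Rows = judges, columns = candidate items (1-to-5 favorability ratings, invented).
ratings = [
    [5, 4, 2, 5],
    [4, 4, 1, 5],
    [5, 5, 2, 4],
    [2, 3, 4, 2],
    [3, 2, 5, 1],
]

totals = [sum(row) for row in ratings]   # total (summed) score per judge
n_items = len(ratings[0])

for i in range(n_items):
    item_scores = [row[i] for row in ratings]
    r = pearson(item_scores, totals)
    keep = "keep" if r >= 0.6 else "drop"   # the .6 cutoff from the text
    print(f"item {i + 1}: item-total r = {r:+.2f} -> {keep}")
```

Note that, as the text says, the total here includes the item itself; some analysts prefer a "corrected" Item-Total correlation that excludes it.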

For each item, get the average rating for the top quarter of judges and the bottom quarter. Then, do a
t-test of the differences between the mean value for the item for the top and bottom quarter judges.

Higher t-values mean that there is a greater difference between the highest and lowest judges. In more
practical terms, items with higher t-values are better discriminators, so you want to keep these items.
In the end, you will have to use your judgement about which items are most sensibly retained. You
want a relatively small number of items on your final scale (e.g., 10-15) and you want them to have
high Item-Total correlations and high discrimination (e.g., high t-values).
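The top-quarter versus bottom-quarter comparison can be sketched as follows. This is a hedged illustration using a Welch-style t statistic computed by hand; the judges' ratings are invented, and the quarter split is padded to at least two judges per group so a variance exists.

```python
from statistics import mean, variance

def discrimination_t(item_scores, totals, frac=0.25):
    """Welch-style t between the item means of the top and bottom fraction of judges."""
    order = sorted(range(len(totals)), key=lambda j: totals[j])
    k = max(2, int(len(totals) * frac))          # at least 2 judges per group
    low = [item_scores[j] for j in order[:k]]
    high = [item_scores[j] for j in order[-k:]]
    se = (variance(high) / len(high) + variance(low) / len(low)) ** 0.5
    return (mean(high) - mean(low)) / se if se else float("inf")

# Invented ratings from 8 judges for two candidate items.
item1 = [1, 2, 1, 2, 4, 5, 4, 5]   # tracks the total closely -> good discriminator
item2 = [3, 2, 3, 4, 2, 4, 3, 3]   # roughly flat across judges -> poor discriminator
totals = [a + b for a, b in zip(item1, item2)]

for name, scores in [("item1", item1), ("item2", item2)]:
    print(name, round(discrimination_t(scores, totals), 2))
```

The item with the larger t separates the high and low judges more sharply, so it is the one you would retain.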

Administering the Scale. You're now ready to use your Likert scale. Each respondent is asked to rate each
item on some response scale. For instance, they could rate each item on a 1-to-5 response scale where:

1 = strongly disagree
2 = disagree
3 = undecided
4 = agree
5 = strongly agree

There are a variety of possible response scales (1-to-7, 1-to-9, 0-to-4). All of these odd-numbered scales have a
middle value that is often labeled Neutral or Undecided. It is also possible to use a forced-choice response scale
with an even number of responses and no middle neutral or undecided choice. In this situation, the respondent
is forced to decide whether they lean more towards the agree or disagree end of the scale for each item.

The final score for the respondent on the scale is the sum of their ratings for all of the items (this is why this
is sometimes called a "summated" scale). On some scales, you will have items that are reversed in meaning
from the overall direction of the scale. These are called reversal items. You will need to reverse the response
value for each of these items before summing for the total. That is, if the respondent gave a 1, you make it a
5; if they gave a 2 you make it a 4; 3 = 3; 4 = 2; and, 5 = 1.
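The reversal-and-sum scoring rule can be written down directly. The item names and responses below are hypothetical; the only assumption is a 1-to-5 response scale, where a reversal item's value maps to 6 minus the rating, exactly the 1-to-5, 2-to-4 flip described above.

```python
def score_likert(responses, reversed_items, points=5):
    """Sum item ratings, flipping reversal items (1<->5, 2<->4) first."""
    total = 0
    for item, value in responses.items():
        total += (points + 1 - value) if item in reversed_items else value
    return total

# Hypothetical respondent; q4 is worded in reverse of the scale's direction.
answers = {"q1": 4, "q2": 2, "q3": 5, "q4": 1}
print(score_likert(answers, reversed_items={"q4"}))   # 4 + 2 + 5 + (6 - 1) = 16
```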

Example: The Employment Self Esteem Scale

Here's an example of a ten-item Likert Scale that attempts to estimate the level of self esteem a person has on
the job. Notice that this instrument has no center or neutral point -- the respondent has to declare whether
he/she is in agreement or disagreement with the item.

INSTRUCTIONS: Please rate how strongly you agree or disagree with each of the following statements by
placing a check mark in the appropriate box.

Each statement is rated on the same four-point response scale:

Strongly Disagree | Somewhat Disagree | Somewhat Agree | Strongly Agree

1. I feel good about my work on the job.
2. On the whole, I get along well with others at work.
3. I am proud of my ability to cope with difficulties at work.
4. When I feel uncomfortable at work, I know how to handle it.
5. I can tell that other people at work are glad to have me there.
6. I know I'll be able to cope with work for as long as I want.
7. I am proud of my relationship with my supervisor at work.
8. I am confident that I can handle my job without constant assistance.
9. I feel like I make a useful contribution at work.
10. I can tell that my coworkers respect me.

Quantitative and Qualitative Data collection methods

The Quantitative data collection methods rely on random sampling and structured data collection
instruments that fit diverse experiences into predetermined response categories. They produce results that are
easy to summarize, compare, and generalize.

Quantitative research is concerned with testing hypotheses derived from theory and/or being able to estimate
the size of a phenomenon of interest. Depending on the research question, participants may be randomly
assigned to different treatments. If this is not feasible, the researcher may collect data on participant and
situational characteristics in order to statistically control for their influence on the dependent, or outcome,
variable. If the intent is to generalize from the research participants to a larger population, the researcher will
employ probability sampling to select participants.

Typical quantitative data gathering strategies include:

Experiments/clinical trials.
Observing and recording well-defined events (e.g., counting the number of patients waiting in
emergency at specified times of the day).
Obtaining relevant data from management information systems.
Administering surveys with closed-ended questions (e.g., face-to-face and telephone interviews,
questionnaires, etc.).


In Quantitative research (survey research), interviews are more structured than in Qualitative research.

In a structured interview, the researcher asks a standard set of questions and nothing more (Leedy and Ormrod, 2001).

Face-to-face interviews have the distinct advantage of enabling the researcher to establish rapport with
potential participants and thereby gain their cooperation. These interviews yield the highest response rates in
survey research. They also allow the researcher to clarify ambiguous answers and, when appropriate, seek
follow-up information. Disadvantages include being impractical when large samples are involved, and being
time consuming and expensive (Leedy and Ormrod, 2001).

Telephone interviews are less time consuming and less expensive, and the researcher has ready access to
anyone on the planet who has a telephone. Disadvantages are that the response rate is not as high as for the
face-to-face interview, but it is considerably higher than for the mailed questionnaire. The sample may be
biased to the extent that people without phones are part of the population about whom the researcher wants
to draw inferences.
Computer Assisted Personal Interviewing (CAPI): is a form of personal interviewing, but instead of
completing a questionnaire, the interviewer brings along a laptop or hand-held computer to enter the
information directly into the database. This method saves time involved in processing the data, as well as
saving the interviewer from carrying around hundreds of questionnaires. However, this type of data collection
method can be expensive to set up and requires that interviewers have computer and typing skills.


Paper-pencil questionnaires can be sent to a large number of people and save the researcher time and
money. People are more truthful while responding to questionnaires, regarding controversial issues in
particular, due to the fact that their responses are anonymous. But they also have drawbacks. The majority of
people who receive questionnaires don't return them, and those who do might not be representative of the
originally selected sample (Leedy and Ormrod, 2001).

Web-based questionnaires: A new and rapidly growing methodology is the use of Internet-based research.
This would mean receiving an e-mail in which you would click on an address that would take you to a secure
web-site to fill in a questionnaire. This type of research is often quicker and less detailed. Some disadvantages
of this method include the exclusion of people who do not have a computer or are unable to access a computer.
Also, the validity of such surveys is in question as people might be in a hurry to complete them and so might
not give accurate responses. Questionnaires often make use of checklists and rating scales. These devices help
simplify and quantify people's behaviors and attitudes. A checklist is a list of behaviors, characteristics, or
other entities that the researcher is looking for. Either the researcher or the survey participant simply checks
whether each item on the list is observed, present, or true, or vice versa. A rating scale is more useful when a
behavior needs to be evaluated on a continuum. They are also known as Likert scales (Leedy and Ormrod, 2001).
Qualitative data collection methods play an important role in impact evaluation by providing information
useful to understand the processes behind observed results and assess changes in people's perceptions of their
well-being. Furthermore, qualitative methods can be used to improve the quality of survey-based quantitative
evaluations by helping generate evaluation hypotheses, strengthening the design of survey questionnaires, and
expanding or clarifying quantitative evaluation findings. These methods are characterized by the following
attributes:
they tend to be open-ended and have less structured protocols (i.e., researchers may change the data
collection strategy by adding, refining, or dropping techniques or informants)
they rely more heavily on interactive interviews; respondents may be interviewed several times to
follow up on a particular issue, clarify concepts or check the reliability of data
they use triangulation to increase the credibility of their findings (i.e., researchers rely on multiple data
collection methods to check the authenticity of their results)
generally their findings are not generalizable to any specific population, rather each case study
produces a single piece of evidence that can be used to seek general patterns among different studies
of the same issue

Regardless of the kinds of data involved, data collection in a qualitative study takes a great deal of time. The
researcher needs to record any potentially useful data thoroughly, accurately, and systematically, using field
notes, sketches, audiotapes, photographs and other suitable means. The data collection methods must observe
the ethical principles of research.

The qualitative methods most commonly used in evaluation can be classified in three broad categories:

in-depth interviews
observation methods
document review

Observational method

Observation is a way of gathering data by watching behavior or events, or noting physical characteristics, in
their natural setting. Observations can be overt (everyone knows they are being observed) or covert (no one
knows they are being observed and the observer is concealed). The benefit of covert observation is that people
are more likely to behave naturally if they do not know they are being observed. However, you will typically
need to conduct overt observations because of the ethical problems related to concealing your observation.
Observations can also be either direct or indirect. Direct observation is when you watch interactions,
processes, or behaviors as they occur; for example, observing a teacher teaching a lesson from a written
curriculum to determine whether they are delivering it with fidelity. Indirect observations are when you watch
the results of interactions, processes, or behaviors; for example, measuring the amount of plate waste left by
students in a school cafeteria to determine whether a new food is acceptable to them.

When should you use observation for evaluation?

When you are trying to understand an ongoing process or situation: through observation you can monitor or
watch a process or situation that you are evaluating as it occurs. When you are gathering data on individual
behaviors or interactions between people: observation allows you to watch people's behaviors and
interactions directly, or watch for the results of behaviors or interactions. When you need to know about a
physical setting: seeing the place or environment where something takes place can help increase your
understanding of the event, activity, or situation you are evaluating. For example, you can observe whether a
classroom or training facility is conducive to learning. When data collection from individuals is not a realistic
option: if respondents are unwilling or unable to provide data through questionnaires or interviews,
observation is a method that requires little from the individuals for whom you need data.

How do you plan for observations? Determine the focus. Think about the evaluation question(s) you want to
answer through observation and select a few areas of focus for your data collection. For example, you may
want to know how well a new curriculum is being implemented in the classroom. Your focus areas might be
interactions between students and teachers, and teachers' knowledge, skills, and behaviors. Design a system
for data collection. Once you have focused your evaluation, think about the specific items for which you want
to collect data and then determine how you will collect the information you need. There are three primary
ways of collecting observation data, and they can be combined to meet your data collection needs. Recording
sheets and checklists are the most standardized way of collecting observation data and include both preset
questions and responses; these forms are typically used for collecting data that can be easily described in
advance. Observation guides list the interactions, processes, or behaviors to be observed, with space to record
open-ended narrative data. Field notes are the least standardized way of collecting observation data and do not
include preset questions or responses; they are open-ended narrative data that can be written or dictated onto a
tape recorder. Select the sites. Select an adequate number of sites to help ensure they are representative of the
larger population and will provide an understanding of the situation you are observing. Select the observers.
You may choose to be the only observer, or you may want to include others in conducting observations.
Stakeholders, other professional staff members, interns and graduate students, and volunteers are potential
observers. Train the observers. It is critical that the observers are well trained in your data collection process
to ensure high-quality and consistent data. The level of training will vary based on the complexity of the data
collection and the individual capabilities of the observers. Time your observations appropriately. Programs
and processes typically follow a sequence of events. It is critical that you schedule your observations so you
are observing the components of the activity that will answer your evaluation questions. This requires advance
planning.

What are the advantages of observation? It collects data where and when an event or activity is occurring, it
does not rely on people's willingness or ability to provide information, and it allows you to directly see what
people do rather than relying on what people say they did.

What are the disadvantages of observation? It is susceptible to observer bias. It is susceptible to the Hawthorne
effect, that is, people usually perform better when they know they are being observed, although indirect
observation may decrease this problem. It can be expensive and time-consuming compared to other data
collection methods. It does not increase your understanding of why people behave as they do.


Experimental research designs fall into three broad categories:

1. True Designs
2. Quasi Designs
3. Ex Post Facto Designs

True Designs - Five Basic Steps to Experimental Research Design

1. Survey the literature for current research related to your study.

2. Define the problem, formulate a hypothesis, define basic terms and variables, and operationalize those variables.
3. Develop a research plan:
a. Identify confounding/mediating variables that may contaminate the experiment, and develop methods
to control or minimize them.
b. Select a research design
c. Randomly select subjects and randomly assign them to groups.
d. Validate all instruments used.
e. Develop data collection procedures, conduct a pilot study, and refine the instrument.
f. State the null and alternative hypotheses and set the statistical significance level of the study.
4. Conduct the research experiment(s).
5. Analyze all data, conduct appropriate statistical tests and report results.

Quasi Designs

The primary difference between true designs and quasi designs is that quasi designs do not use random
assignment into treatment or control groups since this design is used in existing naturally occurring settings.

Groups are given pretests, then one group is given a treatment and then both groups are given a post-test. This
creates a continuous question of internal and external validity, since the subjects are self-selected. The steps
used in a quasi-design are the same as true designs.

Ex Post Facto Designs

An ex post facto design will determine which variables discriminate between subject groups.

Steps in an Ex Post Facto Design

1. Formulate the research problem, including identification of factors that may influence the dependent variable(s).
2. Identify alternate hypotheses that may explain the relationships.
3. Identify and select subject groups.
4. Collect and analyze data

Ex post facto studies cannot prove causation, but may provide insight into the understanding of a phenomenon.

Delphi Method

The Delphi method was developed to structure discussions and summarize opinions from a selected group in
order to: avoid meetings, collect information/expertise from individuals spread out over a large geographic
area, and save time through the elimination of direct contact.

Although the data may prove to be valuable, the collection process is very time consuming. When time is
available and respondents are willing to be queried over a period of time, the technique can be very powerful
in identifying trends and predicting future events.

The technique requires a series of questionnaires and feedback reports to a group of individuals. Each series
is analyzed and the instrument/statements are revised to reflect the responses of the group. A new
questionnaire is prepared that includes the new material, and the process is repeated until a consensus is
reached.
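The iterative structure of a Delphi study can be sketched in code. This is an illustrative simulation, not a prescribed procedure: the panel's round-one ratings, the interquartile-range stopping rule, and the assumption that each expert revises halfway toward the group median after seeing the feedback report are all invented for the demo.

```python
from statistics import median

def iqr(xs):
    """Rough interquartile range using simple quartile positions."""
    s = sorted(xs)
    n = len(s)
    return s[(3 * n) // 4] - s[n // 4]

ratings = [2.0, 9.0, 5.0, 7.0, 3.0]    # round-1 forecasts from five experts (invented)
rounds = 1
while iqr(ratings) > 1.0 and rounds < 10:
    centre = median(ratings)
    # feedback step: each expert revises halfway toward the group median
    ratings = [x + 0.5 * (centre - x) for x in ratings]
    rounds += 1

print(rounds, round(median(ratings), 2))
```

Real panels, of course, revise their views unpredictably; the point of the sketch is only the questionnaire-feedback-revise loop that runs until the spread of opinion is acceptably small.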

Types of Surveys

Surveys can be divided into two broad categories: the questionnaire and the interview. Questionnaires are
usually paper-and-pencil instruments that the respondent completes. Interviews are completed by the
interviewer based on what the respondent says. Sometimes, it's hard to tell the difference between a questionnaire
and an interview. For instance, some people think that questionnaires always ask short closed-ended
questions while interviews always ask broad open-ended ones. But you will see questionnaires with open-
ended questions (although they do tend to be shorter than in interviews) and there will often be a series of
closed-ended questions asked in an interview.

Survey research has changed dramatically in the last ten years. We have automated telephone surveys that
use random dialing methods. There are computerized kiosks in public places that allow people to provide
input. A whole new variation of group interview has evolved as focus group methodology. Increasingly,
survey research is tightly integrated with the delivery of service. Your hotel room has a survey on the desk.
Your waiter presents a short customer satisfaction survey with your check. You get a call for an interview
several days after your last call to a computer company for technical assistance. You're asked to complete a
short survey when you visit a web site. Here, I'll describe the major types of questionnaires and interviews,
keeping in mind that technology is leading to rapid evolution of methods. We'll discuss the relative
advantages and disadvantages of these different survey types in Advantages and Disadvantages of Survey
Methods.


When most people think of questionnaires, they think of the mail survey. All of us
have, at one time or another, received a questionnaire in the mail. There are many
advantages to mail surveys. They are relatively inexpensive to administer. You can
send the exact same instrument to a wide number of people. They allow the
respondent to fill it out at their own convenience. But there are some disadvantages as well. Response rates
from mail surveys are often very low. And, mail questionnaires are not the best vehicles for asking for
detailed written responses.

A second type is the group administered questionnaire. A sample of
respondents is brought together and asked to respond to a structured sequence of
questions. Traditionally, questionnaires were administered in group settings for
convenience. The researcher could give the questionnaire to those who were
present and be fairly sure that there would be a high response rate. If the
respondents were unclear about the meaning of a question they could ask for
clarification. And, there were often organizational settings where it was relatively easy to assemble the
group (in a company or business, for instance).

What's the difference between a group administered questionnaire and a group interview or focus group? In
the group administered questionnaire, each respondent is handed an instrument and asked to complete it
while in the room. Each respondent completes an instrument. In the group interview or focus group, the
interviewer facilitates the session. People work as a group, listening to each other's comments and answering
the questions. Someone takes notes for the entire group -- people don't complete an interview individually.

A less familiar type of questionnaire is the household drop-off survey. In this
approach, a researcher goes to the respondent's home or business and hands the
respondent the instrument. In some cases, the respondent is asked to mail it back or
the interviewer returns to pick it up. This approach attempts to blend the advantages
of the mail survey and the group administered questionnaire. Like the mail survey,
the respondent can work on the instrument in private, when it's convenient. Like
the group administered questionnaire, the interviewer makes personal contact with the respondent -- they
don't just send an impersonal survey instrument. And, the respondent can ask questions about the study and
get clarification on what is to be done. Generally, this would be expected to increase the percent of people
who are willing to respond.


Interviews are a far more personal form of research than questionnaires. In the
personal interview, the interviewer works directly with the respondent. Unlike with
mail surveys, the interviewer has the opportunity to probe or ask follow-up questions.
And, interviews are generally easier for the respondent, especially if what is sought is
opinions or impressions. Interviews can be very time consuming and they are resource
intensive. The interviewer is considered a part of the measurement instrument and
interviewers have to be well trained in how to respond to any contingency.

Almost everyone is familiar with the telephone interview. Telephone interviews
enable a researcher to gather information rapidly. Most of the major public opinion
polls that are reported were based on telephone interviews. Like personal
interviews, they allow for some personal contact between the interviewer and the
respondent. And, they allow the interviewer to ask follow-up questions. But they
also have some major disadvantages. Many people don't have publicly-listed
telephone numbers. Some don't have telephones. People often don't like the intrusion of a call to their homes.
And, telephone interviews have to be relatively short or people will feel imposed upon.

Constructing the Survey

Constructing a survey instrument is an art in itself. There are numerous small decisions that must be made --
about content, wording, format, placement -- that can have important consequences for your entire study.
While there's no one perfect way to accomplish this job, we do have lots of advice to offer that might
increase your chances of developing a better final product.

First of all you'll learn about the two major types of surveys that exist, the questionnaire and the interview
and the different varieties of each. Then you'll see how to write questions for surveys. There are three areas
involved in writing a question:

determining the question content, scope and purpose

choosing the response format that you use for collecting information from the respondent
figuring out how to word the question to get at the issue of interest

Finally, once you have your questions written, there is the issue of how best to place them in your survey.

You'll see that although there are many aspects of survey construction that are just common sense, if you are
not careful you can make critical errors that have dramatic effects on your results.

Types of Data

We'll talk about data in lots of places in The Knowledge Base, but here I just want to make a fundamental
distinction between two types of data: qualitative and quantitative. The way we typically define them, we
call data 'quantitative' if it is in numerical form and 'qualitative' if it is not. Notice that qualitative data could
be much more than just words or text. Photographs, videos, sound recordings and so on, can be considered
qualitative data.

Personally, while I find the distinction between qualitative and quantitative data to have some utility, I think
most people draw too hard a distinction, and that can lead to all sorts of confusion. In some areas of social
research, the qualitative-quantitative distinction has led to protracted arguments with the proponents of each
arguing the superiority of their kind of data over the other. The quantitative types argue that their data is
'hard', 'rigorous', 'credible', and 'scientific'. The qualitative proponents counter that their data is 'sensitive',
'nuanced', 'detailed', and 'contextual'.

For many of us in social research, this kind of polarized debate has become less than productive. And, it
obscures the fact that qualitative and quantitative data are intimately related to each other. All quantitative
data is based upon qualitative judgments; and all qualitative data can be described and manipulated
numerically. For instance, think about a very common quantitative measure in social research -- a self
esteem scale. The researchers who develop such instruments had to make countless judgments in
constructing them: how to define self esteem; how to distinguish it from other related concepts; how to word
potential scale items; how to make sure the items would be understandable to the intended respondents; what
kinds of contexts it could be used in; what kinds of cultural and language constraints might be present; and
on and on. The researcher who decides to use such a scale in their study has to make another set of
judgments: how well does the scale measure the intended concept; how reliable or consistent is it; how
appropriate is it for the research context and intended respondents; and on and on. Believe it or not, even the
respondents make many judgments when filling out such a scale: what is meant by various terms and
phrases; why is the researcher giving this scale to them; how much energy and effort do they want to expend
to complete it, and so on. Even the consumers and readers of the research will make lots of judgments about
the self esteem measure and its appropriateness in that research context. What may look like a simple,
straightforward, cut-and-dried quantitative measure is actually based on lots of qualitative judgments made
by lots of different people.

On the other hand, all qualitative information can be easily converted into quantitative, and there are many
times when doing so would add considerable value to your research. The simplest way to do this is to divide
the qualitative information into units and number them! I know that sounds trivial, but even that simple
nominal enumeration can enable you to organize and process qualitative information more efficiently.
Perhaps more to the point, we might take text information (say, excerpts from transcripts) and pile these
excerpts into piles of similar statements. When we do something even as easy as this simple grouping or
piling task, we can describe the results quantitatively. For instance, if we had ten statements and we grouped
these into five piles (as shown in the figure), we
could describe the piles using a 10 x 10 table of 0's
and 1's. If two statements were placed together in
the same pile, we would put a 1 in their row-column
juncture. If two statements were placed in different
piles, we would use a 0. The resulting matrix or
table describes the grouping of the ten statements in
terms of their similarity. Even though the data in
this example consists of qualitative statements (one
per card), the result of our simple qualitative
procedure (grouping similar excerpts into the same
piles) is quantitative in nature. "So what?" you ask.
Once we have the data in numerical form, we can manipulate it numerically. For instance, we could have
five different judges sort the 10 excerpts and obtain a 0-1 matrix like this for each judge. Then we could
average the five matrices into a single one that shows the proportions of judges who grouped each pair
together. This proportion could be considered an estimate of the similarity (across independent judges) of
the excerpts. While this might not seem too exciting or useful, it is exactly this kind of procedure that I use
as an integral part of the process of developing 'concept maps' of ideas for groups of people (something that
is useful!).
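The pile-sort procedure described above translates naturally into code. Here is a small sketch with invented piles from three hypothetical judges (four statements instead of ten, for brevity); averaging their 0/1 co-occurrence matrices yields exactly the proportion-of-judges similarity the text describes.

```python
def cooccurrence(piles, n):
    """n x n matrix with 1 where two statements were placed in the same pile."""
    m = [[0] * n for _ in range(n)]
    for pile in piles:
        for a in pile:
            for b in pile:
                m[a][b] = 1
    return m

n = 4  # statements 0..3 (a ten-statement sort works the same way)
judges = [
    [[0, 1], [2, 3]],       # judge 1's piles
    [[0, 1, 2], [3]],       # judge 2's piles
    [[0], [1, 2], [3]],     # judge 3's piles
]

mats = [cooccurrence(p, n) for p in judges]
# Average the binary matrices: each entry is the proportion of judges
# who grouped that pair of statements together.
similarity = [[sum(m[i][j] for m in mats) / len(mats) for j in range(n)]
              for i in range(n)]
print(similarity[0][1])   # proportion of judges who put statements 0 and 1 together
```

The resulting proportions are the similarity estimates that feed into procedures such as concept mapping.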

Unit of Analysis

One of the most important ideas in a research project is the unit of analysis. The unit of analysis is the major
entity that you are analyzing in your study. For instance, any of the following could be a unit of analysis in a
study:
artifacts (books, photos, newspapers)
geographical units (town, census tract, state)
social interactions (dyadic relations, divorces, arrests)

Why is it called the 'unit of analysis' and not something else (like, the unit of sampling)? Because it is the
analysis you do in your study that determines what the unit is. For instance, if you are comparing the
children in two classrooms on achievement test scores, the unit is the individual child because you have a
score for each child. On the other hand, if you are comparing the two classes on classroom climate, your unit
of analysis is the group, in this case the classroom, because you only have a classroom climate score for the
class as a whole and not for each individual student. For different analyses in the same study you may have
different units of analysis. If you decide to base an analysis on student scores, the individual is the unit. But
you might decide to compare average classroom performance. In this case, since the data that goes into the
analysis is the average itself (and not the individuals' scores) the unit of analysis is actually the group. Even
though you had data at the student level, you use aggregates in the analysis. In many areas of social research
these hierarchies of analysis units have become particularly important and have spawned a whole area of
statistical analysis sometimes referred to as hierarchical modeling. This is true in education, for instance,
where we often compare classroom performance but collect achievement data at the individual student
level.
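The classroom example can be made concrete with a short sketch (hypothetical scores). The same raw data yield ten data points when the individual child is the unit of analysis, but only two when the classroom is:

```python
from statistics import mean

# Hypothetical achievement test scores, keyed by classroom
scores = {
    "class_A": [72, 85, 90, 68, 77],
    "class_B": [81, 79, 93, 70, 88],
}

# Unit of analysis = the individual child: one score per child
individual_scores = [s for cls in scores.values() for s in cls]
print(len(individual_scores))  # 10 data points

# Unit of analysis = the group: one aggregate (mean) score per classroom
class_means = {cls: mean(s) for cls, s in scores.items()}
print(class_means)  # 2 data points, one per classroom
```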

Questionnaires

Questionnaires are a popular means of collecting data, but are difficult to design and often require many
rewrites before an acceptable questionnaire is produced.


Advantages

Can be used as a method in its own right or as a basis for interviewing or a telephone survey.

Can be posted, e-mailed or faxed.

Can cover a large number of people or organizations.

Wide geographic coverage.

Relatively cheap.

No prior arrangements are needed.

Avoids embarrassment on the part of the respondent.

Respondent can consider responses.

Possible anonymity of respondent.

No interviewer bias.


Disadvantages

Design problems.

Questions have to be relatively simple.

Historically low response rate (although inducements may help).

Time delay whilst waiting for responses to be returned.

Require a return deadline.

Several reminders may be required.

Assumes no literacy problems.

No control over who completes it.

Not possible to give assistance if required.

Problems with incomplete questionnaires.

Replies not spontaneous and independent of each other.

Respondent can read all questions beforehand and then decide whether to complete or not. For
example, perhaps because it is too long, too complex, uninteresting, or too personal.

Design of postal questionnaires

Theme and covering letter

The general theme of the questionnaire should be made explicit in a covering letter. You should state who you
are; why the data is required; give, if necessary, an assurance of confidentiality and/or anonymity; and give a
contact address or telephone number. This ensures that the respondents know what they are committing
themselves to, and also that they understand the context of their replies. If possible, you should offer an
estimate of the completion time. Instructions for return should be included with the return date made obvious.
For example: It would be appreciated if you could return the completed questionnaire by... if at all possible.

Instructions for completion

You need to provide clear and unambiguous instructions for completion. Within most questionnaires these are
general instructions and specific instructions for particular question structures. It is usually best to separate
these, supplying the general instructions as a preamble to the questionnaire, but leaving the specific
instructions until the questions to which they apply. The response method should be indicated (circle, tick,
cross, etc.). Wherever possible, and certainly if a slightly unfamiliar response system is employed, you should
give an example.


Appearance

Appearance is usually the first feature of the questionnaire to which the recipient reacts. A neat and
professional look will encourage further consideration of your request, increasing your response rate. In
addition, careful thought to layout should help your analysis. There are a number of simple rules to help
improve questionnaire appearance:

Liberal spacing makes the reading easier.

Photo-reduction can produce more space without reducing content.

Consistent positioning of response boxes, usually to the right, speeds up completion and also avoids
inadvertent omission of responses.

Choose the font style to maximize legibility.

Differentiate between instructions and questions. Both lower case and capitals can be used, or
responses can be boxed.


Length

There may be a strong temptation to include any vaguely interesting questions, but you should resist this at all
costs. Excessive size can only reduce response rates. If a long questionnaire is necessary, then you must give
even more thought to appearance. It is best to leave pages unnumbered; for respondents to flick to the end and
see page 27 can be very disconcerting!


Order of questions

Probably the most crucial stage in questionnaire response is the beginning. Once the respondents have started
to complete the questions they will normally finish the task, unless it is very long or difficult. Consequently,
you need to select the opening questions with care. Usually the best approach is to ask for biographical details
first, as the respondents should know all the answers without much thought. Another benefit is that an easy
start provides practice in answering questions.

Once the introduction has been achieved the subsequent order will depend on many considerations. You
should be aware of the varying importance of different questions. Essential information should appear early,
just in case the questionnaire is not completed. For the same reasons, relatively unimportant questions can be
placed towards the end. If questions are likely to provoke the respondent and remain unanswered, these too
are best left until the end, in the hope of obtaining answers to everything else.

Coding

If analysis of the results is to be carried out using a statistical package or spreadsheet it is advisable to code
non-numerical responses when designing the questionnaire, rather than trying to code the responses when they
are returned. An example of coding is:

Male [ ] Female [ ]
1 2

The coded responses (1 or 2) are then used for the analysis.
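Pre-coding can be mirrored directly in the analysis script. A minimal sketch (the codebook and responses are hypothetical):

```python
# Codebook fixed at questionnaire-design time, matching the printed form above
CODES = {"Male": 1, "Female": 2}

responses = ["Female", "Male", "Female", "Female"]  # returned questionnaires
coded = [CODES[r] for r in responses]
print(coded)  # [2, 1, 2, 2]
```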

Thank you

Respondents to questionnaires rarely benefit personally from their efforts and the least the researcher can do
is to thank them. Even though the covering letter will express appreciation for the help given, it is also a nice
gesture to finish the questionnaire with a further thank you.


Wording of questions

Keep the questions short, simple and to the point; avoid all unnecessary words.

Use words and phrases that are unambiguous and familiar to the respondent. For example, 'dinner' has
a number of different interpretations; use an alternative expression such as 'evening meal'.

Only ask questions that the respondent can answer. Hypothetical questions should be avoided. Avoid
calculations and questions that require a lot of memory work, for example, 'How many people stayed
in your hotel last year?'

Avoid loaded or leading questions that imply a certain answer. For example, by mentioning one
particular item in the question: 'Do you agree that Colgate toothpaste is the best toothpaste?'

Vacuous words or phrases should be avoided. 'Generally', 'usually', or 'normally' are imprecise terms
with various meanings. They should be replaced with quantitative statements, for example, 'at least
once a week'.

Questions should only address a single issue. For example, a question like 'Do you take annual
holidays to Spain?' should be broken down into two discrete stages: first find out if the respondent
takes an annual holiday, and then find out if they go to Spain.

Do not ask two questions in one by using 'and'. For example, 'Did you watch television last night and
read a newspaper?'

Avoid double negatives. For example, 'Is it not true that you did not read a newspaper yesterday?'
Respondents may tackle a double negative by switching both negatives and then assuming that the
same answer applies. This is not necessarily valid.

State units required but do not aim for too high a degree of accuracy. For instance, use an interval
rather than an exact figure:

How much did you earn last year?

Less than 10,000 [ ]

10,000 but less than 20,000 [ ]

Avoid emotive or embarrassing words usually connected with race, religion, politics, sex, money.

Types of questions

Closed questions

A question is asked and then a number of possible answers are provided for the respondent. The respondent
selects the answer which is appropriate. Closed questions are particularly useful in obtaining factual
information:

Sex: Male [ ] Female [ ]

Did you watch television last night? Yes [ ] No [ ]

Some Yes/No questions have a third category, 'Do not know'. Experience shows that as long as this
alternative is not mentioned people will make a choice. Also the phrase 'Do not know' is ambiguous:

Do you agree with the introduction of the EMU?

Yes [ ] No [ ] Do not know [ ]

What was your main way of travelling to the hotel? Tick one box only.

Car [ ]
Coach [ ]
Motor bike [ ]
Train [ ]

Other means, please specify

With such lists you should always include an other category, because not all possible responses might have
been included in the list of answers.

Sometimes the respondent can select more than one from the list. However, this makes analysis difficult:

Why have you visited the historic house? Tick the relevant answer(s). You may tick as many as you like.

I enjoy visiting historic houses [ ]

The weather was bad and I could not enjoy outdoor activities [ ]

I have visited the house before and wished to return [ ]

Other reason, please specify

Attitude questions

Frequently questions are asked to find out the respondent's opinions or attitudes to a given situation. A Likert
scale provides a battery of attitude statements. The respondent then says how much they agree or disagree
with each one:

Read the following statements and then indicate by a tick whether you strongly agree, agree, disagree or
strongly disagree with the statement.

My visit has been good value for money:

Strongly agree [ ]    Agree [ ]    Disagree [ ]    Strongly disagree [ ]

There are many variations on this type of question. One variation is to have a middle statement, for example,
'Neither agree nor disagree'. However, many respondents take this as the easy option. Only having four
statements, as above, forces the respondent into making a positive or negative choice. Another variation is to
rank the various attitude statements; however, this can cause analysis problems:

Which of these characteristics do you like about your job? Indicate the best three in order, with the best being
number 1.

Varied work [ ]
Good salary [ ]
Opportunities for promotion [ ]
Good working conditions [ ]
High amount of responsibility [ ]
Friendly colleagues [ ]

A semantic differential scale attempts to see how strongly an attitude is held by the respondent. With these
scales double-ended terms are given to the respondents who are asked to indicate where their attitude lies on
the scale between the terms. The response can be indicated by putting a cross in a particular position or circling
a number:

Work is: (circle the appropriate number)

Difficult 1 2 3 4 5 6 7 Easy
Useless 1 2 3 4 5 6 7 Useful
Interesting 1 2 3 4 5 6 7 Boring

For summary and analysis purposes, a score of 1 to 7 may be allocated to the seven points of the scale, thus
quantifying the various degrees of opinion expressed. This procedure has some disadvantages. It is implicitly
assumed that two people with the same strength of feeling will mark the same point on the scale. This almost
certainly will not be the case. When faced with a semantic differential scale, some people will never, as a
matter of principle, use the two end indicators of 1 and 7. Effectively, therefore, they are using a five-point
scale. Also scoring the scale 1 to 7 assumes that they represent equidistant points on the continuous spectrum
of opinion. This again is probably not true. Nevertheless, within its limitations, the semantic differential can
provide a useful way of measuring and summarizing subjective opinions.
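Scoring such responses might look like the sketch below (hypothetical answers for one respondent on the three scales above; the key names are illustrative). Note that the Interesting/Boring pair runs in the opposite direction to the other two, with the favourable word on the left, so it is reverse-scored before summarizing:

```python
from statistics import mean

# One respondent's circled numbers (1 = left-hand word, 7 = right-hand word)
responses = {"difficult_easy": 5, "useless_useful": 6, "interesting_boring": 2}

# Interesting/Boring has its favourable word on the left, so reverse-score it
reversed_items = {"interesting_boring"}
scored = {k: 8 - v if k in reversed_items else v for k, v in responses.items()}

print(scored)                 # interesting_boring becomes 8 - 2 = 6
print(mean(scored.values()))  # overall favourability on the 1-7 scale
```

Averaging across items, of course, inherits all the caveats noted above about treating the scale points as equidistant.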

Other types of questions to determine people's opinions or attitudes are:

Which one/two words best describes...?

Which of the following statements best describes...?

How much do you agree with the following statement...?

Open questions

An open question such as 'What are the essential skills a manager should possess?' should be used as an
adjunct to the main theme of the questionnaire and could allow the respondent to elaborate upon an earlier
more specific question. Open questions inserted at the end of major sections, or at the end of the questionnaire,
can act as safety valves, and possibly offer additional information. However, they should not be used to
introduce a section since there is a high risk of influencing later responses. The main problem of open
questions is that many different answers have to be summarized and possibly coded.

Testing: the pilot survey

Questionnaire design is fraught with difficulties and problems. A number of rewrites will be necessary,
together with refinement and rethinks on a regular basis. Do not assume that you will write the questionnaire
accurately and perfectly at the first attempt. If the questionnaire is poorly designed, you will collect
inappropriate or inaccurate data, and good analysis cannot then rectify the situation.

To refine the questionnaire, you need to conduct a pilot survey. This is a small-scale trial prior to the main
survey that tests all your question planning. Amendments to questions can be made. After making some
amendments, the new version would be re-tested. If this re-test produces more changes, another pilot would
be undertaken and so on. For example, perhaps responses to open-ended questions become closed; questions
which are all answered the same way can be omitted; difficult words replaced, etc.

It is usual to pilot the questionnaires personally so that the respondent can be observed and questioned if
necessary. By timing each question, you can identify any questions that appear too difficult, and you can also
obtain a reliable estimate of the anticipated completion time for inclusion in the covering letter. The result can
also be used to test the coding and analytical procedures to be performed later.

Distribution and return

The questionnaire should be checked for completeness to ensure that all pages are present and that none is
blank or illegible. It is usual to supply a prepaid addressed envelope for the return of the questionnaire. You
need to explain this in the covering letter and reinforce it at the end of the questionnaire, after the Thank you.

Finally, many organizations are approached continually for information. Many, as a matter of course, will not
respond in a positive way.

Steps of the research process:

Scientific research involves a systematic process that focuses on being objective and gathering a multitude
of information for analysis so that the researcher can come to a conclusion. This process is used in all
research and evaluation projects, regardless of the research method (scientific method of inquiry,
evaluation research, or action research). The process focuses on testing hunches or ideas in a park and
recreation setting through a systematic process. In this process, the study is documented in such a way that
another individual can conduct the same study again. This is referred to as replicating the study. Any
research done without documenting the study so that others can review the process and results is not an
investigation using the scientific research process. The scientific research process is a multiple-step process
in which the steps are interlinked. If changes are made in one step of the
process, the researcher must review all the other steps to ensure that the changes are reflected throughout
the process. Parks and recreation professionals are often involved in conducting research or evaluation
projects within the agency. These professionals need to understand the eight steps of the research process
as they apply to conducting a study. Table 2.4 lists the steps of the research process and provides an
example of each step for a sample research study.
Step 1: Identify the Problem
The first step in the process is to identify a problem or develop a research question. The research problem
may be something the agency identifies as a problem, some knowledge or information that is needed by
the agency, or the desire to identify a recreation trend nationally. In the example in table 2.4, the problem
that the agency has identified is childhood obesity, which is a local problem and concern within the
community. This serves as the focus of the study.

Step 2: Review the Literature
Now that the problem has been identified, the researcher must learn more about the topic under
investigation. To do this, the researcher must review the literature related to the research problem. This
step provides foundational knowledge about the problem area. The review of literature also educates the
researcher about what studies have been conducted in the past, how these studies were conducted, and the
conclusions in the problem area. In the obesity study, the review of literature enables the programmer to
discover horrifying statistics related to the long-term effects of childhood obesity in terms of health issues,
death rates, and projected medical costs. In addition, the programmer finds several articles and information
from the Centers for Disease Control and Prevention that describe the benefits of walking 10,000 steps a
day. The information discovered during this step helps the programmer fully understand the magnitude of
the problem, recognize the future consequences of obesity, and identify a strategy to combat obesity (i.e.,
walking 10,000 steps a day).
Step 3: Clarify the Problem
Many times the initial problem identified in the first step of the process is too large or broad in scope. In
step 3 of the process, the researcher clarifies the problem and narrows the scope of the study. This can only
be done after the literature has been reviewed. The knowledge gained through the review of literature
guides the researcher in clarifying and narrowing the research project. In the example, the programmer has
identified childhood obesity as the problem and the purpose of the study. This topic is very broad and could
be studied based on genetics, family environment, diet, exercise, self-confidence, leisure activities, or
health issues. All of these areas cannot be investigated in a single study; therefore, the problem and purpose
of the study must be more clearly defined. The programmer has decided that the purpose of the study is to
determine if walking 10,000 steps a day for three days a week will improve the individual's health. This
purpose is more narrowly focused and researchable than the original problem.

Step 4: Clearly Define Terms and Concepts
Terms and concepts are words or phrases used in the purpose statement of the study or the description of
the study. These items need to be specifically defined as they apply to the study. Terms or concepts often
have different definitions depending on who is reading the study. To minimize confusion about what the
terms and phrases mean, the researcher must specifically define them for the study. In the obesity study,
the concept of the individual's health can be defined in hundreds of ways, such as physical, mental,
emotional, or spiritual health. For this study, the individual's health is defined as physical health. The
concept of physical health may also be defined and measured in many ways. In this case, the programmer
decides to more narrowly define individual health to refer to the areas of weight, percentage of body fat,
and cholesterol. By defining the terms or concepts more narrowly, the scope of the study is more
manageable for the programmer, making it easier to collect the necessary data for the study. This also
makes the concepts more understandable to the reader.
Step 5: Define the Population
Research projects can focus on a specific group of people, facilities, park development, employee
evaluations, programs, financial status, marketing efforts, or the integration of technology into the
operations. For example, if a researcher wants to examine a specific group of people in the community, the
study could examine a specific age group, males or females, people living in a specific geographic area, or
a specific ethnic group. Literally thousands of options are available to the researcher to specifically identify
the group to study. The research problem and the purpose of the study assist the researcher in identifying
the group to involve in the study. In research terms, the group to involve in the study is always called the
population. Defining the population assists the researcher in several ways. First, it narrows the scope of the
study from a very large population to one that is manageable. Second, the population identifies the group
that the researcher's efforts will be focused on within the study. This helps ensure that the researcher stays
on the right path during the study. Finally, by defining the population, the researcher identifies the group
that the results will apply to at the conclusion of the study. In the example in table 2.4, the programmer has
identified the population of the study as children ages 10 to 12 years. This narrower population makes the
study more manageable in terms of time and resources.
Step 6: Develop the Instrumentation Plan
The plan for the study is referred to as the instrumentation plan. The instrumentation plan serves as the
road map for the entire study, specifying who will participate in the study; how, when, and where data will
be collected; and the content of the program. This plan is composed of numerous decisions and
considerations that are addressed in chapter 8 of this text. In the obesity study, the researcher has decided
to have the children participate in a walking program for six months. The group of participants is called
the sample, which is a smaller group selected from the population specified for the study. The study cannot
possibly include every 10- to 12-year-old child in the community, so a smaller group is used to represent
the population. The researcher develops the plan for the walking program, indicating what data will be
collected, when and how the data will be collected, who will collect the data, and how the data will be
analyzed. The instrumentation plan specifies all the steps that must be completed for the study. This ensures
that the programmer has carefully thought through all these decisions and that she provides a step-by-step
plan to be followed in the study.

Step 7: Collect Data
Once the instrumentation plan is completed, the actual study begins with the collection of data. The
collection of data is a critical step in providing the information needed to answer the research question.
Every study includes the collection of some type of data, whether it is from the literature or from
subjects, to answer the research question. Data can be collected in the form of words on a survey, with a
questionnaire, through observations, or from the literature. In the obesity study, the programmers will be
collecting data on the defined variables: weight, percentage of body fat, cholesterol levels, and the number
of days the person walked a total of 10,000 steps during the class.
The researcher collects these data at the first session and at the last session of the program. These two sets
of data are necessary to determine the effect of the walking program on weight, body fat, and cholesterol
level. Once the data are collected on the variables, the researcher is ready to move to the final step of the
process, which is the data analysis.
Step 8: Analyze the Data
All the time, effort, and resources dedicated to steps 1 through 7 of the research process culminate in this
final step. The researcher finally has data to analyze so that the research question can be answered. In the
instrumentation plan, the researcher specified how the data will be analyzed. The researcher now analyzes
the data according to the plan. The results of this analysis are then reviewed and summarized in a manner
directly related to the research questions. In the obesity study, the researcher compares the measurements
of weight, percentage of body fat, and cholesterol that were taken at the first meeting of the subjects to the
measurements of the same variables at the final program session. These two sets of data will be analyzed
to determine if there was a difference between the first measurement and the second measurement for each
individual in the program. Then, the data will be analyzed to determine if the differences are statistically
significant. If the differences are statistically significant, the study validates the theory that was the focus
of the study. The results of the study also provide valuable information about one strategy to combat
childhood obesity in the community.
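The pre/post comparison in this step can be sketched with one of the variables, say weight (the figures below are hypothetical). A real analysis would repeat this for body fat and cholesterol, and compare the t statistic with a critical value for n - 1 degrees of freedom, or use a statistics package to obtain a p-value:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical weights (kg) for five children, before and after the program
before = [58.0, 61.5, 55.0, 63.0, 59.5]
after = [56.5, 60.0, 55.5, 60.5, 58.0]

# Per-child differences: positive means weight fell over the program
diffs = [b - a for b, a in zip(before, after)]

# Paired t statistic: mean difference over its standard error
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

print(round(mean(diffs), 2))  # average loss of 1.3 kg
print(round(t, 2))            # t = 2.65 with 4 degrees of freedom
```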

As you have probably concluded, conducting studies using the eight steps of the scientific research process
requires you to dedicate time and effort to the planning process. You cannot conduct a study using the scientific
research process when time is limited or the study is done at the last minute. Researchers who do this conduct
studies that result in either false conclusions or conclusions that are not of any value to the organization.

Modes of Classification
There are four types of classification, viz., (i) qualitative; (ii) quantitative; (iii) temporal and (iv) spatial.
(i) Qualitative classification: It is done according to attributes or non-measurable characteristics; like social
status, sex, nationality, occupation, etc. For example, the population of the whole country can be classified
into four categories as married, unmarried, widowed and divorced. When only one attribute, e.g., sex, is
used for classification, it is called simple classification. When more than one attribute, e.g., deafness, sex
and religion, is used for classification, it is called manifold classification.
(ii) Quantitative classification: It is done according to numerical size like weights in kg or heights in cm.
Here we classify the data by assigning arbitrary limits known as class-limits.
The quantitative phenomenon under study is called a variable. For example, the population of the whole
country may be classified according to different variables like age, income, wage, price, etc. Hence this
classification is often called classification by variables.

(a) Variable: A variable in statistics means any measurable characteristic or quantity which can assume a
range of numerical values within certain limits, e.g., income, height, age, weight, wage, price, etc. A
variable can be classified as either discrete or continuous.
(1) Discrete variable: A variable which can take up only exact values and not any fractional values is
called a discrete variable. Number of workmen in a factory, members of a family, students in a class,
number of births in a certain year, number of telephone calls in a month, etc., are examples of discrete
variables.
(2) Continuous variable: A variable which can take up any numerical value (integral/fractional) within a
certain range is called a continuous variable. Height, weight, rainfall, time, temperature, etc., are examples
of continuous variables. Age of students in a school is a continuous variable as it can be measured to the
nearest fraction of time, i.e., years, months, days, etc.
(iii) Temporal classification: It is done according to time, e.g., index numbers arranged over a period of
time, population of a country for several decades, exports and imports of India for different five-year plans,
etc.
(iv) Spatial classification: It is done with respect to space or places, e.g., production of cereals in quintals in
various states, population of a country according to states, etc.
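Quantitative classification by arbitrary class limits can be sketched as follows (hypothetical incomes; the intervals echo the 'less than 10,000 / 10,000 but less than 20,000' style used in the questionnaire section earlier):

```python
# Hypothetical annual incomes to be classified
incomes = [8500, 12000, 19999, 25000, 9999, 15000]

# Arbitrary class limits: lower limit inclusive, upper limit exclusive
class_limits = [(0, 10000), (10000, 20000), (20000, 30000)]

counts = {limits: 0 for limits in class_limits}
for x in incomes:
    for lo, hi in class_limits:
        if lo <= x < hi:
            counts[(lo, hi)] += 1
            break

for (lo, hi), n in counts.items():
    print(f"{lo:>6} but less than {hi:>6}: {n}")
```

The inclusive/exclusive convention for the limits must be stated, otherwise a value such as 10,000 could be counted in two classes.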

Presentation of statistical data

Statistical data can be presented in three different ways: (a) Textual presentation, (b) Tabular presentation,
and (c) Graphical presentation.
(a) Textual presentation: This is a descriptive form. The following is an example of such a presentation of
data about deaths from industrial diseases in Great Britain in 1935-39 and 1940-44.
Example 1.3. Numerical data with regard to industrial diseases and deaths therefrom in Great Britain
during the years 1935-39 and 1940-44 are given in a descriptive form:
During the quinquennium 1935-39, there were in Great Britain 1,775 cases of industrial diseases made up
of 677 cases of lead poisoning, 111 of other poisoning, 144 of anthrax, and 843 of gassing. The number of
deaths reported was 20 p.c. of the cases for all the four diseases taken together; that for lead poisoning was
135, for other poisoning 25 and that for anthrax was 30. During the next quinquennium, 1940-44, the total
number of cases reported was 2,807. But lead poisoning cases reported fell by 351 and anthrax cases by 35.
Other poisoning cases increased by 784 between the two periods. The number of deaths reported decreased
by 45 for lead poisoning, but decreased only by 2 for anthrax from the pre-war to the post-war
quinquennium. In the later period, 52 deaths were reported for poisoning other than lead poisoning. The total
number of deaths reported in 1940-44 including those from gassing was 64 greater than in 1935-39.
The disadvantages of textual presentation are: (i) it is too lengthy; (ii) there is repetition of words; (iii)
comparisons cannot be made easily; (iv) it is difficult to get an idea and take appropriate action.
(b) Tabular presentation, or, Tabulation
Tabulation may be defined as the systematic presentation of numerical data in rows or/and columns
according to certain characteristics. It expresses the data in concise and attractive form which can be easily
understood and used to compare numerical figures. Before drafting a table, you should be sure what you
want to show and who will be the reader.
The advantages of a tabular presentation over the textual presentation are: (i) it is concise;
(ii) There is no repetition of explanatory matter; (iii) comparisons can be made easily; (iv) the important
features can be highlighted; and (v) errors in the data can be detected.
An ideal statistical table should contain the following items:
(i) Table number: A number must be allotted to the table for identification, particularly when there are
many tables in a study.
(ii) Title: The title should explain what is contained in the table. It should be clear, brief and set in bold type
on top of the table. It should also indicate the time and place to which the data refer.
(iii) Date: The date of preparation of the table should be given.
(iv) Stubs, or, Row designations: Each row of the table should be given a brief heading. Such designations
of rows are called stubs, or, stub items and the entire column is called stub column.
(v) Column headings, or, Captions: Column designation is given on top of each column to explain to what
the figures in the column refer. It should be clear and precise. This is called a caption, or, heading.
Columns should be numbered if there are four or more columns.
(vi) Body of the table: The data should be arranged in such a way that any figure can be located easily.
Various types of numerical variables should be arranged in an ascending order, i.e., from left to right in rows
and from top to bottom in columns. Column and row totals should be given.
(vii) Unit of measurement: If the unit of measurement is uniform throughout the table, it is stated at the top
right-hand corner of the table along with the title. If different rows and columns contain figures in different
units, the units may be stated along with stubs, or, captions. Very large figures may be rounded up but
the method of rounding should be explained.
(viii) Source: At the bottom of the table a note should be added indicating the primary and secondary
sources from which data have been collected.
(ix) Footnotes and references: If any item has not been explained properly, a separate explanatory note
should be added at the bottom of the table.
A table should be logical, well-balanced in length and breadth and the comparable columns should be placed
side by side. Light/heavy/thick or double rulings may be used to distinguish sub columns, main columns and
totals. For large data more than one table may be used.
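As an illustration of why tabulation aids comparison, the 1935-39 figures from Example 1.3 can be rebuilt into a table. The sketch below derives the gassing deaths from the stated 20 per cent figure, as the text implies:

```python
# Cases and deaths for 1935-39, taken from Example 1.3
cases = {"Lead poisoning": 677, "Other poisoning": 111,
         "Anthrax": 144, "Gassing": 843}
deaths = {"Lead poisoning": 135, "Other poisoning": 25, "Anthrax": 30}

# Total deaths were 20 p.c. of all cases; gassing deaths are the remainder
total_deaths = round(0.20 * sum(cases.values()))
deaths["Gassing"] = total_deaths - sum(deaths.values())

print(f"{'Disease':<18}{'Cases':>8}{'Deaths':>8}")
for disease, n in cases.items():
    print(f"{disease:<18}{n:>8}{deaths[disease]:>8}")
print(f"{'Total':<18}{sum(cases.values()):>8}{total_deaths:>8}")
```

The same table could then be extended with a second pair of columns for 1940-44, making the comparisons that the textual form obscures immediately visible.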


Writing the research report

Research report is considered a major component of the research study, for the research task remains
incomplete till the report has been presented and/or written. As a matter of fact, even the most brilliant
hypothesis, the most carefully designed and conducted research study, and the most striking generalizations
and findings are of little value unless they are effectively communicated to others.
The purpose of research is not well served unless the findings are made known to others. Research results
must invariably enter the general store of knowledge. All this explains the significance of writing research
report. There are people who do not consider writing of report as an integral part of the research process. But
the general opinion is in favour of treating the presentation of research results or the writing of report as part
and parcel of the research project. Writing of report is the last step in a research study and requires a set of
skills somewhat different from those called for in respect of the earlier stages of research. This task should
be accomplished by the researcher with utmost care; he may seek the assistance and guidance of experts for
the purpose.
Different Steps in Writing Report
Research reports are the product of slow, painstaking, accurate inductive work. The usual steps involved in
writing a report are: (a) logical analysis of the subject-matter; (b) preparation of the final outline;
(c) preparation of the rough draft; (d) rewriting and polishing; (e) preparation of the final bibliography; and
(f) writing the final draft. Though all these steps are self-explanatory, yet a brief mention of each one of
these will be appropriate for better understanding.
Logical analysis of the subject matter: It is the first step, which is primarily concerned with the development
of a subject. There are two ways in which to develop a subject: (a) logically and
(b) chronologically. The logical development is made on the basis of mental connections and associations
between one thing and another by means of analysis. Logical treatment often consists in developing the
between the one thing and another by means of analysis. Logical treatment often consists in developing the
material from the simplest possible to the most complex structures. Chronological development is based on a
connection or sequence in time or occurrence. The directions for doing or making something usually follow
the chronological order.
Preparation of the final outline: It is the next step in writing the research report. Outlines are the framework
upon which long written works are constructed. They are an aid to the logical organization of the material
and a reminder of the points to be stressed in the report.
Preparation of the rough draft: This follows the logical analysis of the subject and the preparation of the
final outline. Such a step is of utmost importance for the researcher now sits to write down what he has done
in the context of his research study. He will write down the procedure adopted by him in collecting the
material for his study along with various limitations faced by him, the technique of analysis adopted by him,
the broad findings and generalizations and the various suggestions he wants to offer regarding the problem concerned.
Rewriting and polishing of the rough draft: This step happens to be the most difficult part of all formal writing.
Usually this step requires more time than the writing of the rough draft. Careful revision makes the
difference between a mediocre and a good piece of writing. While rewriting and polishing, one should check
the report for weaknesses in logical development or presentation. The researcher should also see whether or
not the material, as it is presented, has unity and cohesion; does the report stand upright and firm and exhibit
a definite pattern, like a marble arch? Or does it resemble an old wall of moldering cement and loose brick?
In addition, the researcher should check whether he has been consistent throughout his rough draft.
He should check the mechanics of writing: grammar, spelling and usage.
Preparation of the final bibliography: Next in order comes the task of the preparation of the final
bibliography. The bibliography, which is generally appended to the research report, is a list of books in some
way pertinent to the research which has been done. It should contain all those works which the researcher
has consulted. The bibliography should be arranged alphabetically and may be divided into two parts; the
first part may contain the names of books and pamphlets, and the second part may contain the names of
magazine and newspaper articles. Generally, this pattern of bibliography is considered convenient and
satisfactory from the point of view of reader, though it is not the only way of presenting bibliography. The
entries in bibliography should be made adopting the following order:
For books and pamphlets the order may be as under:
1. Name of author, last name first.
2. Title, underlined to indicate italics.
3. Place, publisher, and date of publication.
4. Number of volumes.
Example: Kothari, C.R., Quantitative Techniques, New Delhi, Vikas Publishing House Pvt. Ltd., 1978.
For magazines and newspapers the order may be as under:
1. Name of the author, last name first.
2. Title of article, in quotation marks.
3. Name of periodical, underlined to indicate italics.
4. The volume or volume and number.
5. The date of the issue.
6. The pagination.
Example: Robert V. Roosa, "Coping with Short-term International Money Flows", The Banker, London,
September 1971, p. 995.
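The two entry orders above amount to simple field-assembly rules, which can be sketched as string-building functions in Python. The function and parameter names are illustrative assumptions, and underscores stand in for the underlining that indicates italics, since plain text cannot show them:

```python
# Sketch: assembling bibliography entries in the two orders described above.
# Underscores stand in for italics (underlining); field names are illustrative.

def book_entry(author_last_first, title, place, publisher, year):
    # 1. author (last name first); 2. title (italic);
    # 3. place, publisher, and date of publication
    return f"{author_last_first}, _{title}_, {place}, {publisher}, {year}."

def article_entry(author_last_first, article_title, periodical, volume, date, pages):
    # 1. author (last name first); 2. article title in quotation marks;
    # 3. periodical (italic); 4. volume/number; 5. date of issue; 6. pagination
    return (f'{author_last_first}, "{article_title}", _{periodical}_, '
            f"{volume}, {date}, {pages}.")

print(book_entry("Kothari, C.R.", "Quantitative Techniques",
                 "New Delhi", "Vikas Publishing House Pvt. Ltd.", 1978))
```

Whatever concrete format is chosen, the point of such a rule is exactly the one the text makes next: once selected, it must be applied consistently to every entry.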

The above examples are just samples of bibliography entries and may be used, but one should also
remember that they are not the only acceptable forms. The only important thing is that, whatever method
one selects, it must remain consistent.
Writing the final draft: This constitutes the last step. The final draft should be written in a concise and
objective style and in simple language, avoiding vague expressions such as "it seems", "there may be", and
the like. While writing the final draft, the researcher must avoid abstract terminology and technical
jargon. Illustrations and examples based on common experiences must be incorporated in the final draft as
they happen to be most effective in communicating the research findings to others. A research report should
not be dull, but must enthuse people and maintain interest and must show originality. It must be remembered
that every report should be an attempt to solve some intellectual problem and must contribute to the solution
of a problem and must add to the knowledge of both the researcher and the reader.
Layout of the Research Report
Anybody reading the research report must be told enough about the study so that he can place it in its
general scientific context, judge the adequacy of its methods and thus form an opinion of how seriously the
findings are to be taken. For this purpose there is need of a proper layout of the report. The layout of the
report refers to what the research report should contain. A comprehensive layout
of the research report should comprise (A) preliminary pages; (B) the main text; and (C) the end matter. Let
us deal with them separately.
(A) Preliminary Pages
In its preliminary pages the report should carry a title and date, followed by acknowledgements in the form
of a 'Preface' or 'Foreword'. Then there should be a table of contents followed by a list of tables and
illustrations so that the decision-maker or anybody interested in reading the report can easily locate the
required information in the report.
(B) Main Text
The main text provides the complete outline of the research report along with all details. Title of the research
study is repeated at the top of the first page of the main text and then follows the other details on pages
numbered consecutively, beginning with the second page. Each main section of the report should begin on a
new page. The main text of the report should have the following sections:
(i) Introduction; (ii) Statement of findings and recommendations; (iii) The results; (iv) The implications
drawn from the results; and (v) The summary.
(i) Introduction: The purpose of introduction is to introduce the research project to the readers. It should
contain a clear statement of the objectives of research i.e., enough background should be given to make clear
to the reader why the problem was considered worth investigating. A brief summary of other relevant
research may also be stated so that the present study can be seen in that context. The hypotheses of study, if
any, and the definitions of the major concepts employed in the study should be explicitly stated in the
introduction of the report.
The methodology adopted in conducting the study must be fully explained. The scientific reader would like
to know in detail about such things as: How was the study carried out? What was its basic design? If the study
was an experimental one, then what were the experimental manipulations? If the data were collected by
means of questionnaires or interviews, then exactly what questions were asked (The questionnaire or
interview schedule is usually given in an appendix)? If measurements were based on observation, then what
instructions were given to the observers? Regarding the sample used in the study the reader should be told:
Who were the subjects? How many were there? How were they selected? All these questions are crucial for
estimating the probable limits of generalizability of the findings. The statistical analysis adopted must also
be clearly stated. In addition to all this, the scope of the study should be stated and the boundary lines be
demarcated. The various limitations, under which the research project was completed, must also be narrated.
(ii) Statement of findings and recommendations: After introduction, the research report must contain a
statement of findings and recommendations in non-technical language so that it can be easily understood by
all concerned. If the findings happen to be extensive, at this point they should be put in the summarized form.
(iii) Results: A detailed presentation of the findings of the study, with supporting data in the form of tables
and charts together with a validation of results, is the next step in writing the main text of the report. This
generally comprises the main body of the report, extending over several chapters. The result section of the
report should contain statistical summaries and reductions of the data rather than the raw data. All the results
should be presented in logical sequence and split into readily identifiable sections. All relevant results
must find a place in the report. But how one is to decide about what is relevant is the basic question. Quite
often guidance comes primarily from the research problem and from the hypotheses, if any, with which the
study was concerned. But ultimately the researcher must rely on his own judgment in deciding the outline of
his report. Nevertheless, it is still necessary that he states clearly the problem with which he was concerned,
the procedure by which he worked on the problem, the conclusions at which he arrived, and the bases for his conclusions.
(iv) Implications of the results: Toward the end of the main text, the researcher should again put down the
results of his research clearly and precisely. He should state the implications that flow from the results of
the study, for the general reader is interested in the implications for understanding human behaviour.
Such implications may have three aspects as stated below:
(a) A statement of the inferences drawn from the present study which may be expected to apply in similar circumstances.
(b) The conditions of the present study which may limit the extent of legitimate generalizations of the
inferences drawn from the study.
(c) The relevant questions that still remain unanswered or new questions raised by the study along with
suggestions for the kind of research that would provide answers for them.
It is considered a good practice to finish the report with a short conclusion which summarizes and
recapitulates the main points of the study. The conclusion drawn from the study should be clearly related to
the hypotheses that were stated in the introductory section. At the same time, a forecast of the probable
future of the subject and an indication of the kind of research which needs to be done in that particular field
is useful and desirable.
(v) Summary: It has become customary to conclude the research report with a very brief summary, restating
in brief the research problem, the methodology, the major findings and the major conclusions drawn from the
research results.
(C) End Matter
At the end of the report, appendices should be given for all technical data such as questionnaires,
sample information, mathematical derivations and the like. A bibliography of sources consulted should
also be given. Index (an alphabetical listing of names, places and topics along with the numbers of the pages
in a book or report on which they are mentioned or discussed) should invariably be given at the end of the
report. The value of the index lies in the fact that it works as a guide to the reader to the contents of the report.
Types of Reports
Research reports vary greatly in length and type. In each individual case, both the length and the form are
largely dictated by the problems at hand. For instance, business firms prefer reports in the letter form, just
one or two pages in length. Banks, insurance organizations and financial institutions are generally fond of
the short balance-sheet type of tabulation for their annual reports to their customers and shareholders.
Mathematicians prefer to write the results of their investigations in the form of algebraic notations. Chemists
report their results in symbols and formulae. Students of literature usually write long reports presenting the
critical analysis of some writer or period or the like with a liberal use of quotations from the works of the
author under discussion. In the field of education and psychology, the favourite form is the report on the
results of experimentation accompanied by the detailed statistical tabulations. Clinical psychologists and
social pathologists frequently find it necessary to make use of the case-history form.
News items in the daily papers are also forms of report writing. They represent first-hand, on-the-scene
accounts of the events described or compilations of interviews with persons who were on the scene. In such
reports the first paragraph usually contains the important information in detail and the succeeding
paragraphs contain material which is progressively less and less important.
Book reviews, which analyze the content of a book and report on the author's intentions, his success or
failure in achieving his aims, his language, his style, scholarship, bias or point of view, also happen to be a
kind of short report. The reports prepared by governmental bureaus,
special commissions, and similar other organizations are generally very comprehensive reports on the issues
involved. Such reports are usually considered as important research products. Similarly, Ph.D. theses and
dissertations are also a form of report-writing, usually completed by students in academic institutions.
The above narration throws light on the fact that the results of a research investigation can be presented in a
number of ways viz., a technical report, a popular report, an article, a monograph or at times even in the
form of oral presentation. Which method(s) of presentation are to be used in a particular study depends on the
circumstances under which the study arose and the nature of the results. A technical report is used whenever
a full written report of the study is required whether for recordkeeping or for public dissemination. A
popular report is used if the research results have policy implications. We give below a few details about the
said two types of reports:
(A) Technical Report
In the technical report the main emphasis is on (i) the methods employed, (ii) assumptions made in the
course of the study, and (iii) the detailed presentation of the findings including their limitations and supporting data.
A general outline of a technical report can be as follows:
1. Summary of results: A brief review of the main findings just in two or three pages.
2. Nature of the study: Description of the general objectives of study, formulation of the problem in
operational terms, the working hypothesis, the type of analysis and data required, etc.
3. Methods employed: Specific methods used in the study and their limitations. For instance, in sampling
studies we should give details of sample design viz., sample size, sample selection, etc.
4. Data: Discussion of data collected, their sources, characteristics and limitations. If secondary data are
used, their suitability to the problem at hand be fully assessed. In case of a survey, the manner in which data
were collected should be fully described.
5. Analysis of data and presentation of findings: The analysis of data and presentation of the findings of the
study with supporting data in the form of tables and charts be fully narrated. This, in fact, happens to be the
main body of the report usually extending over several chapters.
6. Conclusions: A detailed summary of the findings and the policy implications drawn from the results be fully stated.
7. Bibliography: Bibliography of various sources consulted be prepared and attached.
8. Technical appendices: Appendices be given for all technical matters relating to questionnaire,
mathematical derivations, elaboration on particular technique of analysis and the like ones.
9. Index: Index must be prepared and be given invariably in the report at the end.
The order presented above only gives a general idea of the nature of a technical report; the order of
presentation may not necessarily be the same in all the technical reports. This, in other words, means that the
presentation may vary in different reports; even the different sections outlined above will not always be the
same, nor will all these sections appear in any particular report.
It should, however, be remembered that even in a technical report, simple presentation and ready availability
of the findings remain an important consideration and as such the liberal use of charts and diagrams is
considered desirable.
(B) Popular Report
The popular report is one which gives emphasis on simplicity and attractiveness. The simplification should
be sought through clear writing, minimization of technical, particularly mathematical, details and liberal use
of charts and diagrams. An attractive layout along with large print, many subheadings and even an
occasional cartoon is another characteristic feature of the popular report.
Besides, in such a report emphasis is given on practical aspects and policy implications.
We give below a general outline of a popular report.
1. The findings and their implications: Emphasis in the report is given on the findings of most practical
interest and on the implications of these findings.
2. Recommendations for action: Recommendations for action on the basis of the findings of the study are
made in this section of the report.
3. Objective of the study: A general review of how the problem arose is presented along with the specific
objectives of the project under study.
4. Methods employed: A brief and non-technical description of the methods and techniques used, including a
short review of the data on which the study is based, is given in this part of the report.
5. Results: This section constitutes the main body of the report wherein the results of the study are presented
in clear and non-technical terms with liberal use of all sorts of illustrations such as charts, diagrams and the
like ones.
6. Technical appendices: More detailed information on methods used, forms, etc. is presented in the form of
appendices. But the appendices are often not detailed if the report is entirely meant for the general public.
There can be several variations of the form in which a popular report can be prepared. The only important
thing about such a report is that it gives emphasis on simplicity and policy implications from the operational
point of view, avoiding the technical details of all sorts to the extent possible.
At times oral presentation of the results of the study is considered effective, particularly in cases where
policy recommendations are indicated by project results. The merit of this approach lies in the fact that it
provides an opportunity for give-and-take decisions which generally lead to a better understanding of the
findings and their implications. But the main demerit of this sort of presentation is the lack of any permanent
record concerning the research details and it may just be possible that the findings fade away from
people's memory even before an action is taken. In order to overcome this difficulty, a written report may be
circulated before the oral presentation and referred to frequently during the discussion. Oral presentation is
effective when supplemented by various visual devices. Use of slides, wall charts and blackboards is quite
helpful in contributing to clarity and in reducing boredom, if any. Distributing a broad outline, with a
few important tables and charts concerning the research results, keeps the listeners attentive, as they have a
ready outline on which to focus their thinking. This very often happens in academic institutions where the
researcher discusses his research findings and policy implications with others either in a seminar or in a
group discussion.
Thus, research results can be reported in more than one way, but the usual practice adopted, in academic
institutions particularly, is that of writing the Technical Report and then preparing several research papers to
be discussed at various forums in one form or the other. But in practical field and with problems having
policy implications, the technique followed is that of writing a popular report.
Researches done on governmental account or on behalf of some major public or private organizations are
usually presented in the form of technical reports.
Mechanics of Writing a Research Report
There are very definite and set rules which should be followed in the actual preparation of the research
report or paper. Once the techniques are finally decided, they should be scrupulously adhered to, and no
deviation permitted. The criteria of format should be decided as soon as the materials for the research paper
have been assembled. The following points deserve mention so far as the mechanics of writing a report are concerned:
1. Size and physical design: The manuscript should be written on unruled paper 8½″ × 11″ in size. If it is
to be written by hand, then black or blue-black ink should be used. A margin of at least one and one-half
inches should be allowed at the left-hand side and of at least half an inch at the right-hand side of the paper. There
should also be one-inch margins, top and bottom. The paper should be neat and legible. If the manuscript is
to be typed, then all typing should be double-spaced on one side of the page only except for the insertion of
the long quotations.
2. Procedure: Various steps in writing the report should be strictly adhered to (all such steps have already
been explained earlier in this chapter).
3. Layout: Keeping in view the objective and nature of the problem, the layout of the report should be
thought of and decided and accordingly adopted (The layout of the research report and various types of
reports have been described in this chapter earlier which should be taken as a guide for report-writing in case
of a particular problem).
4. Treatment of quotations: Quotations should be placed in quotation marks and double spaced, forming an
immediate part of the text. But if a quotation is of a considerable length (more than four or five type written
lines) then it should be single-spaced and indented at least half an inch to the right of the normal text margin.
5. The footnotes: Regarding footnotes one should keep in view the following:
(a) The footnotes serve two purposes viz., the identification of materials used in quotations in the report and
the notice of materials not immediately necessary to the body of the research text but still of supplemental
value. In other words, footnotes are meant for cross references, citation of authorities and sources,
acknowledgement and elucidation or explanation of a point of view. It should always be kept in view that
a footnote is neither an end in itself nor a means of displaying scholarship. The modern tendency is to make the
minimum use of footnotes, for scholarship does not need to be displayed.
(b) Footnotes are placed at the bottom of the page on which the reference or quotation which they identify or
supplement ends. Footnotes are customarily separated from the textual material by a space of half an inch
and a line about one and a half inches long.
(c) Footnotes should be numbered consecutively, usually beginning with 1 in each chapter separately. The
number should be put slightly above the line, say at the end of a quotation.
At the foot of the page, again, the footnote number should be indented and typed a little above the line.
Thus, consecutive numbers must be used to correlate the reference in the text with its corresponding note at
the bottom of the page, except in case of statistical tables and other numerical material, where symbols such
as the asterisk (*) or the like one may be used to prevent confusion.
(d) Footnotes are always typed in single space though they are divided from one another by double space.
6. Documentation style: Regarding documentation, the first footnote reference to any given work should be
complete in its documentation, giving all the essential facts about the edition used. Such documentary
footnotes follow a general sequence. The common order may be described as under:
(i) Regarding the single-volume reference
1. Author's name in normal order (and not beginning with the last name as in a bibliography) followed by a comma;
2. Title of work, underlined to indicate italics;
3. Place and date of publication;
4. Pagination references (The page number).
Example: John Gassner, Masters of the Drama, New York: Dover Publications, Inc., 1954, p. 315.
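The key difference from a bibliography entry is that a documentary footnote keeps the author's name in normal order and joins place and publisher with a colon. A minimal Python sketch of this convention (the names are illustrative, and underscores again stand in for italics):

```python
# Sketch: a documentary footnote reference keeps the author's name in normal
# order, unlike a bibliography entry, which inverts it. Underscores stand in
# for italics (underlining); field names are illustrative.

def footnote_reference(author_normal_order, title, place, publisher, year, page):
    # name (normal order), title (italic), place: publisher, date, page reference
    return (f"{author_normal_order}, _{title}_, "
            f"{place}: {publisher}, {year}, p. {page}.")

print(footnote_reference("John Gassner", "Masters of the Drama",
                         "New York", "Dover Publications, Inc.", 1954, 315))
```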
(ii) Regarding multivolume reference
1. Author's name in the normal order;
2. Title of work, underlined to indicate italics;
3. Place and date of publication;
4. Number of volume;
5. Pagination references (The page number).
(iii) Regarding works arranged alphabetically
For works arranged alphabetically such as encyclopedias and dictionaries, no pagination reference is usually
needed. In such cases the order is illustrated as under:
Example 1
"Salamanca," Encyclopaedia Britannica, 14th Edition.
Example 2
"Mary Wollstonecraft Godwin," Dictionary of National Biography.
But if there should be a detailed reference to a long encyclopedia article, volume and pagination reference
may be found necessary.
(iv) Regarding periodicals reference
1. Name of the author in normal order;
2. Title of article, in quotation marks;
3. Name of periodical, underlined to indicate italics;
4. Volume number;
5. Date of issuance;
6. Pagination.
(v) Regarding anthologies and collections reference
Quotations from anthologies or collections of literary works must be acknowledged not only by author, but
also by the name of the collector.
(vi) Regarding second-hand quotations reference
In such cases the documentation should be handled as follows:
1. Original author and title;
2. "quoted in" or "cited in";
3. Second author and work.
Example: J.F. Jones, Life in Polynesia, p. 16, quoted in History of the Pacific Ocean Area, by R.B. Abel,
p. 191.
(vii) Case of multiple authorship
If there are more than two authors or editors, then in the documentation the name of only the first is given
and the multiple authorship is indicated by "et al." or "and others".
Subsequent references to the same work need not be as detailed as stated above. If the work is cited again
without any other work intervening, it may be indicated as ibid., followed by a comma and the page number.
A single page should be referred to as p., but more than one page as pp. If several pages are referred to at a
stretch, the practice is often to add ff. to the first page number, for example, pp. 190ff., which means page
190 and the following pages; for page 190 and the following page only, 190f. is used.
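These page-reference conventions can be captured in a tiny helper, sketched here in Python. Whether the "pp." prefix accompanies "f."/"ff." varies between style guides, so the exact output below is an assumption for illustration:

```python
# Sketch of the footnote pagination conventions described above.

def page_reference(start, end=None):
    """Format a footnote page reference.

    end=None       -> single page:               p. 190
    end == start+1 -> page and the next page:    pp. 190f.
    end == "ff"    -> page and following pages:  pp. 190ff.
    otherwise      -> explicit range:            pp. 190-195
    """
    if end is None:
        return f"p. {start}"
    if end == "ff":
        return f"pp. {start}ff."
    if end == start + 1:
        return f"pp. {start}f."
    return f"pp. {start}-{end}"

print(page_reference(190))         # single page
print(page_reference(190, 191))    # the following page
print(page_reference(190, "ff"))   # the following pages
```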
Roman numerals are generally used to indicate the number of the volume of a book. Op. cit. (opera citato, in
the work cited) or Loc. cit. (loco citato, in the place cited) are two of the very convenient abbreviations used
in the footnotes. Op. cit. or Loc. cit. after the writer's name would suggest that the reference is to a work by
the writer which has been cited in detail in an earlier footnote, with some other references intervening.
7. Punctuation and abbreviations in footnotes: The first item after the number in the footnote is the author's
name, given in the normal signature order. This is followed by a comma. After the comma, the title of the
book is given: the initial article (such as "A", "An", "The") is omitted and only the first word and proper nouns
and adjectives are capitalized. The title is followed by a comma.
Information concerning the edition is given next. This entry is followed by a comma. The place of
publication is then stated; it may be mentioned in an abbreviated form, if the place happens to be a famous
one such as Lond. for London, N.Y. for New York, N.D. for New Delhi and so on. This entry is followed by
a comma. Then the name of the publisher is mentioned and this entry is closed by a comma. It is followed
by the date of publication if the date is given on the title page. If the date appears in the copyright notice on
the reverse side of the title page or elsewhere in the volume, the comma should be omitted and the date
enclosed in square brackets [c 1978], [1978]. The entry is followed by a comma. Then follow the volume
and page references and are separated by a comma if both are given. A period closes the complete
documentary reference. But one should remember that the documentation regarding acknowledgements
from magazine articles and periodical literature follow a different form as stated earlier while explaining the
entries in the bibliography.
Certain English and Latin abbreviations are quite often used in bibliographies and footnotes to eliminate
tedious repetition. The following is a partial list of the most common abbreviations frequently used in
report-writing (the researcher should learn to recognize them as well as he should learn to use them):
anon., anonymous
ante., before
art., article
aug., augmented
bk., book
bull., bulletin
cf., compare
ch., chapter
col., column
diss., dissertation
ed., editor, edition, edited.
ed. cit., edition cited
e.g., exempli gratia: for example
enl., enlarged
et al., and others
et seq., et sequens: and the following
ex., example
f., ff., and the following
fig(s)., figure(s)
fn., footnote
ibid., ibidem: in the same place (when two or more successive footnotes refer to the same work, it is not
necessary to repeat the complete reference for the second footnote; ibid. may be used. If different pages
are referred to, pagination must be shown).
id., idem: the same
ill., illus., or illust(s)., illustrated, illustration(s)
Intro., intro., introduction
l. or ll., line(s)
loc. cit., loco citato: in the place cited (used like op. cit. when the new reference is to the same
pagination as cited in the previous note)
MS., MSS., Manuscript or Manuscripts
N.B., nota bene: note well
n.d., no date
n.p., no place
no pub., no publisher
no(s)., number(s)
o.p., out of print
op. cit., opera citato: in the work cited (if reference has been made to a work and a new reference is to be
made, ibid. may be used; if intervening reference has been made to different works, op. cit. must be
used. The name of the author must precede it.)
p. or pp., page(s)
passim: here and there
post: after
rev., revised
tr., trans., translator, translated, translation
vid. or vide: see, refer to
viz., namely
vol. or vol(s)., volume(s)
vs., versus: against
8. Use of statistics, charts and graphs: A judicious use of statistics in research reports is often considered a
virtue for it contributes a great deal towards the clarification and simplification of the material and research
results. One may well remember that a good picture is often worth more than a thousand words. Statistics
are usually presented in the form of tables, charts, bars and line-graphs and pictograms. Such presentation
should be self-explanatory and complete in itself. It should be suitable and appropriate to the
problem at hand. Finally, statistical presentation should be neat and attractive.
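The advice above (a self-explanatory, complete table) can be sketched in a few lines of Python. The survey scores and group names below are purely illustrative, not drawn from any study in this text:

```python
from statistics import mean, stdev

# Hypothetical scores for three groups (illustrative data only).
results = {
    "Variant A": [72, 85, 78, 90, 66],
    "Variant B": [81, 79, 88, 94, 73],
    "Variant C": [64, 70, 69, 75, 71],
}

# A self-explanatory summary table: a header row, then the group name,
# sample size, mean and standard deviation, aligned in fixed-width columns.
print(f"{'Group':<10} {'n':>3} {'Mean':>8} {'Std. dev.':>10}")
for group, scores in results.items():
    print(f"{group:<10} {len(scores):>3} {mean(scores):>8.1f} {stdev(scores):>10.1f}")
```

In a real report, each such table would also carry a title, the units of measurement, and the source of the data, so that it remains complete in itself.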
9. The final draft: Revising and rewriting the rough draft of the report should be done with great care before
writing the final draft. For the purpose, the researcher should put to himself questions like: Are the sentences
written in the report clear? Are they grammatically correct? Do they say what is meant? Do the various
points incorporated in the report fit together logically? Having at least one colleague read the report just
before the final revision is extremely helpful. Sentences that seem crystal-clear to the writer may prove quite
confusing to other people; a connection that had seemed self-evident may strike others as a non sequitur. A
friendly critic, by pointing out passages that seem unclear or illogical, and perhaps suggesting ways of
remedying the difficulties, can be an invaluable aid in achieving the goal of adequate communication.6
10. Bibliography: Bibliography should be prepared and appended to the research report as discussed earlier.
11. Preparation of the index: At the end of the report, an index should invariably be given; its value
lies in the fact that it acts as a good guide to the reader. The index may be prepared both as a subject index
and as an author index. The former gives the names of the subject-topics or concepts along with the numbers of the
pages on which they appear or are discussed in the report, whereas the latter gives similar
information for the names of authors. The index should always be arranged alphabetically. Some
people prefer to prepare a single index covering names of authors, subject-topics, concepts and the like.
Precautions for writing research reports
A research report is a channel for communicating the research findings to the readers of the report. A good
research report is one which does this task efficiently and effectively. As such it must be prepared keeping
the following precautions in view:
1. While determining the length of the report (since research reports vary greatly in length), one should keep
in view the fact that it should be long enough to cover the subject but short enough to maintain interest. In
fact, report-writing should not be a means to learning more and more about less and less.
2. A research report should not, if this can be avoided, be dull; it should be such as to sustain the reader's interest.
3. Abstract terminology and technical jargon should be avoided in a research report. The report should be
able to convey the matter as simply as possible. This, in other words, means that report should be written in
an objective style in simple language, avoiding expressions such as it seems, there may be and the like.
4. Readers are often interested in acquiring a quick knowledge of the main findings and as such the report
must provide a ready availability of the findings. For this purpose, charts, graphs and the statistical tables
may be used for the various results in the main report in addition to the summary of important findings.
5. The layout of the report should be well thought out and must be appropriate and in accordance with the
objective of the research problem.
6. The reports should be free from grammatical mistakes and must be prepared strictly in accordance with
the techniques of composition of report-writing such as the use of quotations, footnotes, documentation,
proper punctuation and use of abbreviations in footnotes and the like.
7. The report must present the logical analysis of the subject matter. It must reflect a structure wherein the
different pieces of analysis relating to the research problem fit well.
8. A research report should show originality and should necessarily be an attempt to solve some intellectual
problem. It must contribute to the solution of a problem and must add to the store of knowledge.
9. Towards the end, the report must also state the policy implications relating to the problem under
consideration. It is usually considered desirable if the report makes a forecast of the probable future of the
subject concerned and indicates the kinds of research that still need to be done in that particular field.
10. Appendices should be provided for all the technical data in the report.
11. Bibliography of sources consulted is a must for a good report and must necessarily be given.
12. Index is also considered an essential part of a good report and as such must be prepared and appended at
the end.
13. Report must be attractive in appearance, neat and clean, whether typed or printed.
14. Calculated confidence limits must be mentioned and the various constraints experienced in conducting
the research study may also be stated in the report.
15. Objective of the study, the nature of the problem, the methods employed and the analysis techniques
adopted must all be clearly stated in the beginning of the report in the form of introduction.
In spite of all that has been stated above, one should always keep in view the fact that report-writing is an art
which is learnt by practice and experience, rather than by mere indoctrination.
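Precaution 14 above asks that calculated confidence limits be reported. As a minimal sketch of what that calculation looks like, the following computes 95% confidence limits for a sample mean using the normal approximation (z = 1.96); the measurements are hypothetical, not taken from this text:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical measurements (illustrative data only).
sample = [23.1, 19.8, 21.4, 22.0, 20.6, 24.2, 21.9, 20.1]

m = mean(sample)
se = stdev(sample) / sqrt(len(sample))        # standard error of the mean
lower, upper = m - 1.96 * se, m + 1.96 * se   # 95% limits, normal approximation

print(f"Mean = {m:.2f}; 95% confidence limits = ({lower:.2f}, {upper:.2f})")
```

For small samples such as this one, a t-distribution multiplier would be more appropriate than 1.96; the normal approximation is used here only to keep the sketch short.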
QUALITATIVE VERSUS QUANTITATIVE RESEARCH

Purpose
Qualitative: To understand & interpret social interactions.
Quantitative: To test hypotheses, look at cause & effect, & make predictions.

Group Studied
Qualitative: Smaller & not randomly selected.
Quantitative: Larger & randomly selected.

Variables
Qualitative: Study of the whole, not variables.
Quantitative: Specific variables studied.

Type of Data Collected
Qualitative: Words, images, or objects.
Quantitative: Numbers and statistics.

Form of Data Collected
Qualitative: Qualitative data such as open-ended responses, interviews, participant observations, field notes, & reflections.
Quantitative: Quantitative data based on precise measurements using structured & validated data-collection instruments.

Type of Data Analysis
Qualitative: Identify patterns, features, themes.
Quantitative: Identify statistical relationships.

Objectivity and Subjectivity
Qualitative: Subjectivity is expected.
Quantitative: Objectivity is critical.

Role of Researcher
Qualitative: Researcher & their biases may be known to participants in the study, & participant characteristics may be known to the researcher.
Quantitative: Researcher & their biases are not known to participants in the study, & participant characteristics are deliberately hidden from the researcher (double-blind studies).

Results
Qualitative: Particular or specialized findings that are less generalizable.
Quantitative: Generalizable findings that can be applied to other populations.

Scientific Method
Qualitative: Exploratory or bottom-up: the researcher generates a new hypothesis and theory from the data collected.
Quantitative: Confirmatory or top-down: the researcher tests the hypothesis and theory with the data.

View of Human Behavior
Qualitative: Dynamic, situational, social, & personal.
Quantitative: Regular & predictable.

Most Common Research Objectives
Qualitative: Explore, discover, & construct.
Quantitative: Describe, explain, & predict.

Focus
Qualitative: Wide-angle lens; examines the breadth & depth of phenomena.
Quantitative: Narrow-angle lens; tests a specific hypothesis.

Nature of Observation
Qualitative: Study behavior in a natural environment.
Quantitative: Study behavior under controlled conditions; isolate causal effects.

Nature of Reality
Qualitative: Multiple realities; subjective.
Quantitative: Single reality; objective.

Final Report
Qualitative: Narrative report with contextual description & direct quotations from research participants.
Quantitative: Statistical report with correlations, comparisons of means, & statistical significance of findings.