
ASSIGNMENTS

MB 0034
RESEARCH METHODOLOGY
(3 credits)
Set I
Marks 60
1. Explain the different types of research.

Although any typology of research is inevitably arbitrary, research may be classified broadly
according to its major intent or its methods. According to intent, research may be classified
as:

Pure Research

It is undertaken for the sake of knowledge without any intention to apply it in practice, e.g.,
Einstein's theory of relativity, Newton's contributions, Galileo's contributions, etc. It is also
known as basic or fundamental research. It is undertaken out of intellectual curiosity or
inquisitiveness. It is not necessarily problem-oriented. It aims at the extension of knowledge. It
may lead either to the discovery of a new theory or to the refinement of an existing theory. It lays
the foundation for applied research. It offers solutions to many practical problems. It helps to find
the critical factors in a practical problem. It develops many alternative solutions and thus enables
us to choose the best solution.

Applied Research

It is carried out to find a solution to a real-life problem requiring an action or policy decision. It is
thus problem-oriented and action-directed. It seeks an immediate and practical result, e.g.,
marketing research carried out for developing a new market or for studying the post-purchase
experience of customers. Though the immediate purpose of applied research is to find
solutions to a practical problem, it may incidentally contribute to the development of theoretical
knowledge by leading to the discovery of new facts, the testing of theory, or conceptual
clarification. It can put theory to the test. It may aid in conceptual clarification. It may integrate
previously existing theories.

Exploratory Research

It is also known as formulative research. It is a preliminary study of an unfamiliar problem about
which the researcher has little or no knowledge. It is ill-structured and much less focused on
pre-determined objectives. It usually takes the form of a pilot study. The purpose of this research
may be to generate new ideas, to increase the researcher's familiarity with the problem, to
make a precise formulation of the problem, to gather information for clarifying concepts, or to
determine whether it is feasible to attempt the study. Katz conceptualizes two levels of
exploratory studies: "At the first level is the discovery of the significant variable in the situations;
at the second, the discovery of relationships between variables."

Descriptive Study
It is a fact-finding investigation with adequate interpretation. It is the simplest type of research. It
is more specific than an exploratory research. It aims at identifying the various characteristics of
a community or institution or problem under study and also aims at a classification of the range
of elements comprising the subject matter of study. It contributes to the development of a young
science and useful in verifying focal concepts through empirical observation. It can highlight
important methodological aspects of data collection and interpretation. The information obtained
may be useful for prediction about areas of social life outside the boundaries of the research.
They are valuable in providing facts needed for planning social action program.

Diagnostic Study

It is similar to a descriptive study but with a different focus. It is directed towards discovering
what is happening, why it is happening and what can be done about it. It aims at identifying the
causes of a problem and the possible solutions for it. It may also be concerned with discovering
and testing whether certain variables are associated. This type of research requires prior
knowledge of the problem, its thorough formulation, clear-cut definition of the given population,
adequate methods for collecting accurate information, precise measurement of variables,
statistical analysis and tests of significance.

Evaluation Studies

It is a type of applied research. It is undertaken to assess the effectiveness of social or economic
programmes implemented, or to assess the impact of developmental projects on the
development of the project area. It is thus directed to assess or appraise the quality and quantity
of an activity and its performance, and to specify its attributes and the conditions required for its
success. It is concerned with causal relationships and is more actively guided by hypotheses. It is
concerned also with change over time.

Action Research

It is a type of evaluation study. It is a concurrent evaluation study of an action programme
launched for solving a problem or for improving an existing situation. It includes six major steps:
diagnosis, sharing of diagnostic information, planning, developing a change programme, initiation
of organizational change, implementation of participation and communication processes, and
post-experimental evaluation.

According to the methods of study, research may be classified as:

1. Experimental Research: It is designed to assess the effects of particular variables on a
phenomenon by keeping the other variables constant or controlled. It aims at determining
whether and in what manner variables are related to each other.
2. Analytical Study: It is a system of procedures and techniques of analysis applied to
quantitative data. It may consist of a system of mathematical models or statistical
techniques applicable to numerical data. Hence it is also known as the Statistical Method.
It aims at testing hypotheses and at specifying and interpreting relationships.
3. Historical Research: It is a study of past records and other information sources with a
view to reconstructing the origin and development of an institution, movement or
system and discovering trends in the past. It is descriptive in nature. It is a difficult
task; it must often depend upon inference and logical analysis of recorded data and
indirect evidence rather than upon direct observation.

4. Survey: It is a fact-finding study. It is a method of research involving the collection of
data directly from a population or a sample thereof at a particular time. Its purpose is to
provide information, to explain phenomena, to make comparisons and to examine
cause-and-effect relationships; its findings can be useful for making predictions.

2. Discuss the criteria of a good research problem.

Horton and Hunt have given the following characteristics of scientific research:

1. Verifiable evidence: That is, factual observations which other observers can see and
check.
2. Accuracy: That is, describing what really exists. It means truth or correctness of a
statement, or describing things exactly as they are, avoiding jumping to unwarranted
conclusions either by exaggeration or by fantasizing.
3. Precision: That is, making it as exact as necessary, or giving exact numbers or
measurements. This avoids colourful literature and vague meanings.
4. Systematization: That is, attempting to find all the relevant data, or collecting data in a
systematic and organized way so that the conclusions drawn are reliable. Data based on
casual recollections are generally incomplete and give unreliable judgments and
conclusions.
5. Objectivity: That is, being free from all biases and vested interests. It means observation
is unaffected by the observer's values, beliefs and preferences to the extent possible, and
he is able to see and accept facts as they are, not as he might wish them to be.
6. Recording: That is, jotting down complete details as quickly as possible. Since human
memory is fallible, all data collected are recorded.
7. Controlling conditions: That is, controlling all variables except one and then attempting
to examine what happens when that variable is varied. This is the basic technique in all
scientific experimentation – allowing one variable to vary while holding all other
variables constant.
8. Training investigators: That is, imparting the necessary knowledge to investigators to
make them understand what to look for, how to interpret it, and how to avoid inaccurate
data collection.

3. Describe the procedure used to test a hypothesis.


To test a hypothesis means to tell (on the basis of the data the researcher has collected)
whether or not the hypothesis seems to be valid. In hypothesis testing the main question is
whether to accept the null hypothesis or to reject it. The procedure for hypothesis testing
refers to all those steps that we undertake for making a choice between the two actions, i.e.,
rejection and acceptance of a null hypothesis. The various steps involved in hypothesis testing
are stated below:

Making a Formal Statement

The step consists in making a formal statement of the null hypothesis (Ho) and also of the
alternative hypothesis (Ha). This means that hypothesis should clearly state, considering the
nature of the research problem. For instance, Mr. Mohan of the Civil Engineering Department
wants to test the load bearing capacity of an old bridge which must be more than 10 tons, in that
case he can state his hypothesis as under:

Null hypothesis H0: µ = 10 tons

Alternative hypothesis Ha: µ > 10 tons

Take another example. The average score in an aptitude test administered at the national level is
80. To evaluate a state's education system, the average score of 100 of the state's students
selected on a random basis was 75. The state wants to know if there is a significant difference
between the local scores and the national scores. In such a situation the hypotheses may be stated
as under:

Null hypothesis H0: µ = 80

Alternative hypothesis Ha: µ ≠ 80

The formulation of hypotheses is an important step which must be accomplished with due care in
accordance with the object and nature of the problem under consideration. It also indicates
whether we should use a one-tailed test or a two-tailed test. If Ha is of the type "greater than", we
use a one-tailed test, but when Ha is of the type "whether greater or smaller", then we use a
two-tailed test.

Selecting a Significance Level

The hypothesis is tested on a pre-determined level of significance, and as such the same should be
specified in advance. Generally, in practice, either the 5% level or the 1% level is adopted for the
purpose. The factors that affect the level of significance are:

• The magnitude of the difference between sample means;

• The size of the sample;


• The variability of measurements within samples;
• Whether the hypothesis is directional or non-directional (a directional hypothesis is one
which predicts the direction of the difference between, say, means). In brief, the level of
significance must be adequate in the context of the purpose and nature of the enquiry.

Deciding the Distribution to Use

After deciding the level of significance, the next step in hypothesis testing is to determine the
appropriate sampling distribution. The choice generally remains between the normal distribution
and the t distribution. The rules for selecting the correct distribution are similar to those which we
have stated earlier in the context of estimation.

Selecting A Random Sample & Computing An Appropriate Value

Another step is to select a random sample(s) and compute an appropriate value from the sample
data concerning the test statistic, utilizing the relevant distribution. In other words, draw a sample
to furnish empirical data.

Calculation of the Probability

One has then to calculate the probability that the sample result would diverge as widely as it has
from expectations, if the null hypothesis were in fact true.

Comparing the Probability

Yet another step consists in comparing the probability thus calculated with the specified value
for α, the significance level. If the calculated probability is equal to or smaller than the α value in
the case of a one-tailed test (and α/2 in the case of a two-tailed test), then reject the null hypothesis
(i.e., accept the alternative hypothesis); but if the probability is greater, then accept the null
hypothesis. If we reject H0, we run the risk (at most equal to the level of significance) of
committing a Type I error; but if we accept H0, then we run some risk of committing a Type II error.
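The whole procedure can be traced in a few lines of code. Below is a minimal sketch in Python
using the aptitude-test example above (H0: µ = 80, Ha: µ ≠ 80); the sample standard deviation of
10 is an illustrative assumption, since the original example does not give one.

from scipy import stats
import math

mu0 = 80       # hypothesized population mean (H0)
x_bar = 75     # observed sample mean
s = 10         # assumed sample standard deviation (illustrative)
n = 100        # sample size
alpha = 0.05   # pre-determined significance level

# Test statistic: z = (x_bar - mu0) / (s / sqrt(n))
z = (x_bar - mu0) / (s / math.sqrt(n))

# Two-tailed test: probability of a result at least this divergent under H0
p_value = 2 * stats.norm.sf(abs(z))

# Compare the probability with alpha and decide
if p_value <= alpha:
    print(f"z = {z:.2f}, p = {p_value:.6f}: reject H0")
else:
    print(f"z = {z:.2f}, p = {p_value:.6f}: fail to reject H0")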

Flow Diagram for Testing a Hypothesis

(The flow diagram is omitted here. It traces the steps above and shows the two risks of a wrong
decision: committing a Type I error when a true H0 is rejected, and committing a Type II error
when a false H0 is accepted.)

4. Write a note on experimental design.

Principles of Experimental Designs

Professor Fisher has enumerated three principles of experimental designs:

1. The principle of replication: The experiment should be repeated more than once. Thus,
each treatment is applied to many experimental units instead of one. By doing so, the
statistical accuracy of the experiment is increased. For example, suppose we are to examine
the effect of two varieties of rice. For this purpose we may divide the field into two parts and
grow one variety in one part and the other variety in the other part. We can compare the yield
of the two parts and draw a conclusion on that basis. But if we are to apply the principle of
replication to this experiment, then we first divide the field into several parts, grow one
variety in half of these parts and the other variety in the remaining parts. We can collect the
yield data of the two varieties and draw a conclusion by comparing the same. The result so
obtained will be more reliable in comparison to the conclusion we draw without applying the
principle of replication. The entire experiment can even be repeated several times for better
results. Conceptually, replication does not present any difficulty, but computationally it does.
However, it should be remembered that replication is introduced in order to increase the
precision of a study; that is to say, to increase the accuracy with which the main effects and
interactions can be estimated.
2. The principle of randomization: It provides protection, when we conduct an
experiment, against the effect of extraneous factors. In other words, this
principle indicates that we should design or plan the experiment in such a way that the
variations caused by extraneous factors can all be combined under the general heading of
"chance". For instance, if we grow one variety of rice, say, in the first half of the parts of a
field and the other variety is grown in the other half, then it is just possible that the soil
fertility may be different in the first half in comparison to the other half. If this is so, our
results would not be realistic. In such a situation, we may assign the variety of rice to be
grown in different parts of the field on the basis of some random sampling technique, i.e., we
may apply the randomization principle and protect ourselves against the effects of extraneous
factors. As such, through the application of the principle of randomization, we can have a
better estimate of the experimental error.
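A minimal sketch of this principle in Python: two rice varieties are assigned to field plots at
random, so that any soil-fertility differences are spread over both varieties by chance. The plot
and variety names are illustrative.

import random

plots = [f"plot-{i}" for i in range(1, 11)]   # ten field plots
varieties = ["A", "B"] * 5                    # five replications per variety

random.shuffle(varieties)                     # random assignment of varieties
assignment = dict(zip(plots, varieties))
for plot, variety in assignment.items():
    print(plot, "-> variety", variety)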

3. Principle of local control: It is another important principle of experimental designs.
Under it, the extraneous factor, the known source of variability, is made to vary deliberately
over as wide a range as necessary, and this needs to be done in such a way that the variability
it causes can be measured and hence eliminated from the experimental error. This means that
we should plan the experiment in a manner that allows us to perform a two-way analysis of
variance, in which the total variability of the data is divided into three components attributed
to treatments, the extraneous factor and experimental error. In other words, according to the
principle of local control, we first divide the field into several homogeneous parts, known as
blocks, and then each such block is divided into parts equal to the number of treatments.
Then the treatments are randomly assigned to these parts of a block. In general, blocks are
the levels at which we hold an extraneous factor fixed, so that we can measure its
contribution to the variability of the data by means of a two-way analysis of variance. In
brief, through the principle of local control we can eliminate the variability due to extraneous
factors from the experimental error.

Important Experimental Designs

Experimental design refers to the framework or structure of an experiment and as such there are
several experimental designs. We can classify experimental designs into two broad categories,
viz., informal experimental designs and formal experimental designs. Informal experimental
designs are those designs that normally use a less sophisticated form of analysis based on
differences in magnitudes, whereas formal experimental designs offer relatively more control
and use precise statistical procedures for analysis.

Informal experimental designs:

• Before and after without control design: In such a design, a single test group or area is
selected and the dependent variable is measured before the introduction of the treatment.
The treatment is then introduced and the dependent variable is measured again after the
treatment has been introduced. The effect of the treatment would be equal to the level of
the phenomenon after the treatment minus the level of the phenomenon before the
treatment.
• After only with control design: In this design, two groups or areas (test and control area)
are selected and the treatment is introduced into the test area only. The dependent
variable is then measured in both the areas at the same time. Treatment impact is assessed
by subtracting the value of the dependent variable in the control area from its value in the
test area.
• Before and after with control design: In this design two areas are selected and the
dependent variable is measured in both the areas for an identical time-period before the
treatment. The treatment is then introduced into the test area only, and the dependent
variable is measured in both for an identical time-period after the introduction of the
treatment. The treatment effect is determined by subtracting the change in the dependent
variable in the control area from the change in the dependent variable in the test area (see
the sketch after this list).
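A minimal sketch in Python of the treatment-effect computations for the three informal designs
described above; all the numbers are illustrative.

# Before-and-after without control: effect = after - before
before, after = 40, 55
effect_no_control = after - before                      # 15

# After-only with control: effect = test area - control area
test_area, control_area = 55, 48
effect_after_only = test_area - control_area            # 7

# Before-and-after with control: difference of the two changes
test_before, test_after = 40, 55
ctrl_before, ctrl_after = 42, 48
effect_with_control = (test_after - test_before) - (ctrl_after - ctrl_before)  # 9

print(effect_no_control, effect_after_only, effect_with_control)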

Formal Experimental Designs

1. Completely randomized design (CR design): It involves only two principles, viz., the
principle of replication and the principle of randomization. It is generally used when
experimental areas happen to be homogeneous. Technically, when all the variations due to
uncontrolled extraneous factors are included under the heading of chance variation, we
refer to the design of the experiment as a CR design.
2. Randomized block design (RB design): It is an improvement over the CR design.
In the RB design the principle of local control can be applied along with the other
two principles.
3. Latin square design (LS design): It is used in agricultural research. The treatments in an
LS design are so allocated among the plots that no treatment occurs more than once in
any row or column.
4. Factorial design: It is used in experiments where the effects of varying more than one
factor are to be determined. They are especially important in several economic and social
phenomena where usually a large number of factors affect a particular problem.
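A minimal sketch of an RB-design layout in Python: the field is first divided into homogeneous
blocks (local control), and the treatments are then randomized within each block. The block and
treatment names are illustrative.

import random

treatments = ["T1", "T2", "T3", "T4"]
blocks = ["block-1", "block-2", "block-3"]

layout = {}
for block in blocks:
    order = treatments[:]     # each block receives every treatment once
    random.shuffle(order)     # randomization within the block
    layout[block] = order

for block, order in layout.items():
    print(block, order)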

5. Elaborate the ways of making a case study effective.

Let us discuss the criteria for evaluating the adequacy of the case history or life history which is
of central importance for case study. John Dollard has proposed seven criteria for evaluating
such adequacy as follows:

i) The subject must be viewed as a specimen in a cultural series. That is, the case drawn out from
its total context for the purposes of study must be considered a member of the particular cultural
group or community. The scrutiny of the life histories of persons must be done with a view to
identifying the community values, standards and their shared way of life.

ii) The organic motors of action must be socially relevant. That is, the actions of the individual
cases must be viewed as a series of reactions to social stimuli or situations. In other words, the
social meaning of behaviour must be taken into consideration.
iii) The strategic role of the family group in transmitting the culture must be recognized. That is,
where an individual is a member of a family, the role of the family in shaping his behaviour
must never be overlooked.

iv) The specific method of elaboration of organic material into social behaviour must be clearly
shown. That is, case histories that portray in detail how a basically biological organism, the man,
gradually blossoms forth into a social person are especially fruitful.

v) The continuous related character of experience from childhood through adulthood must be
stressed. In other words, the life history must be a configuration depicting the inter-relationships
between the person's various experiences.

vi) Social situation must be carefully and continuously specified as a factor. One of the important
criteria for the life history is that a person’s life must be shown as unfolding itself in the context
of and partly owing to specific social situations.

vii) The life history material itself must be organised according to some conceptual framework;
this, in turn, would facilitate generalizations at a higher level.

6. What is non-probability sampling? Explain its types with examples.

Non-probability sampling or non-random sampling is not based on the theory of probability.
This sampling does not provide a chance of selection to each population element.

Advantages: The only merits of this type of sampling are simplicity, convenience and low cost.

Disadvantages: The demerits are that it does not ensure a selection chance to each population
unit; the sample selected may not be a representative one; the selection probability is
unknown; and it suffers from sampling bias, which will distort results.

The reasons for using this sampling are when there is no other feasible alternative due to the
non-availability of a list of the population, or when the study does
ASSIGNMENTS
MB 0034
RESEARCH METHODOLOGY
(3 credits)
Set II
Marks 60

PRASOBH.K
Roll Number: 510912640

1. What are the advantages and disadvantages of secondary data?

Advantages of Secondary Data

Secondary sources have some advantages:

1. Secondary data, if available, can be secured quickly and cheaply. Once the sources of
documents and reports are located, collection of data is just a matter of desk work. Even
the tediousness of copying the data from the source can now be avoided, thanks to
photocopying facilities.
2. A wider geographical area and a longer reference period may be covered without much
cost. Thus, the use of secondary data extends the researcher's space and time reach.
3. The use of secondary data broadens the data base from which scientific generalizations
can be made.
4. The environmental and cultural settings required for the study are provided by such data.
5. The use of secondary data enables a researcher to verify findings based on primary
data. It readily meets the need for additional empirical support. The researcher need not
wait until additional primary data can be collected.

Disadvantages of Secondary Data

The use of secondary data has its own limitations.

1. The most important limitation is that the available data may not meet our specific needs.
The definitions adopted by those who collected the data may be different; units of measure
may not match; and time periods may also be different.
2. The available data may not be as accurate as desired. To assess their accuracy we need to
know how the data were collected.
3. Secondary data are often not up-to-date and become obsolete by the time they appear in
print, because of the time lag in producing them. For example, population census data are
published two or three years after compilation, and no new figures will be available
for another ten years.
4. Finally, information about the whereabouts of sources may not be available to all social
scientists. Even if the location of the source is known, the accessibility depends primarily
on proximity. For example, most of the unpublished official records and compilations are
located in the capital city, and they are not within the easy reach of researchers based in
far off places.
2. Explain the prerequisites and advantages of observation.

The prerequisites of observation consist of:

• Observations must be done under conditions which will permit accurate results. The
observer must be in a vantage point to see clearly the objects to be observed. The distance
and the light must be satisfactory. The mechanical devices used must be in good working
condition and operated by skilled persons.
• Observation must cover a sufficient number of representative samples of the cases.
• Recording should be accurate and complete.
• The accuracy and completeness of recorded results must be checked. A certain number of
cases can be observed again by another observer/another set of mechanical devices, as the
case may be. If it is feasible, two separate observers and sets of instruments may be used
in all or some of the original observations. The results could then be compared to
determine their accuracy and completeness.

Advantages of observation

Observation has certain advantages:

1. The main virtue of observation is its directness: it makes it possible to study behaviour as
it occurs. The researcher need not ask people about their behaviour and interactions; he
can simply watch what they do and say.
2. Data collected by observation may describe the observed phenomena as they occur in
their natural settings. Other methods introduce elements of artificiality into the researched
situation; for instance, in an interview, the respondent may not behave in a natural way.
There is no such artificiality in observational studies, especially when the observed
persons are not aware of their being observed.
3. Observation is more suitable for studying subjects who are unable to articulate
meaningfully, e.g. studies of children, tribal communities, animals, birds, etc.
4. Observation improves the opportunities for analyzing the contextual background of
behaviour. Furthermore, verbal reports can be validated and compared with behaviour
through observation. The validity of what men of position and authority say can be
verified by observing what they actually do.
5. Observation makes it possible to capture the whole event as it occurs. For example, only
observation can provide an insight into all the aspects of the process of negotiation
between union and management representatives.
6. Observation is less demanding of the subjects and has less biasing effect on their conduct
than questioning.
7. It is easier to conduct disguised observation studies than disguised questioning.
8. Mechanical devices may be used for recording data in order to secure more accurate data
and also to make continuous observations over longer periods.
3. Discuss the stages involved in data collection.

Checking for Analysis

In the data preparation step, the data are prepared in a data format, which allows the analyst to
use modern analysis software such as SAS or SPSS. The major criterion in this is to define the
data structure. A data structure is a dynamic collection of related variables and can be
conveniently represented as a graph where nodes are labelled by variables. The data structure
also defines and stages of the preliminary relationship between variables/groups that have been
pre-planned by the researcher. Most data structures can be graphically presented to give clarity
as to the frames researched hypothesis. A sample structure could be a linear structure, in which
one variable leads to the other and finally, to the resultant end variable.

The identification of the nodal points and the relationships among the nodes could sometimes be
a more complex task than estimated. When the task is complex, involving several types of
instruments being collected for the same research question, the procedure for drawing the data
structure would involve a series of steps. In several intermediate steps, the heterogeneous data
structures of the individual data sets can be harmonized to a common standard and the separate
data sets then integrated into a single data set. However, the clear definition of such data
structures helps in the further processing of data.
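A minimal sketch of the linear structure mentioned above, represented in Python as a graph in
which each variable (node) leads to the next and finally to the resultant end variable; the variable
names are illustrative.

# Each key leads to the next variable in the chain
structure = {
    "advertising_exposure": "brand_awareness",
    "brand_awareness": "purchase_intent",
    "purchase_intent": "sales",    # resultant end variable
}

node = "advertising_exposure"
while node in structure:
    print(node, "->", structure[node])
    node = structure[node]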

Editing

The next step in the processing of data is the editing of the data instruments. Editing is a process
of checking to detect and correct errors and omissions. Data editing happens at two stages: first at
the time of recording of the data, and second at the time of analysis of data.

Data Editing at the Time of Recording of Data

Document editing and testing of the data at the time of data recording is done keeping the
following questions in mind:

• Do the filters agree or are the data inconsistent?
• Have 'missing values' been set to values which are the same for all research questions?
• Have variable descriptions been specified?
• Have labels for variable names and value labels been defined and written?

All editing and cleaning steps are documented, so that the redefinition of variables or later
analytical modification requirements can be easily incorporated into the data sets.

Data Editing at the Time of Analysis of Data


Data editing is also a requisite before the analysis of data is carried out. This ensures that the data
are complete in all respects before being subjected to further analysis. Some of the usual checklist
questions that a researcher can use for editing data sets before analysis would be:

1. Is the coding frame complete?
2. Is the documentary material sufficient for the methodological description of the study?
3. Is the storage medium readable and reliable?
4. Has the correct data set been framed?
5. Is the number of cases correct?
6. Are there differences between the questionnaire, the coding frame and the data?
7. Are there undefined and so-called "wild" codes?
8. Does the first counting of the data match the original documents of the researcher?

The editing step checks for the completeness, accuracy and uniformity of the data as created by
the researcher.

Completeness: The first step of editing is to check whether there is an answer to all the
questions/variables set out in the data set. If there is any omission, the researcher sometimes
would be able to deduce the correct answer from other related data on the same instrument. If
this is possible, the data set has to be rewritten on the basis of the new information. For example,
the approximate family income can be inferred from other answers to probes such as the
occupation of family members, sources of income, and the approximate spending, saving and
borrowing habits of family members. If the information is vital and has been found to be
incomplete, then the researcher can take the step of contacting the respondent personally again to
solicit the requisite data. If none of these steps is possible, the data must be marked as "missing".
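A minimal sketch of such a completeness check in Python with pandas: records with omitted
answers are flagged so that the missing values can be deduced from related data, sought from the
respondent again, or marked as missing. The column names and values are illustrative.

import pandas as pd

data = pd.DataFrame({
    "occupation": ["clerk", None, "farmer"],
    "family_income": [25000, None, 18000],
})

# Flag records with any omission for follow-up or marking as missing
incomplete = data[data.isna().any(axis=1)]
print(incomplete)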

Accuracy: Apart from checking for omissions, the accuracy of each recorded answer should be
checked. A random check process can be applied to trace the errors at this step. Consistency in
responses can also be checked at this step. Cross-verification of a few related responses would
help in checking for consistency in responses. The reliability of the data set heavily depends on
this step of error correction. While clear inconsistencies should be rectified in the data sets, fake
responses should be dropped from the data sets.

Uniformity: In editing data sets, another keen lookout should be for any lack of uniformity in the
interpretation of questions and instructions by the data recorders. For instance, the response
towards a specific feeling could have been queried from a positive as well as a negative angle.
While interpreting the answers, care should be taken to record each answer uniformly as a
"positive question" response or as a "negative question" response, and to check for consistency in
coding throughout the questionnaire/interview schedule response/data set.

The final point in the editing of data set is to maintain a log of all corrections that have been
carried out at this stage. The documentation of these corrections helps the researcher to retain the
original data set.
Coding

The edited data are then subjected to codification and classification. The coding process assigns
numerals or other symbols to the several responses of the data set. It is therefore a pre-requisite
to prepare a coding scheme for the data set. The recording of the data is done on the basis of this
coding scheme.

The responses collected in a data sheet vary: sometimes the response could be a choice
among multiple options, sometimes the response could be in terms of values, and sometimes
the response could be alphanumeric. If, at the recording stage itself, some codification is done
on the responses collected, it will be useful in the data analysis. When codification is done, it is
imperative to keep a log of the codes allotted to the observations. This code sheet will help in the
identification of variables/observations and the basis for such codification.

The first coding done to primary data sets is of the individual observations themselves. This
response-sheet coding benefits the research in that the verification and editing of
recordings, and further contact with respondents, can be achieved without any difficulty. The
codification can be made at the time of distribution of the primary data sheets itself. The codes
can be alphanumeric to keep track of where and to whom they have been sent. For instance, if the
data are collected from the public at different localities, the sheets that are distributed in a specific
locality may carry a unique part code which is alphabetic. To this alphabetic code, a numeric
code can be attached to distinguish the person to whom the primary instrument was distributed.
This also helps the researcher to keep track of who the respondents are and who are the probable
respondents from whom primary data sheets are yet to be collected. Even at a later stage, any
specific queries on a specific response sheet can be clarified.

The variables or observations in the primary instrument would also need codification, especially
when they are categorized. The categorization could be on a scale, i.e., most preferable to not
preferable, or it could be very specific, such as gender classified as Male and Female. Certain
classifications can lead to open-ended categories, such as an education classification: Illiterate,
Graduate, Professional, Others (please specify). In such instances, the codification needs to be
carefully done to include all possible responses under "Others, please specify". If the preparation
of an exhaustive list is not feasible, then it is better to create a separate variable for the
"Others, please specify" category and record all responses as such.

Numeric Coding: Coding need not necessarily be numeric. It can also be alphabetic. Coding has
to be compulsorily numeric, when the variable is subject to further parametric analysis.

Alphabetic Coding: A mere tabulation or frequency count or graphical representation of the
variable may be given in an alphabetic coding.
Zero Coding: A code of zero has to be assigned carefully to a variable. In many instances,
when manual analysis is done, a code of 0 would imply a "no response" from the respondents.
Hence, if a value of 0 is to be given to specific responses in the data sheet, it should not lead to
the same interpretation of "no response". For instance, if there is a tendency to give a code of
0 to a "no" answer, then a coding other than 0 should be used for "no response" in the data sheet.
(An illustration of the coding process for some demographic variables was given here in a table,
which is omitted. Its note indicated that an "Others" response could be treated as a separate
variable/observation, with the actual response recorded under a new variable termed "other
occupation".)

The coding sheet needs to be prepared carefully, if the data recording is not done by the
researcher, but is outsourced to a data entry firm or individual. In order to enter the data in the
same perspective, as the researcher would like to view it, the data coding sheet is to be prepared
first and a copy of the data coding sheet should be given to the outsourcer to help in the data
entry procedure. Sometimes, the researcher might not be able to code the data from the primary
instrument itself. He may need to classify the responses and then code them. For this purpose,
classification of data is also necessary at the data entry stage.
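A minimal sketch of such a coding scheme applied in Python with pandas; the variables,
categories and numeric codes are illustrative.

import pandas as pd

responses = pd.DataFrame({
    "gender": ["Male", "Female", "Female", "Male"],
    "education": ["Graduate", "Illiterate", "Professional", "Others"],
})

# Coding scheme: each response category is assigned a numeral
gender_codes = {"Male": 1, "Female": 2}
education_codes = {"Illiterate": 1, "Graduate": 2, "Professional": 3, "Others": 4}

responses["gender_code"] = responses["gender"].map(gender_codes)
responses["education_code"] = responses["education"].map(education_codes)
print(responses)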

Classification

When open ended responses have been received, classification is necessary to code the
responses. For instance, the income of the respondent could be an open-ended question. From all
responses, a suitable classification can be arrived at. A classification method should meet certain
requirements or should be guided by certain rules.

First, classification should be linked to the theory and the aim of the particular study. The
objectives of the study will determine the dimensions chosen for coding. The categorization
should meet the information required to test the hypothesis or investigate the questions.

Second, the scheme of classification should be exhaustive. That is, there must be a category for
every response. For example, the classification of marital status into the three categories
"married", "single" and "divorced" is not exhaustive, because responses like "widower" or
"separated" cannot be fitted into the scheme. Here, an open-ended question will be the best mode
of getting the responses. From the responses collected, the researcher can fit a meaningful and
theoretically supportive classification. The inclusion of an "Others" category tends to absorb the
scattered, infrequent responses from the data sheets. But the "Others" categorization has to be
carefully used by the researcher, since it tends to defeat the very purpose of
classification, which is designed to distinguish between observations in terms of the properties
under study. The classification "Others" will be very useful when a minority of respondents in the
data set give varying answers. For instance, the reading habits of newspaper readers may be
surveyed: 95 respondents out of 100 could be easily classified into 5 large reading groups, while 5
respondents could have given unique answers. These answers, rather than being separately
considered, could be clubbed under the "Others" heading for meaningful interpretation of
respondents' reading habits.

Third, the categories must also be mutually exclusive, so that each case is classified only once.
This requirement is violated when some of the categories overlap or different dimensions are
mixed up.

The number of categories for a specific question/observation at the coding stage should be
the maximum permissible, since reducing the number of categories at the analysis stage is easier
than splitting an already classified group of responses. However, the number of categories is
limited by the number of cases and the anticipated statistical analyses that are to be used on the
observations.
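A minimal sketch of classifying an open-ended income response into exhaustive, mutually
exclusive categories in Python with pandas; the class limits and labels are illustrative.

import pandas as pd

incomes = pd.Series([12000, 45000, 30000, 80000, 5000])
bins = [0, 20000, 50000, float("inf")]   # exhaustive: covers every response
labels = ["low", "middle", "high"]       # mutually exclusive categories

income_class = pd.cut(incomes, bins=bins, labels=labels)
print(income_class.value_counts())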

Transcription of Data

When the observations collected by the researcher are not very large, the simple inferences
which can be drawn from the observations can be transferred to a data sheet, which is a
summary of all responses on all observations from a research instrument. The main aim of
transcription is to minimize the shuffling between several responses and several
observations. Suppose a research instrument contains 120 responses and the observations have
been collected from 200 respondents: a simple summary of one response from all 200
observations would require shuffling through 200 pages. The process is quite tedious if several
summary tables are to be prepared from the instrument. The transcription process helps in the
presentation of all responses and observations on data sheets, which can help the researcher to
arrive at preliminary conclusions as to the nature of the sample collected, etc. Transcription is,
hence, an intermediary process between data coding and data tabulation.

Methods of Transcription

The researcher may adopt manual or computerized transcription. Long worksheets, sorting
cards or sorting strips could be used by the researcher to manually transcribe the responses.
Computerized transcription could be done using a database package such as spreadsheets, text
files or other databases.

The main requisite for a transcription process is the preparation of the data sheets, where
observations are the rows of the database and the responses/variables are the columns of the data
sheet. Each variable should be given a label so that long questions can be covered under the label
names. The label names are thus the links to specific questions in the research instrument. For
instance, opinion on consumer satisfaction could be identified through a number of statements
(say 10); the data sheet does not contain the details of the statements, but gives a link to the
questions in the research instrument through variable labels. In this instance the variable names
could be given as CS1, CS2, CS3, CS4, CS5, CS6, CS7, CS8, CS9 and CS10, where the label CS
indicates consumer satisfaction and the numbers 1 to 10 indicate the statements measuring
consumer satisfaction. Once the labelling process has been done for all the responses in the
research instrument, the transcription of the responses is done.
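A minimal sketch of such a data sheet in Python with pandas, using the CS1..CS10 labels from
the consumer-satisfaction example above; the respondent IDs and scores are illustrative.

import pandas as pd

labels = [f"CS{i}" for i in range(1, 11)]   # links to statements 1..10
data_sheet = pd.DataFrame(columns=["respondent_id"] + labels)

# One transcribed row per respondent (observations are rows)
data_sheet.loc[0] = ["R001"] + [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
data_sheet.loc[1] = ["R002"] + [3, 3, 4, 2, 4, 3, 5, 4, 3, 4]
print(data_sheet)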

Manual Transcription

When the sample size is manageable, the researcher need not use any computerized process to
analyze the data. The researcher may prefer manual transcription and analysis of responses.
Manual transcription is the natural choice when the number of responses in a research
instrument is very small, say 10 responses, and the number of observations collected is within
100. A transcription sheet of 100×50 (assuming each response has 5 options) rows/columns can
be easily managed by a researcher manually. If, on the other hand, the variables in the research
instrument number more than 40 and each variable has 5 options, this leads to a worksheet of
size 100×200, which might not be easily managed by the researcher manually. In the second
instance, if the number of responses is less than 30, the worksheet could still be attempted
manually. In all other instances, it is advisable to use a computerized transcription process.

Long Worksheets

Long worksheets require quality paper, preferably chart sheets, thick enough to last several
usages. These worksheets are normally ruled both horizontally and vertically, allowing responses
to be written in the boxes. If one sheet is not sufficient, the researcher may use multiple ruled
sheets to accommodate all the observations. Headings of responses, which are variable names,
and their coding (options) are filled in the first two rows. The first column contains the code of
the observations. For each variable, the responses from the research instrument are then
transferred to the worksheet by ticking the specific option that the respondent has chosen. If the
variable cannot be coded into categories, a requisite length for recording the actual response of
the respondent should be provided for in the worksheet.

The worksheet can then be used for preparing the summary tables or can be subjected to further
analysis of data. The original research instruments can now be kept aside as safe documents.
Copies of the data sheets can also be kept for future reference. As has been discussed under the
editing section, the transcribed data have to be subjected to testing to ensure error-free
transcription of data.

Transcription can be made as and when an edited instrument is ready for processing. Once all
schedules/questionnaires have been transcribed, the frequency tables can be constructed straight
from the worksheet. Other methods of manual transcription include the adoption of sorting strips
or cards.

In earlier days, data entry and processing were done through mechanical and semi-automatic
devices such as key punches using punch cards. The arrival of computers has changed the data
processing methodology altogether.

Tabulation

The transcribed data can be used to summarize and arrange the data in compact form for
further analysis. This process is called tabulation. Thus, tabulation is a process of summarizing
raw data and displaying them in compact statistical tables for further analysis. It involves
counting the number of cases falling into each of the categories identified by the researcher.

Tabulation can be done manually or through the computer. The choice depends upon the size and
type of study, cost considerations, time pressures and the availability of software packages.
Manual tabulation is suitable for small and simple studies.

Manual Tabulation

When data are transcribed in a classified form as per the planned scheme of classification,
category-wise totals can be extracted from the respective columns of the work sheets. A simple
frequency table counting the number of “Yes” and “No” responses can be made easily by
counting the “Y” response column and “N” response column in the manual worksheet table
prepared earlier. This is a one-way frequency table, and it is readily inferred from the totals
of each column in the worksheet. Sometimes the researcher has to cross-tabulate two variables,
for instance, the age group of vehicle owners. This requires a two-way classification and cannot
be inferred straight from the column totals; it does not, however, call for any special technical
knowledge or skill. If one wants to prepare a table
showing the distribution of respondents by age, a tally sheet showing the age groups horizontally
is prepared. Tally marks are then made for the respective group, i.e., 'vehicle owners', from each
line of response in the worksheet. After every four tallies, the fifth tally is drawn across the
previous four tallies. This represents a group of five items. This arrangement facilitates easy
counting of each one of the class groups. (The illustrative tally sheet is omitted here.)
Although manual tabulation is simple and easy to construct, it can be tedious, slow and error-
prone as responses increase.

Computerized tabulation is easy with the help of software packages. The input requirements are
the column and row variables. The software package then computes the number of records in
each cell of the row/column categories. The most popular package is the Statistical Package for
the Social Sciences (SPSS). It is an integrated set of programs suitable for the analysis of social
science data. This package contains programs for a wide range of operations and analyses, such
as handling missing data, recoding variable information, simple descriptive analysis, cross
tabulation, multivariate analysis and non-parametric analysis.
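A minimal sketch of one-way and two-way (cross) tabulation in Python with pandas, following
the vehicle-owner example above; the data are illustrative.

import pandas as pd

df = pd.DataFrame({
    "age_group": ["18-30", "31-45", "18-30", "46-60", "31-45", "18-30"],
    "owns_vehicle": ["Yes", "No", "Yes", "Yes", "Yes", "No"],
})

# One-way frequency table
print(df["owns_vehicle"].value_counts())

# Two-way classification: age group by vehicle ownership
print(pd.crosstab(df["age_group"], df["owns_vehicle"]))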

4. Briefly explain the types of interviews.

The interview may be classified into: (a) structured or directive interview, (b) unstructured or
non-directive interview, (c) focused interview, (d) clinical interview and (e) depth interview.

Structured or Directive Interview

This is an interview made with a detailed standardized schedule. The same questions are put to
all the respondents and in the same order. Each question is asked in the same way in each
interview, promoting measurement reliability. This type of interview is used for large-scale
formalized surveys.

Advantages: This interview has certain advantages. First, data from one interview to the next
one are easily comparable. Second, recording and coding data do not pose any problem, and
greater precision is achieved. Lastly, attention is not diverted to extraneous, irrelevant and time
consuming conversation.

Limitations: However, this type of interview suffers from some limitations. First, it tends to lose
the spontaneity of natural conversation. Second, the way in which the interview is structured may
be such that the respondent's views are minimized and the investigator's own biases regarding
the problem under study are inadvertently introduced. Lastly, the scope for exploration is limited.

Unstructured or Non-Directive Interview

This is the least structured one. The interviewer encourages the respondent to talk freely about a
given topic with a minimum of prompting or guidance. In this type of interview, a detailed pre-
planned schedule is not used; only a broad interview guide is used. The interviewer avoids
channelling the direction of the interview; instead he develops a very permissive atmosphere.
Questions are not standardized or ordered in a particular way.

This interviewing is more useful in case studies rather than in surveys. It is particularly useful in
exploratory research where the lines of investigations are not clearly defined. It is also useful for
gathering information on sensitive topics such as divorce, social discrimination, class conflict,
generation gap, drug-addiction etc. It provides opportunity to explore the various aspects of the
problem in an unrestricted manner.

Advantages: This type of interview has certain special advantages. It can closely approximate
the spontaneity of a natural conversation. It is less prone to interviewer’s bias. It provides greater
opportunity to explore the problem in an unrestricted manner.

Limitations: Though the unstructured interview is a potent research instrument, it is not free
from limitations. One of its major limitations is that the data obtained from one interview are not
comparable to the data from the next; hence, it is not suitable for surveys. Time may be wasted
in unproductive conversations. By not focusing on one or another facet of a problem, the
investigator may run the risk of being led up a blind alley. As there is no particular order or
sequence in this interview, the classification of responses and coding may require more time.
This type of informal interviewing calls for greater skill than the formal survey interview.

Focused Interview

This is a semi-structured interview where the investigator attempts to focus the discussion on the
actual effects of a given experience to which the respondents have been exposed. It takes place
with respondents known to have been involved in a particular experience, e.g., seeing a particular
film, viewing a particular programme on TV, or being involved in a train/bus accident. The
situation is analysed prior to the interview. An interview guide specifying the topics relating to
the research hypothesis is used. The interview is focused on the subjective experiences of the
respondent, i.e., his attitudes and emotional responses regarding the situation under study. The
focused interview permits the interviewer to obtain details of personal reactions, specific
emotions and the like.

Merits: This type of interview is free from the inflexibility of formal methods, yet gives the
interview a set form and ensures adequate coverage of all the relevant topics. The respondent is
asked for certain information, yet he has plenty of opportunity to present his views. The
interviewer is also free to choose the sequence of questions and determine the extent of probing.

Clinical Interview

This is similar to the focused interview but with a subtle difference. While the focused interview
is concerned with the effects of a specific experience, the clinical interview is concerned with
broad underlying feelings or motivations, or with the course of the individual's life experiences.

The ‘personal history’ interview used in social case work, prison administration, psychiatric
clinics and in individual life history research is the most common type of clinical interview. The
specific aspects of the individual’s life history to be covered by the interview are determined
with reference to the purpose of the study and the respondent is encouraged to talk freely about
them.

Depth Interview
This is an intensive and searching interview aiming at studying the respondent's opinions,
emotions or convictions on the basis of an interview guide. It requires much more training in
interpersonal skills than the structured interview. It deliberately aims to elicit unconscious as
well as extremely personal feelings and emotions.

This is generally a lengthy procedure designed to encourage free expression of affectively
charged information. It requires probing. The interviewer should totally avoid advising or
showing disagreement. Of course, he should use encouraging expressions like "uh-huh" or "I
see" to motivate the respondent to continue the narration. Sometimes the interviewer has to face
the problem of affect, i.e., the respondent may hold back from expressing affective feelings. The
interviewer should handle such situations with great care.

5. Describe the principles involved in table construction.

There are certain generally accepted principles or rules relating to the construction of tables.
They are:

1. Every table should have a title. The title should be a succinct description of the
contents of the table. It should be clear and concise. It should be placed above the body of
the table.
2. Every table should be identified by a number to facilitate easy reference. The number can
be centred above the title. The table numbers should run in consecutive serial order.
Alternatively, tables in chapter 1 may be numbered as 1.1, 1.2, 1.3, …; in chapter 2 as 2.1,
2.2, 2.3, …; and so on.
3. The captions (or column headings) should be clear and brief.
4. The units of measurement under each heading must always be indicated.
5. Any explanatory footnotes concerning the table itself are placed directly beneath the table,
and, in order to obviate any possible confusion with the textual footnotes, reference
symbols such as the asterisk (*), dagger (†) and the like may be used.
6. If the data in a series of tables have been obtained from different sources, it is ordinarily
advisable to indicate the specific sources in a place just below the table.
7. Usually lines separate columns from one another. Lines are always drawn at the top and
bottom of the table and below the captions.
8. The columns may be numbered to facilitate reference.
9. All column figures should be properly aligned. Decimal points and “plus” or “minus”
signs should be in perfect alignment.
10. Columns and rows that are to be compared with one another should be brought close
together.
11. Totals of rows should be placed at the extreme right column and totals of columns at the
bottom.
12. In order to emphasize the relative significance of certain categories, different kinds of
type, spacing and identifications can be used.
13. The arrangement of the categories in a table may be chronological, geographical,
alphabetical or according to magnitude. Numerical categories are usually arranged in
descending order of magnitude.
14. Miscellaneous and exceptional items are generally placed in the last row of the table.
15. Usually the larger number of items is listed vertically. This means that a table’s length is
more than its width.
16. Abbreviations should be avoided whenever possible and ditto marks should not be used
in a table.
17. The table should be made as logical, clear, accurate and simple as possible.

Text references should identify tables by number, rather than by such expressions as "the table
above" or "the following table". Tables should not exceed the page size; tables that would
otherwise do so may be reduced by photostating. Tables that are too wide for the page may be
turned sideways, with the top facing the left margin or binding of the script. Where should tables
be placed in a research report or thesis? Some writers place both special-purpose and
general-purpose tables in an appendix and refer to them in the text by numbers. This practice has
the disadvantage of inconveniencing the reader who wants to study the tabulated data as the text
is read. A more appropriate procedure is to place special-purpose tables in the text and primary
tables, if needed at all, in an appendix.

6. Write a note on the contents of a research report.

The outline of a research report is given below:

I. Prefatory Items

• Title page
• Declaration
• Certificates
• Preface/ acknowledgements
• Table of contents
• List of tables
• List of graphs/ figures/ charts
• Abstract or synopsis

II. Body of the Report

• Introduction
• Theoretical background of the topic
• Statement of the problem
• Review of literature
• The scope of the study
• The objectives of the study
• Hypothesis to be tested
• Definition of the concepts
• Models if any
• Design of the study
• Methodology
• Method of data collection
• Sources of data
• Sampling plan
• Data collection instruments
• Field work
• Data processing and analysis plan
• Overview of the report
• Limitation of the study
• Results: findings and discussions
• Summary, conclusions and recommendations

III. Reference Material

• Bibliography
• Appendix
• Copies of data collection instruments
• Technical details on sampling plan
• Complex tables
• Glossary of new terms used.
