
SPPY 107/SPPC 107

POSTGRADUATE COURSE
M.Sc. - Psychology /
M.Sc., Counselling Psychology

FIRST YEAR
SECOND SEMESTER

CORE PAPER - VI

RESEARCH METHODOLOGY - II

INSTITUTE OF DISTANCE EDUCATION


UNIVERSITY OF MADRAS
M.Sc. Psychology / CORE PAPER - VI
M.Sc., Counselling Psychology RESEARCH METHODOLOGY - II
FIRST YEAR - SECOND SEMESTER

WELCOME
Warm Greetings.

It is with great pleasure that we welcome you as a student of the Institute of Distance Education, University of Madras. It is a proud moment for the Institute of Distance Education as you enter the cafeteria system of learning envisaged by the University Grants Commission. We have framed and introduced the Choice Based Credit System (CBCS) in the semester pattern from the academic year 2018-19. You are free to choose courses, as per the Regulations, to attain the total number of credits set for each course and for each degree programme. What is a credit? To earn one credit in a semester, you have to spend 30 hours on the learning process. Each course has a weightage in terms of credits, and credits are assigned by taking into account the level of its subject content. For instance, if a particular course or paper carries 4 credits, you have to spend 120 hours of self-learning in that semester. You are therefore advised to plan your strategy for devoting hours of self-study to the learning process. You will be assessed periodically by means of tests, assignments and quizzes, whether in the classroom, the laboratory or field work. In the case of PG (UG) programmes, Continuous Internal Assessment accounts for 20 (25) per cent and the End Semester University Examination for 80 (75) per cent of the maximum score for a course/paper. The theory paper in the end semester examination will bring out your various skills, namely basic knowledge of the subject, memory recall, application, analysis, comprehension and descriptive writing. While training you in conducting experiments, analysing your performance during laboratory work, and observing the outcomes to bring out the truth from the experiment, we keep these skills in mind and measure them in the end semester examination. You will be guided by well-experienced faculty.

I invite you to join the CBCS in the semester system to gain rich knowledge at your own pace, at your will and wish. Choose the right courses at the right times so as to raise your flag of success. We always encourage and enlighten you to excel and to empower yourself. We bear the cross so as to make you a torch bearer with a bright future.

With best wishes from mind and heart,

DIRECTOR i/c


COURSE WRITERS

Dr. S. Sasikala, M.Sc., M.Phil., Ph.D.


Assistant Professor,
Department of Psychology,
University of Madras,
Chennai.

EDITING & COORDINATION

Dr. S. THENMOZHI, M.A., Ph.D.,


Professor
Department of Psychology
Institute of Distance Education
University of Madras
Chennai - 5.

Dr. S. Thenmozhi
Associate Professor
Department of Psychology
Institute of Distance Education
University of Madras
Chepauk, Chennai - 600 005.

© UNIVERSITY OF MADRAS, CHENNAI 600 005.

M.A. DEGREE COURSE

FIRST YEAR

SECOND SEMESTER

Core Paper - VI

RESEARCH METHODOLOGY - II

SYLLABUS

Course Objective: To provide a foundation in quantitative and qualitative research methods in psychology; to develop skills in designing quantitative and qualitative research; to develop skills in collecting quantitative and qualitative data using various methods; to sensitize students to the importance of scientific research; to develop skills in proposal writing; and to sensitize students to ethical issues in research.

UNIT – I Introduction to quantitative & qualitative research methods: Historical


development of quantitative & qualitative research, Defining quantitative & qualitative
research, Difference and methodological issues in quantitative & qualitative research; Ethics
in quantitative & qualitative research methods

UNIT - II Quantitative research designs: Exploratory research, survey research,


Experimental research; Research design: Meaning, purpose and principles, Simple
randomized designs, Factorial designs; Qualitative research designs: Conceptualizing
research questions, issues of paradigm, Designing samples, Theoretical sampling, N=1
design, Time series design, Mixed method research, Contrasting qualitative with quantitative
approach in research process, Issues of credibility and trustworthiness

UNIT - III Quantitative sampling and methods of data collection: Probability vs. non-probability methods; Determination of sample size; Qualitative methods of collecting data: What is qualitative data? Various methods of collecting qualitative data: Participant observation, Interviewing, Focus groups, Life history and oral history, Documents, Diaries, Photographs, Films and videos, Conversation, Texts and Case studies

UNIT - IV Quantitative Analysis: Data analysis and report writing; Parametric statistics: One way and Two way ANOVA, Critical ratio, Student ‘t’-test, Product moment correlation, Regression analysis; Non-parametric statistics: Mann-Whitney U test, Kruskal-Wallis test, Wilcoxon test, Friedman’s test, Chi-square test, Rank order correlation

UNIT – V Qualitative Analysis: Different traditions of qualitative data analysis; thematic


analysis, Narrative analysis, Discourse analysis, Content analysis, Usage of software for
qualitative analysis; Report writing: Journal articles and thesis / dissertation writing.

Reference:

Kerlinger, F. N. (1996). Foundations of behavioural research. India: Prentice Hall.

Gravetter, F. J., & Forzano, L. A. B. (2009). Research methods for the behavioural sciences. United States: Wadsworth Cengage Learning.

Bordens, K. S., & Abbott, B. B. (2006). Research and design methods: A process approach (6th ed.). New Delhi: Tata McGraw-Hill Company Limited.

Goodwin, C. J. (2002). Research in psychology: Methods and design (3rd ed.). New Jersey: John Wiley & Sons, Inc.
M.A. DEGREE COURSE

FIRST YEAR

SECOND SEMESTER

Core Paper - VI

RESEARCH METHODOLOGY - II
SCHEME OF LESSONS

Sl.No. Title Page

1. Quantitative and qualitative research 1

2. History of quantitative and qualitative research 16

3. Ethics in research 29

4. Exploratory research 51

5. Conceptualizing research questions 63

6. Issues of paradigm 76

7. Single subject design and time series design 85

8. Mixed method research 98

9. Sampling 109

10. Qualitative methods of data collection 125

11. Parametric statistics 137

12. Non-parametric statistics 147

13. Content analysis and thematic analysis 157

14. Narrative analysis and discourse analysis 169

15. Report writing for a journal and thesis/dissertation 179


LESSON - 1
QUANTITATIVE AND QUALITATIVE RESEARCH
INTRODUCTION

This chapter gives an idea of the research strategies namely qualitative and quantitative
research. The term quantitative refers to the research which is carried out using variables that
vary in quantity (number). The data obtained are usually in numerical form that can be scored,
analyzed, and interpreted using statistical analysis. However, there is another approach to
obtain facts or information. This alternative approach is known as qualitative research. The
chief distinction between quantitative and qualitative research is the type of data that are collected.
The data collected fall under different levels of measurement, namely the nominal, ordinal, interval and ratio scales. Let us now look at qualitative and quantitative research, the data collected, and these levels of measurement in detail.

OBJECTIVES OF THIS LESSON

After studying this lesson you will be able to:

 Explain the two major methods of research

 Distinguish between these two methods in terms of data, data collection and data analysis

 Decide when to use the qualitative method and when to use the quantitative method.

PLAN OF STUDY
1.1 Qualitative versus Quantitative

1.2 Quantitative versus Qualitative Research in Psychology

1.3 The Quantification - Qualitative methods Continuum

1.4 Qualitative versus Quantitative methods: Evaluation

1.5 When to use Quantitative Research ?

1.6 When to use Qualitative Research Method ?

1.7 Difference between Quantitative versus Qualitative approaches



1.8 Quantitative and Qualitative Data and Levels of Measurement

1.8.1 Quantitative Data

1.8.2 Qualitative Data

1.8.3 Measurement of Data

1.9 Ten steps for carrying out Qualitative Research (Bromley, 1986)

1.10 Varying Research Contexts

1.11 Summary

1.12 Key Words

1.13 Check your Progress

1.14 Answers to Check your Progress

1.15 Model Questions

1.1 QUALITATIVE VERSUS QUANTITATIVE

Qualitative and quantitative research cannot be considered as two extremes. Qualitative research is the initial phase or foundation, often followed by quantitative methodology. Usually, qualitative research considers only a very small number of cases for investigation. For example, a researcher adopting the case study method may study a single case, or fewer than ten, in that study. According to Glaser and Strauss (1967), qualitative data are often coded a posteriori from interpretations of those data. On the other hand, data in quantitative research are coded according to operational definitions which are a priori. Whether the research is qualitative, quantitative or a mixture of both, it should establish the truth and must be objective in nature. This is nothing but adopting a scientific methodology or approach, the standard of science as a way of knowing. Although there are different ways of knowing other than science, only science helps us to generalize and to build theory. When we use a method like Authority, there is a question of credibility, as there are cases of conflicting or contradictory views held by different authorities. If we think of using Tenacity or Intuition, there is a question of how the knowledge is to be verified. Therefore, science is the only method that verifies facts or truth in objective and empirical ways. With the search for knowledge (or “truth”) as the purpose of research, research is considered most effective when built on the scientific method.

The scientific method is both inductive and deductive, objective and subjective. The validity of a research design is better when it draws on both methodologies rather than privileging one over the other. Further, it is not easy to say whether the qualitative or the quantitative method is better; whichever is used, it should answer the research question in an objective manner. The interactive continuum model was introduced in 1985. Others have written about integrating qualitative and quantitative methods. Michael Patton (1980) presents a diagram of what he calls “mixed paradigms” in his book Qualitative Evaluation Methods. He too acknowledges that, between the qualitative and quantitative paradigms, there is a continuum of methods, even though he addresses only qualitative methods. Creswell (1994), in his book Research Design: Qualitative and Quantitative Approaches, intends it to assist the researcher in making decisions about design. However, he gives more emphasis to writing a dissertation or proposal than to critiques of the different methods adopted. Over the last decade there has been a rush to prove one approach better than the other. The important aspect that we as researchers have to look into is the quality of the research work, integrating different ways of obtaining knowledge through qualitative or quantitative research methods. Together, both must bring out the best, rather than focusing on one and ignoring or disregarding the other.

1.2 QUANTITATIVE VERSUS QUALITATIVE RESEARCH IN


PSYCHOLOGY

Much research in psychology is quantitative in nature. That is, with quantitative research, the data are collected and presented in the form of numbers: average scores for different groups on some task, percentages of people who do one thing or another, graphs and tables of data, and so on. At present, many psychologists are also interested in using qualitative research, which has mostly been used by sociologists and anthropologists. Qualitative research cannot easily be done with the already existing tools in psychology. It involves considerable preliminary work, such as preparing a schedule or a list of points to be covered during interviews, observation, or any other qualitative data collection method such as a case study, conducted with individuals or with groups. Qualitative methods yield narratives or summaries of the information collected rather than a statistical picture of it. Walker (1996), who explored whether sex differences in control of the TV remote would affect relationships among couples, used a qualitative approach. She conducted semi-structured interviews with 36 couples who were either married or cohabiting and had been together for at least a year. Qualitative research usually includes some form of quantification, such as the percentage, frequency or pattern of certain responses. In this study too, a portion of the questions produced responses that could be quantified: for instance, in response to a question about control over the remote when both partners were watching TV, Walker determined that women had control 20% of the time and men 80% of the time. Beyond this, many other descriptions underwent qualitative analysis. For example, a narrative was prepared from several open-ended questions in the interview, along with quotes from the interview to illustrate the conclusions. Among other things, subjects were asked (they were interviewed individually) how they decided on programs to watch together, what their frustrations might be during this process, and what they would like to change about it. Quantitative results take less time to interpret than the results of qualitative studies, which take longer to describe. The findings of qualitative analysis need to be substantiated with quotes that represent typical responses. In the Walker study, she concluded that when both partners were watching TV, men usually had control over what was being watched and that, in general, what should be a leisure activity could instead be a source of stress and misunderstanding.

1.3 THE QUANTIFICATION–QUALITATIVE METHODS CONTINUUM

There is some research which is purely quantitative and other research which is purely qualitative. Conceptually, research may be examined at two major stages: data collection and data analysis. In the data collection phase, one should look at the degree to which the information gathered is quantitative or qualitative; for example, whether standardized tools are used and scores are arrived at, or whether in-depth information is collected that needs to be summarized.

If the research is purely quantitative, the data need to be collected using appropriate structured tools, such as questionnaires or indexes like BP, weight, height, etc., in a controlled setting. An example may be identifying eating behaviour in relation to weight and self-esteem. On the other hand, purely qualitative research deals with data that require comprehension from the researcher in order to obtain relevant information. For example, observing children with special needs through a one-way mirror requires special care from the researcher to comprehend and draw conclusions about the children’s behaviour. Another example is an interview with a professional to identify leadership qualities, which requires the skill to understand the answers given by the professional. These skills are all the more important because the data may not be in a structured format. If the data obtained are found to be insufficient, the researcher needs to probe further or obtain more information in order to interpret them.

In order to be more confident in the findings, one can adopt both qualitative and quantitative research methods, which is called mixed methodology. There are studies in which researchers use questionnaires to study certain psychological variables such as personality, attitude, behaviour, etc., but also choose to ask a few open-ended questions at the end of the questionnaire in order to obtain additional information. This might help the researcher to understand why the individual answered in a particular way on the psychological variables chosen in the study. However, data analysis differs between qualitative and quantitative research. If the data have been collected only in quantitative form, then they have to be analysed quantitatively using appropriate statistics according to the objective of the study. On the other hand, if the data have been collected in qualitative form, then the researcher has the option either to quantify the qualitative data or to use qualitative analysis. Quantification means converting the responses given by the participants or sample into percentages or frequencies, based for example on the repetition of words. This can also be done by using a coding system, which helps to categorize the data obtained. If the researcher is interested in identifying the rating of different brands of a particular product through the interview method, then this information can be quantified to find out which brand is given top priority. However, some qualitative research requires qualitative analysis, depending on the purpose of the research. The data collected through interviews or discussions can be analysed using discourse analysis or conversation analysis. There are other forms of qualitative analysis such as content analysis, thematic analysis, etc. When the researcher uses a mixed methodology, the qualitative and quantitative analyses together will help the researcher to understand or answer the research problem, and each can substantiate the findings of the other.
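
The frequency-based quantification described above can be illustrated with a short sketch. The answers and the coding frame below are invented for illustration; the sketch simply counts how often each brand code appears in open-ended interview answers and reports the result as percentages.

```python
from collections import Counter

# Hypothetical open-ended interview answers about preferred brands.
answers = [
    "I usually buy Brand A because it is cheaper",
    "Brand B feels more reliable than Brand A",
    "Brand B, definitely Brand B",
    "I switch between Brand A and Brand C",
]

# A simple coding frame: map keywords in the text to category codes.
coding_frame = {"brand a": "Brand A", "brand b": "Brand B", "brand c": "Brand C"}

counts = Counter()
for answer in answers:
    text = answer.lower()
    for keyword, code in coding_frame.items():
        counts[code] += text.count(keyword)

total = sum(counts.values())
for code, n in counts.most_common():
    print(f"{code}: {n} mentions ({100 * n / total:.0f}%)")
```

In an actual study the coding frame would be developed from the data themselves, and ambiguous responses would usually be coded by hand rather than by keyword matching.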

Self-learning exercise
1. Give one example of qualitative and quantitative research from your experience.

2. From any book on social psychology, identify the different qualitative and quantitative
research done by psychologists.

1.4 QUALITATIVE VERSUS QUANTITATIVE METHODS: EVALUATION

Denzin and Lincoln (2000) claim that there are five major features distinguishing quantitative
from qualitative research styles:

1. Use of positivism and post-positivism: Quantitative and qualitative methods are both based on positivism. However, qualitative researchers are much more willing to accept the post-positivist position that, whatever reality there is that might be studied, our knowledge of it can only ever be approximate and never exact. In their actions, quantitative researchers tend to reflect the view that there is a reality that can be captured despite all of the problems. Language data would be regarded by them as reflecting reality, whereas the qualitative researcher would take the view that language is incapable of representing reality. Quantitative researchers often treat reality as a system of causes and effects and often appear to regard the quest of research as being generalisable knowledge.

2. Qualitative researchers accept other features of the postmodern sensibility: This really refers to a whole range of matters which the traditional quantitative researcher largely avoided. The qualitative researcher is represented as having an ethic of caring as well as of political action and dialogue with participants in the research. The qualitative researcher has a sense of personal responsibility for their actions and activities. This refers to the emotional component of relating to the problem and understanding it.

3. Capturing the individual’s point of view: Through the use of in-depth observation
and interviewing, the qualitative researcher believes that the remoteness of the
research from its subject matter (people) as found in some quantitative research
may be overcome.

4. Concern with the richness of description: Quite simply, qualitative researchers value
rich description almost for its own sake, whereas quantitative researchers find that
such a level of detail actually makes generalisation much more difficult.

5. Examination of the constraints of everyday life: It is argued that quantitative researchers may fail to appreciate the characteristics of the day-to-day social world, which then become irrelevant to their findings. On the other hand, being much more wedded to society through their style of research, qualitative researchers tend to have their ‘feet on the ground’ more.

Probably the majority of these claims would be disputed by most quantitative researchers. For example, the belief that qualitative research is subjective and impressionistic would suggest a lack of grounding of qualitative research in society, not higher levels of it. The choice between quantitative and qualitative methods when carrying out psychological research is not an easy one to make. The range of considerations is enormous. Sometimes the decision will depend as much on the particular circumstances of the research, such as the resources available, as on profound philosophical debates about the nature of psychological research.

1.5 WHEN TO USE QUANTITATIVE RESEARCH

The situations in which quantitative research is most appropriate include the following:

 If the research question is very clear and specific.

 When there are enough theoretical and empirical evidence based on which
hypotheses can be derived and tested for the present research.

 If the measurement instruments are available and information can be collected with their help.

 If the researcher has a better knowledge of quantitative methods than of qualitative research.

 If the researcher is not confident about, motivated by, or interested in qualitative research.

1.6 WHEN TO USE QUALITATIVE RESEARCH METHOD

A researcher might consider using qualitative research methods in the following


circumstances:

 If the researcher is interested to understand a complex phenomenon in a natural


setting.

 If the researcher is not sure about the research problem and there is a lack of theory.

 If there is limited or no research on the topic under study.

 If the research is based on elaborate information and the complexity of language.

 If the researcher has a good understanding of qualitative research.

 When the use of questionnaires or structured schedules is not possible for that population or in that situation.

1.7 DIFFERENCE BETWEEN QUANTITATIVE VERSUS QUALITATIVE


APPROACHES

QUANTITATIVE APPROACH QUALITATIVE APPROACH

Measure objective facts Construct social reality, cultural meaning

Focus on variables Focus on interactive processes, events

Reliability the key factor Authenticity the key factor

Value free Values present and explicit

Separate theory and data Theory and data fused

Independent of context Situationally constrained

Many cases, subjects Few cases, subjects

Statistical analysis Thematic analysis

Researcher detached Researcher involved

Sources: Creswell (1994), Denzin and Lincoln (2003a), Guba and Lincoln (1994), Marvasti (2004), Mostyn (1985), and Tashakkori and Teddlie (1998).

1.8 QUANTITATIVE AND QUALITATIVE DATA AND LEVELS OF


MEASUREMENT

Data are also divided into two categories based on their characteristics, that is, whether it is numbers or words that are analysed. This affects the way they are collected, recorded and analysed. Much information is recorded as numbers; this type of data is called quantitative data, and numbers can be analysed using statistical techniques. However, a lot of useful information cannot be reduced to numbers. People’s judgements, feelings, emotions, ideas, beliefs, etc. can only be described in words. These record qualities rather than quantities, hence they are called qualitative data. Words cannot be manipulated mathematically, so they require quite different analytical techniques.

1.8.1 QUANTITATIVE DATA

Quantitative data can be measured more or less accurately because they contain some form of magnitude, usually expressed in numbers. We can use statistical procedures to analyse and interpret such data. The analysis can be simple, such as percentages, means or modes, or advanced, such as regression or factor analysis. Certain attributes, such as IQ, BP or weight, are purely quantitative. A few attributes may seem to be qualitative, such as voting behaviour or opinion, but they can be quantified if they are measured through a series of questions; the data can then be treated as quantitative.
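
As a small illustration of the ‘simple’ end of such analysis, the sketch below computes a percentage, a mean and a mode with Python’s standard library; the weights are made up for the example. The more advanced techniques mentioned above, such as regression or factor analysis, would normally be carried out with a statistical package.

```python
import statistics

# Hypothetical weights (in kg) for a small sample of participants.
weights = [52, 61, 61, 58, 70, 66, 61, 49]

mean_weight = statistics.mean(weights)       # average weight
modal_weight = statistics.mode(weights)      # most frequent weight
over_60 = sum(1 for w in weights if w > 60)  # simple count
percentage_over_60 = 100 * over_60 / len(weights)

print(f"Mean = {mean_weight:.1f} kg, Mode = {modal_weight} kg, "
      f"{percentage_over_60:.1f}% of the sample weigh over 60 kg")
```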

1.8.2 QUALITATIVE DATA

Qualitative data cannot be accurately measured and counted, and are generally expressed in words rather than numbers. Attributes such as ideas, beliefs, etc., which cannot be expressed directly or easily, can be captured as qualitative data. These kinds of data are therefore descriptive in character. They may appear to be less valuable, but they are actually rich and provide a clear idea about society. Qualitative research depends on careful definition of the meaning of words, the development of concepts and variables, and the plotting of interrelationships between these. Subjective phenomena such as happiness, loyalty and trust are real even though they are difficult to measure. Methods such as observation, interviews, literary transcripts, etc. are sources of qualitative data. Qualitative data also rely on human interpretation and evaluation and cannot be handled in a detached way. In order to make the data reliable and complete, different sources of data can be collected and confirmed using methods such as triangulation. In research, particularly research on human beings, it is good to combine both qualitative and quantitative data. In fact, there are many types of data that can be seen from both perspectives. For example, a questionnaire exploring people’s attitudes to work may provide a rich source of qualitative data about their aspirations and beliefs, but might also provide useful quantitative data about levels of skills and commitment. Importance needs to be given to carrying out the appropriate analysis of the data so as to provide a meaningful and clear output.

In psychological research, data are the empirical representation of a concept, in both the qualitative and the quantitative approach. Measurement is the tool used to connect data to the concept used in the study. Three features differentiate the quantitative from the qualitative approach to measurement. The first difference is timing. In quantitative research, once the variables are decided, the measurement operations are planned well ahead before proceeding to data collection. In qualitative research, on the other hand, measurement takes place during the data collection phase. A second difference involves the data themselves. In a quantitative study, the data will be in the form of numbers, obtained by using specific data collection procedures based on the objective of the study; because they are numbers, the data are considered precise in themselves. Any abstract idea is converted to a uniform and standardized form of numbers. In a qualitative study, data sometimes come in the form of numbers but mostly in the form of words, actions, symbols, etc. Unlike a quantitative study, a qualitative study does not convert all observations into a single, common form such as numbers, but allows them to take different forms, such as shapes and sizes, which may not be in a standard format. While numerical data convert information into a standard and condensed format, qualitative data are voluminous, diverse and nonstandard. A third
difference involves how we connect concepts with data. In quantitative research, we contemplate
and reflect on concepts before we gather data. We select measurement techniques to bridge
the abstract concepts with the empirical data. Of course, after we collect and examine the data,
we do not shut off our minds and continue to develop new ideas, but we begin with clearly
thought-out concepts and consider how we might measure them. In qualitative research, we
also reflect on concepts before gathering data. However, many of the concepts we use are
developed and refined during or after the process of data collection. We re-examine and reflect
on the data and concepts simultaneously and interactively. As we gather data, we are
simultaneously reflecting on it and generating new ideas. The new ideas provide direction and
suggest new ways to measure. In turn, the new ways to measure shape how we will collect
additional data. In short, we bridge ideas with data in an ongoing interactive process. To
summarize, we think about and make decisions regarding measurement in quantitative studies
before we gather data. The data are in a standardized uniform format: numbers. In contrast, in
a qualitative study, most of our thinking and measurement decisions occur in the midst of
gathering data, and the data are in diffuse forms.

1.8.3 MEASUREMENT OF DATA

Data can be measured in different ways depending on their nature. These are commonly
referred to as levels of measurement – nominal, ordinal, interval and ratio.

NOMINAL LEVEL

Nominal measurement is very basic – it divides the data into separate categories that can
then be compared with each other. By sorting out the data using names or labels you can build
up a classification of types or categories. This enables you to include or exclude particular
cases into the types and also to compare them. For example, buildings may be classified into
many types, e.g. commercial, industrial, educational, religious etc. Some definitions allow only
two types, e.g. sex (male or female), while others fall into a set number such as marital status
(single, married, separated, divorced or widowed). What is important is that every category is distinct and that there is no overlap between them, since overlap would make it difficult to decide where to place a particular piece of data. Ideally, all the data should be able to be categorized, though sometimes you will need a ‘remainders’ category for all those that cannot be. Nominal data can be analysed using only simple graphic and statistical techniques. Bar graphs, for example, can be used to compare the sizes of categories, and simple statistical properties, such as the percentage relationship of one subgroup to another or of one subgroup to the total group, can be explored.
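
The kind of analysis described for nominal data can be sketched in a few lines of Python. The building types below are hypothetical; the point is only that nominal values are counted and compared as percentages of the total group, exactly the figures a bar graph would display.

```python
from collections import Counter

# Hypothetical nominal data: the type of each building in a small survey.
buildings = ["commercial", "residential", "industrial", "commercial",
             "religious", "commercial", "educational", "residential"]

counts = Counter(buildings)
total = len(buildings)

# Percentage of each category relative to the total group.
for category, n in counts.most_common():
    print(f"{category:12s} {n:2d}  ({100 * n / total:.1f}%)")
```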

ORDINAL LEVEL

This type of measurement puts the data into order with regard to a particular property that
they all share, such as size, income, strength, etc. Precise measurement of the property is not
required, only the perception of whether one is more or less than the other. For example, a
class of children can be lined up in order of size without measuring their heights; the runners in
a marathon can be sorted by the order in which they finished the race. Likewise, we can measure
members of the workforce on an ordinal scale by calling them unskilled, semi-skilled or skilled.
The ordinal scale of measurement increases the range of statistical techniques that can be
applied to the data.
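
A brief sketch of how ordinal data are handled follows; the skill ratings are invented. The values can be placed in order and a median identified, but the distances between the categories are not assumed to be equal.

```python
# Hypothetical ordinal data: the skill level of eight workers.
levels = ["semi-skilled", "skilled", "unskilled", "skilled",
          "semi-skilled", "skilled", "unskilled", "semi-skilled"]

# Make the ordering explicit (1 < 2 < 3); the gaps carry no meaning.
rank = {"unskilled": 1, "semi-skilled": 2, "skilled": 3}
ordered = sorted(levels, key=rank.get)

# The median is a legitimate ordinal statistic: take the middle value.
median_level = ordered[(len(ordered) - 1) // 2]
print("Workers from least to most skilled:", ordered)
print("Median skill level:", median_level)
```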

INTERVAL LEVEL

With this form of measurement, the data must be measured precisely on a regular scale of some sort, but without a meaningful zero. Temperature scales are an example: in the Fahrenheit, Celsius and Réaumur scales, the gradation between each degree is equal to all the others, but the zero point has been established arbitrarily. Each scale measures temperature precisely, but the nought point of each is different. Another example is the calendar date; compare the Chinese and Western calendars. In the social sciences, some variables, such as attitudes, are frequently measured on a scale like this:

Unfavourable –4 –3 –2 –1 0 +1 +2 +3 +4 Favourable

Despite appearances, you must be cautious about interpreting this as a true interval scale, as the numbers are not precise measurements and indicate preferences on an essentially ordinal scale. The interval level of measurement allows yet more sophisticated statistical analysis to be carried out.

RATIO LEVEL

The ratio level of measurement is the most complete level of measurement, having a true
zero: the point where the value is truly equal to nought. Most familiar concepts in physical
science are both theoretically and operationally conceptualized at a ratio level of quantification,
e.g. time, distance, velocity, mass etc. A characteristic difference between the ratio scale and all
other scales is that the ratio scale can express values in terms of multiples of fractional parts,
and the ratios are true ratios. For example, a metre is a multiple (by 100) of a centimetre
distance; a millimetre is a tenth (a fractional part) of a centimetre. The ratios are 1:100 and
1:10. There is no ambiguity in the statements ‘twice as far’, ‘twice as fast’ and ‘twice as heavy’.
Of all levels of measurement, the ratio scale is amenable to the greatest range of statistical tests. In summary, a simple test of these properties (distinct categories, order, equal intervals, and a true zero) can be used to determine which level of measurement applies to the values of a variable.
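
The ‘simple test’ referred to above can be thought of as a checklist of the properties each level adds: distinct categories, then order, then equal intervals, then a true zero. As a rough aid (not part of the original text), the sketch below maps each level to examples of statistics conventionally regarded as appropriate for it; each level also inherits the statistics of the levels below it.

```python
# A conventional, simplified summary of statistics usually considered
# appropriate at each level of measurement.
PERMISSIBLE_STATISTICS = {
    "nominal":  ["frequency counts", "percentages", "mode", "chi-square test"],
    "ordinal":  ["median", "percentiles", "rank-order correlation"],
    "interval": ["mean", "standard deviation", "product-moment correlation"],
    "ratio":    ["geometric mean", "coefficient of variation", "ratios of values"],
}

def suggest_statistics(level):
    """Return statistics appropriate at the given level, including
    those inherited from all lower levels of measurement."""
    order = ["nominal", "ordinal", "interval", "ratio"]
    if level not in order:
        raise ValueError(f"Unknown level of measurement: {level}")
    allowed = []
    for lvl in order[: order.index(level) + 1]:
        allowed.extend(PERMISSIBLE_STATISTICS[lvl])
    return allowed

print(suggest_statistics("ordinal"))
# ['frequency counts', 'percentages', 'mode', 'chi-square test',
#  'median', 'percentiles', 'rank-order correlation']
```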

Self-learning exercise
1. Give one example of either qualitative or quantitative data under each level of
measurement.

1.9 TEN STEPS FOR CARRYING OUT QUALITATIVE RESEARCH


(Bromley, 1986)
 Clearly state the research issues or questions.

 Collect background information to help understand the relevant context, concepts


and theories.

 Suggest several interpretations or answers to the research problems or questions


based on this information.

 Use these to direct your search for evidence that might support or contradict these.
Change the interpretations or answers if necessary.

 Continue looking for relevant evidence. Eliminate interpretations or answers that


are contradicted, leaving one or more that are supported by the evidence.

 ‘Cross examine’ the quality and sources of the evidence to ensure accuracy and
consistency.

 Carefully check the logic and validity of the arguments leading to your conclusions.

 Select the strongest case in the event of more than one possible conclusion.

 If appropriate, suggest a plan of action in the light of this.

 Prepare your report as an account of your research.

These activities clearly show that there are strong links between data collection and
theory building. Ideally, the theoretical ideas should develop purely out of the data collected, the
theory being developed and refined as data collection proceeds. However, this is difficult to
achieve, as without some theoretical standpoint, it is hard to know where to start and what data
to collect! An alternative to this approach is to first devise a theory and then test it through the
analysis of data collected by field research. In this case the feedback loops for theory refinement
are not present in the process. Even so, theory testing often calls for a refinement of the theory
due to better understanding gained by the results of the analysis. There is room for research to
be pitched at different points between these extremes in the spectrum. Although it has been the
aim of many researchers to make qualitative analysis as systematic and as ‘scientific’ as possible,
there is still an element of ‘art’ in dealing with qualitative data. However, in order to convince
others of your conclusions, there must be a good argument to support them. A good argument
requires high quality evidence and sound logic. Qualitative research is practised in many
disciplines, so a range of data collection methods have been devised to cater for the varied
requirements of the different subjects, such as qualitative interviewing, focus groups, participant observation, discourse and conversation analysis, and the analysis of texts and documents.

1.10 VARYING RESEARCH CONTEXTS

The debate about qualitative research represents, to some extent, differences of interest
in the way psychology should be practised or applied. If you’re interested in the accuracy of
human perception in detecting colour changes, or in our ability to process incoming sensory
information at certain rates, then it seems reasonable to conduct highly controlled experimental
investigations using a strong degree of accurate quantification. If your area is psychology applied
to social work practice, awareness changes in ageing, or the experience of mourning, you are
more likely to find qualitative methods and data of greater use. But the debate also represents
fundamental disagreement over what is the most appropriate model for understanding human
behaviour and, therefore, the best way to further our understanding. A compromise position is
often found by arguing that the gathering of basically qualitative data, and its inspection and
analysis during the study can lead to the stimulation of new insights which can then be investigated
more thoroughly by quantitative methods at a later stage. This might still be considered a
basically positivist approach, however.

1.11 SUMMARY

This chapter dealt with the difference between qualitative and quantitative research methods. The importance of both methods in the discipline of psychology was emphasized, and the different ways of differentiating qualitative and quantitative research were discussed. Qualitative and quantitative research differ in the way the data are collected, the way they are analysed, and in the data themselves. The two types of data, namely qualitative and quantitative data, were discussed, with special mention of the levels of measurement. The guidelines or points to be remembered when carrying out either qualitative or quantitative research were also spelt out.

1.12 KEYWORDS

Quantitative research: In quantitative research, the data are collected and presented in
the form of numbers—average scores for different groups on some task, percentages of people
who do one thing or another, graphs and tables of data, and so on.

Qualitative research: Qualitative research methods yield narratives or summaries of the information collected rather than a statistical picture of it.

Quantitative data: Quantitative data can be measured more or less accurately because they contain some form of magnitude, usually expressed in numbers.

Qualitative data: Qualitative data cannot be accurately measured and counted, and are
generally expressed in words rather than numbers.

1.13 CHECK YOUR PROGRESS


1. What are the differences between quantitative and qualitative approach?

2. Give examples of qualitative and quantitative data.

3. Write a method of data collection for qualitative and quantitative research each.

1.14 ANSWERS TO CHECK YOUR PROGRESS


1. Refer 1.7.

2. Qualitative data: Belief in God Quantitative: Weight

3. Qualitative data collection: Interview Quantitative: Questionnaire



1.15 MODEL QUESTIONS


1. Write an essay on the difference between qualitative and quantitative research?
2. Differentiate between qualitative and quantitative data.
3. Write the ten steps to carry out any qualitative research.
4. Explain qualitative and quantitative methods as a continuum.

REFERENCES
Beins, B. C., & McCarthy, M. A. (2012). Research Methods and Statistics. New Delhi,
India: Pearson Education Inc.
Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London,
England: Routledge.
Cozby, P. C., & Bates, S. C. (2015). Methods in Behavioural Research (12th ed). New
York, NY: McGraw Hill Education.
Crano, W. D., & Brewer, M. B. (2002). Principles and Methods of Social Research. Mahwah,
NJ: Lawrence Erlbaum Associates Publishers.
Goodwin, J. C. (2010). Research in Psychology: Methods and Design (6th ed). Hoboken,
NJ: John Wiley & Sons.
Gravetter, F. J., & Forzano, L-A. B. (2012). Research Methods for the Behavioural Sciences
(4th ed). Belmont, CA: Wadsworth Cengage Learning.
Howitt, D., & Cramer, D. (2011). Introduction to Research Methods in Psychology. Harlow, Essex: Pearson Education Inc.
Lovely Professional University. (2012). Research Methodology. Retrieved from: http://ebooks.lpude.in/management/mba/term_2/DCOM408_DMGT404_RESEARCH_METHODOLOGY.pdf
Newman, I., & Ridenour, C. (1998). Qualitative-Quantitative Research Methodology: Exploring the Interactive Continuum. Educational Leadership Faculty Publications, 122.
Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches. New York, NY: Pearson Education Ltd.
Pandey, P., & Pandey, M. M. (2015). Research Methodology: Tools and Techniques. Buzau, Al. Marghiloman.
Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2012). Research Methods in Psychology (9th ed). New York, NY: McGraw Hill Education.
Walliman, N. (2011). Research Methods: The Basics. New York, NY: Routledge.

LESSON - 2
HISTORY OF QUANTITATIVE AND QUALITATIVE
RESEARCH
INTRODUCTION

This chapter gives an idea of the research strategies and the history of quantitative and qualitative research. The term ‘quantitative’ refers to research that is carried out using variables that vary in quantity (number). The data obtained are usually in numerical form, which can be scored, analyzed, and interpreted using statistical analysis. There is, however, another approach to obtaining facts or information. The alternative is known as ‘qualitative’ research. The main distinction between quantitative and qualitative research is the type of data they generate. In order to understand the importance of these two types of research, one should know the history of the development of both.

OBJECTIVES OF THIS LESSON

After studying this lesson you will be able to:

 Explain the two major methods of research

 Understand the history of the development of qualitative and quantitative research.

 Describe the measurement process in quantitative and qualitative research

PLAN OF STUDY
2.1 Qualitative and Quantitative Research

2.2 History of Qualitative and Quantitative Research

2.3 History of Qualitative-Quantitative Research in Psychology

2.4 The Measurement Process

2.4.1 Conceptualization and Operationalization

2.4.2 Quantitative Conceptualization and Operationalization

2.4.3 Qualitative Conceptualization and Operationalization.

2.4.3.1 Casing

2.5 Summary

2.6 Key Words

2.7 Check your Progress

2.8 Answers to Check your Progress

2.9 Model Questions

2.1 QUALITATIVE AND QUANTITATIVE RESEARCH

All behavioural research is made up of a combination of qualitative and quantitative constructs. Qualitative methods are frequently beginning points, foundational strategies, which are often followed by quantitative methodologies. Qualitative research designs in the social sciences stem from traditions in anthropology and sociology, where the philosophy emphasizes the phenomenological basis of a study and the elaborate description of the “meaning” of phenomena for the people or culture under examination.

Qualitative research methods include ethnography, case studies, field studies, grounded
theory, document studies, naturalistic inquiry, observational studies, interview studies, and
descriptive studies. During the process, notes are taken and summarized using different analysis
meant for qualitative research and the findings are interpreted accordingly.

On the other hand, quantitative modes have been the most widely used methods of research in the social sciences. Quantitative designs include experimental studies, quasi-experimental studies, pretest-posttest designs, and others (Campbell & Stanley, 1963), in which control of variables, randomization, and valid and reliable measures are required, and in which generalizability from the sample to the population is the aim.

For example, to conduct a study on the adjustment problems faced by college students, a qualitative researcher may ask a series of questions and prepare a report based on an interpretation of the information given in the answers. A quantitative researcher, on the other hand, uses an existing tool or develops a new tool based on the objective of the study and, after standardization, administers the tool to the college students. Based on the scores obtained on the different dimensions, the overall adjustment problems are identified.

Jean Piaget developed his theory of cognitive development using his own child as a subject and the observation method. This can be considered an example of qualitative research: the theory was developed from the overall behavioural pattern of the child and not from measures taken on many individuals. The difference between qualitative and quantitative looks quite simple, but it is actually a little complicated to comprehend. For example, variables such as gender, religion, nationality, etc. hold values which are not numbers; even though they are variables, their values are considered qualitative. However, when the frequency of occurrence of each subgroup of the variable is measured and expressed in numbers, for example ‘25% of the group are males and the rest are females’, this is considered quantitative research.

‘Quantification’ means to measure on some numerical basis, if only by frequency. Whenever we count or categorise, we quantify; separating people according to their nationality is quantification. In quantitative research, the raw scores are simply the value for each individual, either in the form in which it is collected or after the scoring procedure. Qualitative research, on the other hand, gives importance to words, meanings, experiences, explanations and so on. Here the raw data are the facts or information given by people as such, or the researcher’s report during observation. Qualitative data can later be quantified to some extent.

2.2 HISTORY OF QUALITATIVE – QUANTITATIVE RESEARCH

Qualitative and quantitative research have philosophical roots in the naturalistic and
the positivistic philosophies, respectively. Virtually all qualitative researchers, regardless of their
theoretical differences, reflect some sort of individual phenomenological perspective. Most
quantitative research approaches, regardless of their theoretical differences, tend to emphasize
that there is a common reality on which people can agree. From a phenomenological perspective,
Douglas (1976) and Geertz (1973) believe that multiple realities exist and multiple interpretations
are available from different individuals that are all equally valid. Reality is a social construct. If
one functions from this perspective, how one conducts a study and what conclusions a researcher
draws from a study are considerably different from those of a researcher coming from a
quantitative or positivist position, which assumes a common objective reality across individuals.
There are different degrees of belief in these sets of assumptions about reality among qualitative
and quantitative researchers. For instance, Blumer (1980), a phenomenological researcher
who emphasizes subjectivity, does not deny that there is a reality one must attend to. The
debate between qualitative and quantitative researchers is based upon the differences in
assumptions about what reality is and whether or not it is measurable. The debate further rests
on differences of opinion about how we can best understand what we “know,” whether through
objective or subjective methods.

Qualitative research was first used by anthropologists and sociologists as a method of


enquiry in the early decades of the 20th century. In the 1920s and 1930s, social anthropologists
Malinowski (1920) and Mead (1935), and sociologists Park and Burgess (1925) contributed
greatly to qualitative research. The period from 1900 to 1945 is called the ‘traditional age’ of
qualitative research. During this period, qualitative data analysis aimed at a more or less objective
description of social phenomena in society or in other cultures.

Much of the literature of qualitative research and its textbooks begins in the 1960s and
1970s (Flick, 2014). The period from 1950 to 1970 is the second stage, called the ‘golden age’
of qualitative research, which experienced the modernist approach (the modernist phase). In this
period, data analysis was driven by various ways of coding for materials often obtained from
participant observation.

The symbolic interaction perspective (Becker et al., 1961), the development of grounded theory (Glaser & Strauss, 1967), and the work in ethnomethodology (Garfinkel, 1967) ushered in ‘modern qualitative research’ (Spradley, 1980). The period from 1970 to 1986 is that of ‘blurred genres’, when a variety of new interpretive, qualitative perspectives, such as hermeneutics, structuralism, semiotics, phenomenology, cultural studies, and feminism, developed. In this period, the first software programs and packages for computer-supported data analysis were also developed (Geertz, 1973). During the period from 1986 to 1990, the crisis of representation, researchers struggled with how to locate themselves and their subjects in reflexive texts.

The ‘postmodern period’ of qualitative research started in 1990 and lasted till 1995. It is a
period of experimental and new ethnographies. During this period narratives have replaced
theories, or theories are read as narratives. The end of grand narratives is proclaimed; the
accent is shifted towards theories and narratives that fit specific, delimited, local, historical
situations, and problems (Denzin & Lincoln, 2005).

The period of ‘post-experimental inquiry’ runs from 1995 to 2000. During this period the linkage of qualitative research to democratic policies becomes more prominent. The ‘methodologically contested’ moment covers 2000 to 2010; it is characterized by the further establishment of qualitative research through various new journals.

The ‘future period’ begins in 2010 and confronts the methodological repercussions associated with the evidence-based social movement. The development of qualitative research has focused on the rise of evidence-based practice as the new criterion of relevance for social science, and on the new conservatism in the USA (Denzin & Lincoln, 2005). This history of qualitative research is largely limited to the USA, although the tradition itself started in the 15th and 16th centuries under the banner of descriptive anthropology or ethnography (Denzin & Lincoln, 2005).

Distinction between qualitative and quantitative research:

Firestone (1987), in an article in the Educational Researcher, differentiates qualitative from quantitative research on four dimensions: assumptions, purpose, approach, and research role. Under ‘assumptions’, Firestone asks whether objective reality is sought through facts or whether reality is socially constructed; under ‘purpose’, whether the research looks for causes or for understanding; under ‘approach’, whether the research is experimental, correlational or a form of ethnography; and, regarding the researcher’s role, whether the researcher is detached from or immersed in the setting.

The term ‘positivism’ will further help in understanding the quantitative–qualitative debate
(differentiation). Positivism is a particular epistemological position. Epistemology is the study
of, or theory of, knowledge. It is concerned with the methodology of knowledge (how we go
about knowing things) and the validation of knowledge (the value of what we learn). Prior to the
emergence of positivism during the nineteenth century, two methods of obtaining knowledge
dominated:

Theism: According to this method, knowledge was grounded in religion which enabled
one to know because truth and knowledge were revealed spiritually. Most religious texts contain
explanations and descriptions of the nature of the universe, morality and social order.

Metaphysics: This is the second method which held that knowledge was about the nature
of our being in the world and was revealed through theoretical philosophising.

Positivism was first articulated in the philosophy of Auguste Comte in nineteenth-century France. He stressed the importance of observable (and observed) facts in the valid accumulation of knowledge, a small step towards scientific method in general. More importantly in this context, positivism is at the root of scientific psychology. Positivism applies equally to quantitative methods and to qualitative research methods.

Qualitative researchers tend to regard the search for the nature of reality as a less effective
pursuit. Some qualitative analysts will point to the fact that much research in psychology and
the social sciences relies on data in the form of language. Language, however, they say, is not
reality but just a window on reality. Furthermore, different people will give a different view of
reality. They conclude that instead of searching for reality one should just study the diversity of
what is seen through the different windows.

Every method of measuring reality is fallible, but if we use different measures and they concur, then maybe we are moving towards our superordinate goal. One of the reasons why our data are problematic is that our observations are theory laden. That is, the observer comes to the observation with biases and expectations. That baggage will include our culture, our vested interests and our general perspective on life, for example. Psychologists also see the world through these windows. One strategy to overcome our preconceptions is to throw our observations before others for their critical response as part of the process of analysing our observations. This can be done against agreed standards or through reliable measurement, which can be achieved using statistical analysis. This requires quantitative data. Yet some of the most important figures in positivistic psychology, such as Skinner, had little or no time for statistics and did not use them in their work. Similarly, atheoretical empiricism – virtually the collection and analysis of data for their own sake – has nothing to do with positivism, which is about knowing the world rather than accumulating data as such.

2.3 HISTORY OF QUALITATIVE–QUANTITATIVE RESEARCH IN


PSYCHOLOGY

Laboratory experimentation is one of the important methods of research, and quantitative statistical analysis is frequently used to interpret the data collected. This is because experimentation and quantitative analysis are considered to provide the most reliable and authentic information. The setting-up of the psychology laboratory at Leipzig University by Wilhelm Wundt in 1879 was a crucial moment for psychologists, according to psychology’s historians (Howitt, 1992). A number of famous American psychologists were trained at that laboratory. Wundt, however, did not believe that the laboratory was the place for all psychological research. He stated that the laboratory may not be an appropriate place to conduct research related to culture, for example. Nevertheless, the psychological laboratory came to be considered a significant and important instrument in the field of psychology.

2.4 THE MEASUREMENT PROCESS

2.4.1 Conceptualization and Operationalization

Conceptualization refers to the process of defining a particular abstract concept. The


definitions can be either conceptual or based on any theory. Writing a conceptual definition is
not an easy task. It comprises understanding the various dimensions of the concept thoroughly,
reading up on the topic and also discussing it with others. A good definition has one clear,
explicit, and specific meaning with less or no ambiguity or vagueness. Conceptualisation is one
of the processes through which innovative insights regarding the concept can be developed.
Conceptualization is the process of thinking through the various possible meanings of a construct.

A single construct can have several definitions and it is not essential that people have to
agree over all the definitions. Some of the constructs are highly complex and not easy to define,
e.g., Love. There are certain constructs which are concrete and therefore, easy to define e.g.,
Gender. Before we can develop a measure on any construct, it is essential that we must define
it. Only after we have defined the construct, we can proceed to measure the construct. If we
want to measure the construct of honesty of employees working in a company, we can have a
look at their employee records, ask their co-workers and we can also additionally create a
questionnaire that assesses honesty. Measuring concepts that involve feedback from others
also involves a concept of social desirability. People can become self-conscious and may respond
to questions asked with the intention of protecting their self-image and may also engage in a
tendency to conform to the majority.

Operationalization is the process through which a conceptual definition is linked to a set of measurement techniques. The operational definition can be measured using a questionnaire, an observation technique, etc.

We often can measure a construct in several ways; some are better and more practical
than other ways. The key point is that we must fit the measure to the specific conceptual
definition by working with all practical constraints within which we must operate (e.g., time,
money, available participants). We can develop a new measure from scratch or use one that
other researchers are using. Operationalization connects the language of theory with the
language of empirical measures. Theory has many abstract concepts, assumptions, definitions,
and cause-and-effect relations. By contrast, empirical measures are very concrete actions in
specific, real situations with actual people and events. Measures are specific to the operations or actions we engage in to indicate the presence or absence of a construct as it exists in concrete, observable reality.

2.4.2 Quantitative Conceptualization and Operationalization

Quantitative measurement involves conceptualization, operationalization, and the application of the operational definition, that is, the collection of data. Linking abstract ideas to measurement procedures produces precise information in the form of numbers. This is done through rules of correspondence or an auxiliary theory.

The purpose of the auxiliary theory is to link the conceptual definitions of constructs to
concrete operations for measuring the constructs. Rules of correspondence are logical
statements of the way an indicator corresponds to an abstract construct.

Alienation measurement: The different parts of alienation include family relations, work relations, relations with the community, and relations with friends. An auxiliary theory may specify that certain behaviours or feelings in each sphere of life are solid evidence of alienation. In the sphere of work, the theory says that if a person feels a total lack of control over when, where, and with whom he or she works, what he or she does when working, or how fast he or she must work, that person is alienated.
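A rule of correspondence of this kind can be written as a simple logical condition over indicators. The short Python sketch below is a rough illustration only: the indicator names and the all-or-nothing criterion are assumptions introduced for the example and do not come from any standard alienation measure.

    # Hypothetical work-sphere indicators: True means the respondent reports
    # having no control at all over that aspect of work.
    work_indicators = {
        "when_to_work": True,
        "where_to_work": True,
        "with_whom_to_work": True,
        "what_to_do_when_working": True,
        "how_fast_to_work": True,
    }


    def alienated_in_work_sphere(indicators):
        """Illustrative rule of correspondence: a total lack of control over
        every listed aspect of work counts as evidence of alienation in the
        work sphere."""
        return all(indicators.values())


    print(alienated_in_work_sphere(work_indicators))  # prints True

In practice, researchers would rarely rely on such an all-or-nothing rule; graded indicators combined into a composite score are more common, as the later teacher-morale example shows.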

Levels of definitions: Conceptual, Operational, and Empirical.

The first and most abstract level concerns the causal relationship between two constructs, that is, a conceptual hypothesis. At the second level, that of operational definitions, we test an empirical hypothesis to determine the degree of association between indicators, using correlations, statistics, questionnaires, and the like. The third level is the empirical reality of the lived social world. Linking the operational indicators (e.g., questionnaire items) to a construct (e.g., alienation) captures what happens in the social world and relates it back to the conceptual level.

Measurement thus moves deductively through the three levels, from the abstract to the concrete. First, conceptualize the variable by giving it a clear conceptual definition; then operationalize it by developing an operational definition or a set of indicators for it; and lastly, apply the indicators to collect data and test empirical hypotheses. Let us consider the example of teacher morale. How do I give my teacher morale construct an operational definition? Morale is a mental state or feeling, measured indirectly through people’s words and actions.

For this example,

•  Read the research reports of others and see whether a good indicator already exists.

•  If there are no existing indicators, invent one from scratch.

•  Develop a questionnaire for teachers and ask them about their feelings toward the dimensions of morale in the definition.

•  Observe the teachers in the teachers’ lounge, interacting with students, and attending school activities.

•  Use school personnel records on teacher behaviours for indications of morale (e.g., absences, requests for letters of recommendation for other jobs, performance reports).

•  Survey students, school administrators, and others to find out what they think about teacher morale.

Based on the indicators chosen, further refine and develop the conceptual definition.

Conceptualization and operationalization are necessary for each variable. In the preceding
example, morale is one variable, not a hypothesis. It could be a dependent variable caused by
something else, or it could be an independent variable causing something else. It depends on
the theoretical explanation.
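To make the move from the conceptual to the operational and empirical levels more concrete, the short Python sketch below combines a handful of hypothetical questionnaire items into a single teacher-morale score. The items, the 1-5 response scale, the reverse-scored item and the averaging rule are all assumptions made purely for illustration; an actual study would use indicators chosen and refined through the steps listed above, not this invented set.

    # A minimal sketch of operationalizing "teacher morale" as a questionnaire
    # score. The items and scoring rule are hypothetical, not a validated scale.
    ITEMS = [
        "I look forward to coming to school each day.",
        "I feel my work as a teacher is valued.",
        "I would recommend teaching at this school to others.",
        "I have considered leaving this school.",          # reverse-scored
        "I feel supported by the school administration.",
    ]
    REVERSE_SCORED = {3}          # index of the reverse-scored item
    SCALE_MIN, SCALE_MAX = 1, 5


    def morale_score(responses):
        """Average the item responses (1-5) into a single morale indicator.
        Reverse-scored items are flipped so that a higher score always means
        higher morale."""
        if len(responses) != len(ITEMS):
            raise ValueError("One response per item is required.")
        adjusted = [
            (SCALE_MIN + SCALE_MAX) - r if i in REVERSE_SCORED else r
            for i, r in enumerate(responses)
        ]
        return sum(adjusted) / len(adjusted)


    # One hypothetical teacher's responses on the 1-5 scale.
    print(morale_score([4, 5, 4, 2, 5]))    # prints 4.4

The numerical score is only the empirical end of the chain; whether it is a good measure of morale still depends on how well the items reflect the conceptual definition developed earlier.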

2.4.3 Qualitative Conceptualization and Operationalization

In qualitative research, instead of beginning with a refined definition, the process begins with a simple working definition. Once this simple definition is framed, data collection proceeds, and the initial definition is adjusted as the data begin to make sense. As the data make sense, new concepts and themes are added to refine the definition. In qualitative studies, the process of operationalization often precedes conceptualization. The conceptual definition is created from the initial working ideas and the observations made while gathering data. Operationalization describes how specific observations or data are gathered and how they evolve into abstract constructs. Just as quantitative operationalization deviates from a rigid deductive process, qualitative researchers draw ideas from beyond the data of a specific research setting. Qualitative operationalization includes using pre-existing techniques and concepts that blend with those that emerge during the data collection process.

Fantasia’s (1988) field research on contested labour actions illustrates qualitative operationalization. Fantasia used cultures of solidarity as a central construct. He related this
construct to ideas of conflict-filled workplace relations and growing class consciousness among
non-managerial workers. He defined a culture of solidarity as a type of cultural expression
created by workers that evolves in particular places over time. The workers over time develop
shared feelings and a sense of unity that is in opposition to management and business owners.
It is an interactive process. Slowly over time, the workers arrive at common ideas, understandings,
and actions. It is “less a matter of disembodied mental attitude than a broader set of practices
and repertoires available for empirical investigation” (Fantasia, 1988, p.14). To operationalize
the construct, Fantasia describes how he gathered data. He presents them to illustrate the
construct, and explains his thinking about the data. He describes his specific actions to collect
the data (e.g. he worked in a particular factory, attended a press conference, and interviewed
people). He also shows us the data in detail (e.g., he describes specific events that document
the construct by showing several maps indicating where people stood during a confrontation
with a foreperson, retelling the sequence of events at a factory, recounting actions by
management officials, and repeating statements that individual workers made). He gives us a
look into his thinking process as he reflected and tried to understand his experiences and
developed new ideas drawing on older ideas.

2.4.3.1 Casing

In qualitative research, ideas and evidence are mutually interdependent. This applies
particularly to case study analysis. By analyzing a situation, the researcher organizes data and
applies ideas simultaneously to create or specify a case. Making or creating a case, called
casing, brings the data and theory together. Determining what to treat as a case resolves a
tension or strain between what the researcher observes and his or her ideas about it. “Casing,
viewed as a methodological step, can occur at any phase of the research process, but occurs
especially at the beginning of the project and at the end” (Ragin, 1992b:218).

2.5 SUMMARY

This lesson explained the history of qualitative research, with special reference to the historical milestones of quantitative and qualitative research in psychology. The measurement process differs between qualitative and quantitative research, and therefore the conceptualization and operationalization of constructs in these two types of research were also elaborated with suitable examples. The difference between quantitative and qualitative research is not easy to explain. The choice of approach rests with the psychologist when planning research, and a mixed method combining both is also used by many social science researchers. There is often an overlap or combination of qualitative and quantitative research in order to bring out relevant and meaningful information from the study undertaken.

2.6 KEYWORDS

Qualitative research: Qualitative research involves gathering information; throughout the process, notes are taken. These are summarized using different forms of analysis meant for qualitative research, and the findings are interpreted accordingly.

Quantitative research: ‘Quantification’ means to measure on some numerical basis.


Whenever we count or categorise, we quantify. In quantitative research, the raw scores are
nothing but the value of each individual in the form it is collected or after the scoring procedure.

Conceptualization: Conceptualization is the process of thinking through the various possible meanings of a construct.

Operationalization: Operationalization is the process through which a conceptual definition is linked to a set of measurement techniques. The operational definition can be measured using a questionnaire, an observation technique, etc.

2.7 CHECK YOUR PROGRESS


1. Give a few research methods used for qualitative and quantitative research.

2. Who used qualitative research methods in the earlier days?

3. Qualitative and quantitative research have their philosophical roots in ____ and ____ philosophies, respectively.

4. What are the four dimensions on which Firestone (1987) differentiates qualitative from quantitative research?

2.8 ANSWERS TO CHECK YOUR PROGRESS


1. Qualitative: Observation, field study, interview

Quantitative: Experimental, Questionnaire

2. Anthropologists and Sociologists

3. Naturalistic and Positivistic

4. Assumptions, purpose, approach and research role.

2.9 MODEL QUESTIONS


1. Write the history of qualitative research and quantitative research.

2. State the historical development of qualitative and quantitative research in the field
of psychology.

3. Differentiate between qualitative and quantitative research based on the measurement process involved.

4. Write the process of conceptualization and operationalization in qualitative and quantitative research.

REFERENCES
Walliman, N. (2011). Research Methods: The Basics. NY: Routledge

Beins, B. C., & McCarthy, M. A. (2012). Research Methods and Statistics. New Delhi,
India: Pearson Education Inc.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London,
England: Routledge.

Cozby, P. C., & Bates, S. C. (2015). Methods in Behavioural Research (12th ed). New
York, NY: McGraw Hill Education.

Crano, W. D., & Brewer, M. B. (2002). Principles and Methods of Social Research. Mahwah,
NJ: Lawrence Erlbaum Associates Publishers.

Goodwin, J. C. (2010). Research in Psychology: Methods and Design (6th ed). Hoboken,
NJ: John Wiley & Sons.

Gravetter, F. J., & Forzano, L-A. B. (2012). Research Methods for the Behavioural Sciences
(4th ed). Belmont, CA: Wadsworth Cengage Learning.

Howitt, D., & Cramer, D. (2011). Introduction to Research Methods in Psychology. Harlow,
Essex: Pearson Education Inc.

Lovely Professional University. (2012). Research Methodology. Retrieved from: http://ebooks.lpude.in/management/mba/term_2/DCOM408_DMGT404_RESEARCH_METHODOLOGY.pdf

Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches. New York, NY: Pearson Education Ltd.

Newman, I., & Ridenour, C. (1998). Qualitative-Quantitative Research Methodology: Exploring the Interactive Continuum. Educational Leadership Faculty Publications, 122.

Pandey, P. & Pandey, M. M. (2015). Research Methodology: Tools and Techniques. Buzau,
Al. Marghiloman.

Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2012). Research Methods in Psychology (9th ed). New York, NY: McGraw Hill Education.

LESSON - 3
ETHICS IN RESEARCH
INTRODUCTION

Research in psychology involves fewer risks and ethical concerns than research in the life sciences. However, certain unethical and notorious studies conducted in the past initiated the development of ethical standards for psychological research in order to protect human and animal participants. The American Psychological Association has developed a set of guidelines that has evolved over the past half century. Many researchers in disciplines other than psychology also rely on these guidelines. Because we deal with people, we need to follow these guidelines seriously. The ethical issues that have arisen are the ones that psychological researchers must consider in planning their research, even though most psychological research is ethically trouble free and poses minimal or no risk to participants. This lesson deals with the ethical guidelines to be followed in psychological research.

OBJECTIVES OF THIS LESSON

After studying this lesson you will be able to:

•  Understand the importance of ethics in research

•  Understand the ethical code of the APA

•  Learn about the ethical standards to be followed by researchers

PLAN OF THE STUDY


3.1 Ethics in Research

3.2 Basic Ethical Guidelines

3.2.1 Potential benefits

3.2.2 Potential costs

3.2.3 Balancing benefits and costs

3.3 Ethical Guidelines - American Psychological Association

3.4 APA Ethics Code

3.4.1 Ethics Code: Five Principles



3.5 Institutional Review Board

3.6 Institutional Approval

3.7 Ethical Standards

3.7.1 Informed consent

3.7.2 Deception

3.7.3 Debriefing

3.7.4 Confidentiality

3.7.5 Invasion of privacy

3.7.6 Coercion to participate

3.7.7 Inducements to participate

3.7.8 Physical and mental stress

3.7.9 Scientific misconduct

3.7.10 Betrayal

3.7.11 Data falsification

3.8 Ethics and Plagiarism

3.9 Summary

3.10 Key Words

3.11 Check your Progress

3.12 Answers to Check your Progress

3.13 Model Questions

3.1 ETHICS IN RESEARCH

Professional competence and integrity are essential for ensuring high-quality science. Maintaining the integrity of the scientific process is a shared responsibility of individual scientists and the community of scientists (as represented by professional organizations such as the APA). Each individual has an ethical responsibility to seek knowledge and to work towards the betterment of society. Diener and Crandall (1978) identify several responsibilities that follow from this general mandate. Scientists should carry out research in a competent manner; report results accurately; manage research resources honestly; fairly acknowledge, in scientific communications, the individuals who have contributed their ideas or their time and effort; consider the consequences to society of any research endeavour; and speak out publicly on societal concerns related to a scientist’s knowledge and expertise. In carrying out these tasks, scientists face many questions, especially regarding ethical standards. In discussing ethics in psychological research, the famous studies of Stanley Milgram (1963) and Philip Zimbardo (1972) are important to consider. Milgram’s research participants thought they were delivering electrical shocks to another person, to the extent that the other person might have died. Zimbardo created a prison simulation that led participants, all of them students, to treat one another very brutally. Even though such studies are very rare, it is important to understand the impact of any research on individuals. In clinical research especially, it is essential to follow ethical guidelines, since harm ranging from mild to severe may otherwise result.

3.2 BASIC ETHICAL GUIDELINES

Whatever their personal feelings about such matters, behavioural researchers are bound
by a set of ethical guidelines. These principles are formulated by professional organizations such
as the American Psychological Association (APA). The APA’s Ethical Principles of Psychologists
and Code of Conduct (1992) sets forth ethical standards that psychologists must follow in all
areas of professional life, including therapy, evaluation, teaching, and research. To help
researchers make sound decisions regarding ethical issues, the APA has also published a set
of guidelines for research that involves human participants, as well as regulations for the use
and care of nonhuman animals in research. Also, the division of the APA for specialists in
developmental psychology has set additional standards for research involving children.
Behavioural researchers are also bound by regulations set forth by the federal government, as
well as by state and local laws. Concerned about the rights of research participants, the surgeon
general of the United States issued a directive in 1966 that required certain kinds of research to
be reviewed to ensure the welfare of human research participants. Since then, a series of
federal directives have been issued to protect the rights and welfare of the humans and other
animals who participate in research. The official approach to research ethics in both the APA
principles and federal regulations is essentially a utilitarian or pragmatic one. Rather than
specifying a rigid set of do’s and don’ts, these guidelines require that researchers weigh potential
benefits of the research against its potential costs and risks. Thus, in determining whether to conduct a study, researchers must consider its likely benefits and costs. Weighing the pros and cons of a study is called a cost-benefit analysis.

3.2.1 POTENTIAL BENEFITS

Behavioural research has five potential benefits that should be considered when a cost-
benefit analysis is conducted.

Basic Knowledge. The most obvious benefit of research is that it enhances our
understanding of behavioural processes. Of course, studies differ in the degree to which they
are expected to enhance knowledge. In a cost-benefit analysis, greater potential risks and
costs are considered permissible when the contribution of the research is expected to be high.

Improvement of Research or Assessment Techniques. Some research is conducted to improve the procedures that researchers use to measure and study behaviour. The benefit of
such research is not to extend knowledge directly but rather to improve the research enterprise
itself. Of course, such research has an indirect effect on knowledge by providing more reliable,
valid, useful, or efficient research methods.

Practical Outcomes. Some studies provide practical benefits by directly improving the
welfare of human beings or other animals. For example, research in clinical psychology may
improve the quality of psychological assessment and treatment, studies of educational processes
may enhance learning in schools, tests of experimental drugs may lead to improved drug therapy,
and investigations of prejudice may reduce racial tensions.

Benefits for Researchers. Those who conduct research usually stand to gain from their
research activities. First, research serves an important educational function. Through conducting
research, students gain firsthand knowledge about the research process and about the topic
they are studying. Indeed, college and university students are often required to conduct research
for class projects, senior research, master’s theses, and doctoral dissertations. Experienced
scientists also benefit from research. Not only does research fulfil an educational function for
them as it does for students, but many researchers must conduct research to maintain their
jobs and advance in their careers.

Benefits for Research Participants. The people who participate in research may also
benefit from their participation. Such benefits are most obvious in clinical research in which
participants receive experimental therapies that help them with a particular problem. Research participation also can serve an educational function as participants learn about behavioural
science and its methods. Finally, some studies may, in fact, be enjoyable to participants.

3.2.2 POTENTIAL COSTS

Benefits such as these must be balanced against potential risks and costs of the research.
Some of these costs are relatively minor. For example, research participants invest a certain
amount of time and effort in a study; their time and effort should not be squandered on research
that has limited value. More serious are risks to participants’ mental or physical welfare.
Sometimes, in the course of a study, participants may suffer social discomfort, threats to their
self-esteem, stress, boredom, anxiety, pain, or other aversive states. Participants may also
suffer if the confidentiality of their data is compromised and others learn about their responses.
Most serious are studies in which human and nonhuman animals are exposed to conditions
that may threaten their health or lives. We will return to these kinds of costs, and to how researchers can protect participants against them, shortly. In addition to the risks and costs to research participants, there are other costs of research as well. Conducting research costs
money in terms of salaries, equipment, and supplies, and researchers must determine whether
their research is justified financially. In addition, some research practices may be detrimental to
the profession or to society at large. For example, the use of deception may promote a climate
of distrust toward behavioural research.

3.2.3 BALANCING BENEFITS AND COSTS

The issue facing the researcher, then, is whether the benefits expected from a particular
study are sufficient to warrant the expected costs. A study with only limited benefits warrants
only minimal costs and risks, whereas a study that may potentially make an important contribution
may permit greater costs. Of course, researchers themselves may not be the most objective
judges of the merits of a piece of research. For this reason, federal guidelines require that research be approved by an Institutional Review Board (IRB), which is discussed in detail later in this lesson.

THE RISK/BENEFIT RATIO • A subjective evaluation of the risks and benefits of a research
project is used to determine whether the research should be conducted.

In addition to checking if appropriate ethical principles are being followed, an IRB considers
the risk/benefit ratio for a study. Society and individuals benefit from research when new
knowledge is gained and when treatments are identified that improve people’s lives. There are also potential costs when research is not done. We miss the opportunity to gain knowledge
and, ultimately, we lose the opportunity to improve the human condition. Research can also be
costly to individual participants if they are harmed during a research study. The principal
investigator must, of course, be the first one to consider these potential costs and benefits. An
IRB is made up of knowledgeable individuals who do not have a personal interest in the research.
As such, an IRB is in a better position to determine the risk/benefit ratio and, ultimately, to
decide whether to approve the proposed research. The risk/benefit ratio asks the question “Is it
worth it?” There are no mathematical answers to the risk/benefit ratio. Instead, members of an
IRB rely on a subjective evaluation of the risks and benefits both to individual participants and
to society, and ask, are the benefits greater than the risks? When the risks outweigh the potential
benefits, then the IRB does not approve the research; when the benefits outweigh the risks, the
IRB approves the research. Many factors affect the decision regarding the proper balance of
risks and benefits of a research activity. The most basic are the nature of the risk and the
magnitude of the probable benefit to the participant as well as the potential scientific and social
value of the research (Fisher & Fryberg, 1994). Greater risk can be tolerated when clear and
immediate benefits to individuals are foreseen or when the research has obvious scientific and
social value. For instance, a research project investigating a new treatment for psychotic
behaviour may entail risk for the participants. If the proposed treatment has a good chance of
having a beneficial effect, however, then the possible benefits to both the individuals and society
could outweigh the risk involved in the study. In determining the risk/benefit ratio, researchers
also consider the quality of the research, that is, whether valid and interpretable results will be
produced. More specifically, “If because of the poor quality of the science no good can come of
a research study, how are we to justify the use of participants’ time, attention, and effort and the
money, space, supplies, and other resources that have been expended on the research project?”
(Rosenthal, 1994, p. 128). Thus, an investigator is obliged to seek to do research that meets
the highest standards of scientific excellence. When there is potential risk, a researcher must
make sure there are no alternative, low-risk procedures that could be substituted. The researcher
must also ensure that there is no previous research available that has already successfully
addressed the research question being asked. Without careful prior review of the psychological
literature, a researcher might carry out research that has already been done, thus exposing
individuals to needless risk.

Self learning exercise

1. Describe a research study, mentioning its potential benefits and costs, and evaluate the risk/benefit ratio for that study.

3.3 ETHICAL GUIDELINES – AMERICAN PSYCHOLOGICAL ASSOCIATION

The American Psychological Association (APA) has formulated a set of principles that guide psychologists in their work. The first set of APA’s ethical principles appeared in 1953 and the most recent in 2002, with refinements in 2010. As stated in a recent version, psychologists
should incorporate the rules as an integral part of their professional lives. “The development of
a dynamic set of ethical standards for a psychologist’s work-related conduct requires a personal
commitment to a lifelong effort to act ethically” (American Psychological Association, 2002, p.
1062). When psychologists violate the ethical standards, they face possible loss of certification
to work in their field of expertise. Such offenses are relatively rare and, when they occur, generally
involve the areas of clinical and counselling psychology rather than research. Every year a
small number of psychologists suffer such action for their violations of the ethical guidelines.

3.4 APA ETHICS CODE

The American Psychological Association (APA) has provided leadership in formulating ethical principles and standards. The Ethics Code applies to psychologists in their many roles, including teachers, researchers, and practitioners.

3.4.1 Ethics Code: Five Principles

The APA Ethics Code includes five general ethical principles: beneficence and nonmaleficence, fidelity and responsibility, integrity, justice, and respect for people’s rights and dignity.

Beneficence and Nonmaleficence: The principle of Beneficence and Nonmaleficence refers to the need for research to maximize benefits and minimize any possible harmful effects
of participation. The Ethics Code specifically states: “Psychologists strive to benefit those with
whom they work and take care to do no harm. In their professional actions, psychologists seek
to safeguard the welfare and rights of those with whom they interact professionally and other
affected persons and the welfare of animal subjects of research.”

Fidelity and Responsibility: The principle of Fidelity and Responsibility states: “Psychologists establish relationships of trust with those with whom they work. They are aware
of their professional and scientific responsibilities to society and to the specific communities in
which they work.” For researchers, such trust is primarily applicable to relationships with research participants. Researchers make several implicit contracts with participants during the course of
a study. For example, if participants agree to be present for a study at a specific time, the
researcher should also be there. If researchers promise to send a summary of the results to
participants, they should do so. If participants are to receive course credit for participation, the
researcher must immediately let the instructor know that the person took part in the study.
These may seem to be little details, but they are very important in maintaining trust between
participants and researchers.

Integrity: The principle of Integrity states: “Psychologists seek to promote accuracy, honesty and truthfulness in the science, teaching and practice of psychology. In these activities
psychologists do not steal, cheat or engage in fraud, subterfuge or intentional misrepresentation
of fact.”

Justice: The principle of Justice refers to fairness and equity. This principle states:
“Psychologists recognize that fairness and justice entitle all persons to access to and benefit
from the contributions of psychology and to equal quality in the processes, procedures and
services being conducted by psychologists.”

Respect for People’s Rights and Dignity: The last of the five APA ethical principles
states: “Psychologists respect the dignity and worth of all people, and the rights of individuals to
privacy, confidentiality, and self-determination. Psychologists are aware that special safeguards
may be necessary to protect the rights and welfare of persons or communities whose
vulnerabilities impair autonomous decision making. Psychologists are aware of and respect
cultural, individual, and role differences, including those based on age, gender, gender identity,
race, ethnicity, culture, national origin, religion, sexual orientation, disability, language, and
socioeconomic status, and consider these factors when working with members of such groups.
Psychologists try to eliminate the effect on their work of biases based on those factors, and
they do not knowingly participate in or condone activities of others based upon such prejudices.”

3.5 INSTITUTIONAL REVIEW BOARD

Many years ago, decisions regarding research ethics were left to the individual investigator.
However, after several cases in which the welfare of human and nonhuman participants was
compromised (most of these cases were in medical rather than psychological research), the
U.S. government ordered all research involving human participants to be reviewed by an
Institutional Review Board at the investigator’s institution. An Institutional Review Board (IRB) is a committee that consists of at least five people, including a member of the community who is not
a researcher. The IRB reviews the potential risks associated with research and either approves
or disapproves projects that investigators want to carry out. The official term for this group is the
Institutional Review Board, but people often refer to it as the Human Subjects Committee. Most
research must receive approval from an IRB, but there are exceptions. These exceptions exist
because the experts who work for the government recognize that not all research carries
significant risk. For example, you are allowed to conduct some survey research and simple
observational research in a public area without IRB approval. The reason is that the people you survey or observe face no greater risk from being studied than they face in everyday life. Survey research that probes sensitive issues, however, may require IRB approval.

3.6 INSTITUTIONAL APPROVAL

Much research takes place in organisations such as the police, prisons, schools and
health services. Many, if not all, of these require formal approval before the research may be
carried out in that organisation or by members of that organisation. Sometimes this authority to
permit research is the responsibility of an individual (for example, a head teacher) but, more
likely, it will be the responsibility of a committee which considers ethics. In addition, in universities
the researcher is usually required to obtain permission to carry out their research from their
school, department or an ethics committee such as an Institutional Review Board (IRB). It is
incumbent on the researcher to obtain approval for their planned research. Furthermore, the
proposal they put forward should be transparent in the sense that the information contained in
the documentation and any other communication should accurately reflect the nature of the
research. The organisation should be in a position to understand precisely what the researcher
intends on the basis of the documentation provided by the researcher and any other
communications. So, any form of deceit or sharp practice such as lies, lying by omission and
partial truths is unacceptable. Finally, the research should be carried out strictly in accordance
with the protocol for the research as laid down by the researcher in the documentation.

3.7 ETHICAL STANDARDS

PROTECTING RESEARCH SUBJECTS

The preamble to the APA Ethics Code states: “Psychologists are committed to increasing
scientific and professional knowledge of behaviour and people’s understanding of themselves and others and to the use of such knowledge to improve the condition of individuals, organizations
and society.” By internalizing and adhering to ethical principles we support and nurture a healthy
science. With this in mind, we will consider the ways in which research subjects— humans and
animals—are protected in behavioural research.

We recognize that one of our goals is to promote human well-being. In addition, one of
the critical aspects of such responsibility is that the public will lose faith in the work of psychologists
and in the value of psychology if we don’t act with the highest morals. As the ethical guidelines
pertain to research, psychologists have certain responsibilities to provide research participants
with informed consent, to minimize the use of deception in research, to report research results
accurately, and to correct any errors in reporting. There are a few areas that are of special
relevance to researchers.

3.7.1 INFORMED CONSENT

One of the primary ways of ensuring that participants’ rights are protected is to obtain
their informed consent prior to participating in a study. As its name implies, informed consent
involves informing research participants of the nature of the study and obtaining their explicit
agreement to participate. Obtaining informed consent ensures that researchers do not violate
people’s privacy and that prospective research participants are given enough information about
the nature of a study to make a reasoned decision about whether they want to participate.

Using language that is reasonably understandable to participants, psychologists inform participants of the nature of the research; they inform participants that they are free to participate
or to decline to participate or to withdraw from the research; they explain the foreseeable
consequences of declining or withdrawing; they inform participants of significant factors that
may be expected to influence their willingness to participate (such as risks, discomforts, adverse
effects, or limitations on confidentiality) ... ; and they explain other aspects about which the
prospective participants inquire. (Ethical Principles, 1992)

This principle does not require that the investigator reveal everything about the study.
Rather, researchers are required to inform participants about features of the research that
might influence their willingness to participate in it. Thus, researchers may withhold information
about the hypotheses of the study, but they cannot fail to tell participants that they will experience
pain or discomfort. Whenever researchers choose to be less than fully candid with a participant,
they are obligated to later inform the participant of all relevant details. In some cases, informed consent may be given orally, but only if a witness is present to attest that informed consent
occurred.

Certain classes of people are unable to give valid consent. Children, for example, are
neither cognitively nor legally able to make such informed decisions. Similarly, individuals with intellectual disabilities or those out of touch with reality (for example, persons experiencing psychosis) cannot be expected to give informed consent. When one’s research calls for participants who cannot provide valid
consent, consent must be obtained from the parent or legal guardian of the participant.

3.7.2 DECEPTION

The next important ethical standard concerns deception. The fundamental ethical position is that deception should not be used in
psychological research procedures. There are no circumstances in which deception is acceptable
if there is a reasonable expectation that physical pain or emotional distress will be caused.
However, it is recognised that there are circumstances in which the use of deception may be
justified. If the proposed research has ‘scientific, educational or applied value’ (or the prospect
of it) then deception may be considered. The next step is to establish that no effective alternative
approach is possible which does not use deception. These are not matters on which individual
psychologists should regard themselves as their own personal arbiters. If the use of deception
is the only feasible option, it is incumbent on the psychologist to explain the deception as early
as possible. This is preferably immediately after the data have been collected from each individual,
but it may be delayed until all of the data from all of the participants have been collected.

Perhaps no research practice has evoked as much controversy among behavioural researchers as deception. Behavioural scientists use deception for a number of reasons. The
most common one is to prevent participants from learning the true purpose of a study so that
their behaviour will not be artificially affected.

• Deception in psychological research occurs when researchers withhold information or intentionally misinform participants about the research. By its nature, deception violates the ethical principle of informed consent.

• Deception is considered a necessary research strategy in some psychological research.

• Deceiving individuals in order to get them to participate in the research is always unethical.

• Researchers must carefully weigh the costs of deception against the potential benefits of the research when considering the use of deception.

Deception can occur either through omission (withholding information) or commission (intentionally misinforming participants about an aspect of the research). Some people argue
that research participants should never be deceived because ethical practice requires that the
relationship between experimenter and participant be open and honest (e.g., Baumrind, 1995).
A goal of research is to observe individuals’ normal behaviour. A basic assumption underlying
the use of deception is that sometimes it’s necessary to conceal the true nature of an experiment
so that participants will behave as they normally would, or act according to the instructions
provided by the experimenter.

Kelman (1972) suggests that, before using deception, a researcher must give very serious
consideration to (1) the importance of the study to our scientific knowledge, (2) the availability
of alternative, deception-free methods, and (3) the “noxiousness” of the deception. This last
consideration refers to the degree of deception involved and to the possibility of injury to the
participants. In Kelman’s view: “Only if a study is very important and no alternative methods are
available can anything more than the mildest form of deception be justified” (p. 997).

3.7.3 DEBRIEFING

Whenever deception is used, participants must be informed about the subterfuge “as
early as it is feasible” (Ethical Principles, 1992, Principle 6.15c). Usually participants are debriefed
immediately after they participate, but occasionally researchers wait until the entire study is
over and all of the data have been collected. During debriefing, the researcher and participant discuss matters such as the nature of the research, its results, and its conclusions. The researcher should try to correct any misconceptions the participant may have developed about any aspect of the research.
Of course, there may be good scientific or humane reasons for withholding some information –
or delaying the main debriefing until a suitable time. For example, it may be that the research
involves two or more stages separated by a considerable interval of time. Debriefing participants
after the first stage may considerably contaminate the results at the second stage. Debriefing
cannot be guaranteed to deal effectively with the harm done to participants by deception.
Whenever a researcher recognises that a particular participant appears to have been
(inadvertently) harmed in some way by the procedures then reasonable efforts should be made
to deal with this harm. It should be remembered that researchers are not normally qualified to
offer counselling, and other forms of help and referral to relevant professionals may be the only
appropriate course of action. There is a body of research on the effects of debriefing (for example,
Epley & Huff, 1998; Smith & Richardson, 1983).

A good debriefing accomplishes four goals. First, the debriefing clarifies the nature of the
study for participants. Although the researcher may have withheld certain information at the
beginning of the study, the participant should be more fully informed after it is over. This does
not require that the researcher give a lecture regarding the area of research, only that the
participant leave the study with a sense of what was being studied and how his or her participation
contributed to knowledge in an area. Occasionally, participants are angered or embarrassed
when they find they were fooled by the researcher. Of course, the more smug a researcher is
about the deception, the more likely the participant is to react negatively. Thus, researchers
should be sure to explain the reasons for any deception that occurred, express their apologies
for misleading the participant, and allow the participant to express his or her feelings about
being deceived. The second goal of debriefing is to remove any stress or other negative
consequences that the study may have induced. For example, if participants were provided
with false feedback about their performance on a test, the deception should be explained. In
cases in which participants have been led to perform embarrassing or socially undesirable
actions, researchers must be sure that participants leave with no bad feelings about what they
have done. A third goal of the debriefing is for the researcher to obtain participants’ reactions to
the study itself. Often, if carefully probed, participants will reveal that they didn’t understand part
of the instructions, were suspicious about aspects of the procedure, were disturbed by the
study, or had heard about the study from other people. Such revelations may require modifications
in the procedure. The fourth goal of a debriefing is more intangible. Participants should leave
the study feeling good about their participation. Researchers should convey their genuine
appreciation for participants’ time and cooperation, and give participants the sense that their
participation was important.

3.7.4 CONFIDENTIALITY

The information obtained about research participants in the course of a study is confidential.
Confidentiality means that the data that participants provide may be used only for purposes of
the research and may not be divulged to others. When others have access to participants’ data,
their privacy is invaded and confidentiality is violated. Admittedly, in most behavioural research,
participants would experience no adverse consequences if confidentiality were broken and
others obtained access to their data. In some cases, however, the information collected during
a study may be quite sensitive, and disclosure would undoubtedly have negative consequences
for the participant. For example, issues of confidentiality have been paramount among health
psychologists who study persons who have tested positive for HIV or AIDS (Rosnow, Rotheram-
Borus, Ceci, Blanck, & Koocher, 1993). The easiest way to maintain confidentiality is to ensure
that participants’ responses are anonymous. Confidentiality will not be a problem if the information
that is collected cannot be used to identify the participant. The essence of anonymity is that
information provided by participants should in no way reveal their identity.

3.7.5 INVASION OF PRIVACY

The right to privacy is a person’s right to decide “when, where, to whom, and to what
extent his or her attitudes, beliefs, and behaviour will be revealed” to other people (Singleton,
Straits, Straits, & McAllister, 1988, p. 454). The APA ethical guidelines do not offer explicit
guidelines regarding invasion of privacy, noting only that “the ethical investigator will assume
responsibility for undertaking a study involving covert investigation in private situations only
after very careful consideration and consultation” (American Psychological Association, 1992,
p. 39). Thus, the circumstances under which researchers may collect data without participants’
knowledge are left to the investigator (and the investigator’s IRB) to judge. Most researchers
believe that research involving the observation of people in public places (shopping or eating,
for example) does not constitute invasion of privacy. However, if people are to be observed
under circumstances in which they reasonably expect privacy, invasion of privacy may be an
issue.

3.7.6 COERCION TO PARTICIPATE

All ethical guidelines insist that potential participants must not be coerced into participating
in research. Coercion to participate occurs when participants agree to participate because of
real or implied pressure from some individual who has authority or influence over them. The
most common example involves cases in which professors require that their students participate
in research. Other examples include employees in business and industry who are asked to
participate in research by their employers, military personnel who are required to serve as
participants, prisoners who are asked to “volunteer” for research, and clients who are asked by
their therapists or physicians to provide data. What all of these classes of participants have in
common is that they may believe, correctly or incorrectly, that refusing to participate will have
negative consequences for them: a lower course grade, putting one’s job at risk, being reprimanded by one’s superiors, or simply displeasing an important person. Researchers must


respect an individual’s freedom to decline to participate in research or to discontinue participation
at any time. Furthermore, to assure that participants are not indirectly coerced by offering
exceptionally high incentives, the guidelines state that researchers cannot “offer excessive or
inappropriate financial or other inducements to obtain research participants, particularly when it
might tend to coerce participation” (Ethical Principles, 1992, Principle 6.14). Furthermore, “when
research participation is a course requirement or opportunity for extra credit, the prospective
participant is given the choice of equitable alternative activities” (Ethical Principles, 1992, Principle 6.11d).

3.7.7 INDUCEMENTS TO PARTICIPATE

Financial and other forms of encouragement to participate in research are subject to the following requirements:

• Psychologists should not offer unreasonably large monetary or other inducements (for example, gifts) to potential participants in research. In some circumstances
such rewards can become coercive. One simply has to take the medical analogy of
offering people large amounts of money to donate organs in order to understand
the undesirability of this. While acceptable levels of inducement are not stipulated in the ethics code, one reasonable approach might be to limit payments to out-of-pocket expenses (such as travel) and a modest hourly rate for time. Of course,
even this provision is probably out of the question for student researchers.

• Sometimes professional services are offered as a way of encouraging participation in research. These might be, for example, counselling or psychological advice of
some sort. In these circumstances, it is essential to clarify the precise nature of the
services, including possible risks, further obligations and the limitations to the
provision of such services. A further requirement, not mentioned in the APA ethics,
might be that the researcher should be competent to deliver these services. Once
again, it is difficult to imagine the circumstances in which students could be offering
such inducements.

3.7.8 PHYSICAL AND MENTAL STRESS

Most behavioural research is harmless, and the vast majority of participants are at no
physical or psychological risk. However, because many important topics in behavioural science
involve how people or other animals respond to unpleasant physical or psychological events,
researchers sometimes design studies to investigate the effects of unpleasant events such as
stress, failure, fear, and pain. Researchers find it difficult to study such topics if they are prevented
from exposing their participants to at least small amounts of physical or mental stress. But how
much discomfort may a researcher inflict on participants? At the extremes, most people tend to
agree regarding the amount of discomfort that is permissible. For example, most people agree
that an experiment that leads participants to think they are dying is highly unethical. One study
did just that by injecting participants, without their knowledge, with a drug that caused them to
stop breathing temporarily (Campbell, Sanderson, & Laverty, 1964). On the other hand, few
people object to studies that involve only minimal risk. Minimal risk is “risk that is no greater in
probability and severity than that ordinarily encountered in daily life or during the performance
of routine physical or psychological examinations or tests” (Official IRB Guidebook, 1986).
Between these extremes, however, considerable controversy arises regarding the amount of
physical and mental distress to be permitted in research. In large part, the final decision must
be left to individual investigators and the IRB at their institutions. The decision is often based on
a cost-benefit analysis of the research. Research procedures that cause stress or pain may be
allowed only if the potential benefits of the research are extensive and only if the participant
agrees to participate after being fully informed of the possible risks.

Quite simply, ethics are the moral principles by which we conduct ourselves. Psychological
ethics, then, are the moral principles by which psychologists conduct themselves. It is wrong to
regard ethics as being merely the rules or regulations which govern conduct. The activities of
psychologists are far too varied and complex for that. Psychological work inevitably throws up
situations which are genuinely dilemmas which no amount of rules or regulations could effectively
police. Ethical dilemmas involve conflicts between different principles of moral conduct.
Consequently, psychologists may differ in terms of their position on a particular matter. Ethical
behaviour is not the responsibility of each individual psychologist alone but a responsibility of
the entire psychological community.

3.7.9 SCIENTIFIC MISCONDUCT

In addition to principles governing the treatment of human and animal participants, behavioural researchers are bound by general ethical principles involving the conduct of scientific
research. Such principles are not specific to behavioural research but apply to all scientists
regardless of their discipline. Most scientific organizations have set ethical standards for their
members to guard against scientific misconduct. The National Academy of Sciences of the United States identifies three major categories of scientific misconduct. The first category involves the
most serious and blatant forms of scientific dishonesty, such as fabrication, falsification, and
plagiarism. The APA Ethical Principles likewise addresses these issues, stating that researchers
must not fabricate data or report false results. Furthermore, if they discover significant errors in
their findings or analyses, researchers are obligated to take steps to correct such errors. Likewise,
researchers do not plagiarize others’ work, presenting “substantial portions or elements of
another’s work or data as their own ... “ (Ethical Principles, 1992, Standard 6.22). A second
category of ethical abuses involves questionable research practices that, although not constituting
scientific misconduct per se, are problematic. For example, researchers should take credit for
work only in proportion to their true contribution to it. This issue sometimes arises when
researchers must decide whom to include as authors on research articles or papers, and in
what order to list them (authors are usually listed in descending order of their scientific or
professional contributions to the project). Problems of “ownership” can occur in both directions:
In some cases, researchers have failed to properly acknowledge the contributions of other
people whereas in other cases researchers have awarded authorship to people who didn’t
contribute substantially to the project (such as a boss, or a colleague who lent them a piece of
equipment). Other ethically questionable research practices include failing to report data
inconsistent with one’s own views and failing to make one’s data available to other competent
professionals who wish to verify the researcher’s conclusions by reanalyzing the data. A third
category of ethical problems in research involves unethical behaviour that is not unique to
scientific investigation, such as sexual harassment (of research assistants or research
participants), abuse of power, discrimination, or failure to follow government regulations. Although
flaws creep into every researcher’s studies, they have an ethical obligation to design the best
studies possible under whatever circumstances they are operating.

3.7.10 BETRAYAL

The term ‘betrayal’ is usually applied to those occasions where data disclosed in confidence
are revealed publicly in such a way as to cause embarrassment, anxiety or perhaps suffering to
the subject or participant disclosing the information. It is a breach of trust, in contrast to
confidentiality, and is often a consequence of selfish motives of either a personal or professional
nature. One of the research methods that is perhaps most vulnerable to betrayal is action
research. The more people there are who can learn about the information, the more concern
there must be about privacy (Diener & Crandall, 1978). As is the case with most rights, privacy
can be voluntarily relinquished. Research participants may choose to give up their right to
privacy either by allowing a researcher access to sensitive topics or settings or by agreeing that
the research report may identify them by name. The latter case at least would be an occasion
where informed consent would need to be sought. Generally speaking, if researchers intend to
probe into the private aspects or affairs of individuals, their intentions should be made clear and
explicit and informed consent should be sought from those who are to be observed or scrutinized
in private contexts. Other methods to protect participants are anonymity and confidentiality, which were discussed earlier in this lesson. Privacy is more than simple confidentiality. The right to privacy
means that a person has the right not to take part in the research, not to answer questions, not
to be interviewed, not to have their home intruded into, not to answer telephone calls or emails,
and to engage in private behaviour in their own private place without fear of being observed. It
is freedom from as well as freedom for. This is frequently an issue with intrusive journalism.
Hence researchers may have an obligation to inform participants of their rights to refuse to take
part in any or all of the research, to obtain permission to conduct the research to limit the time
needed for participation and to limit the observation to public behaviour.

3.7.11 DATA FALSIFICATION

If there is a mortal sin in science, it is the failure to be scrupulously honest in managing the data, the foundation stones on which the entire enterprise is built. Thus, the integrity of data
is an issue of pivotal importance. This type of fraud can take several forms. First and most
extreme, a scientist fails to collect any data at all and simply manufactures it. Second, some of
the collected data are altered or omitted to make the overall results look better. Third, some
data are collected, but "missing" data are guessed at and created in a way that produces a data
set congenial to the researcher’s expectations. Fourth, an entire study is suppressed because
its results fail to come out as expected. In each of these cases, the deception is deliberate and
the scientist presumably "secures an unfair or unlawful gain" (e.g., publication, tenure). The
traditional view is that fraud is rare and easily detected because faked results won’t be replicated
(Hilgartner, 1990). That is, if a scientist produces a result with fraudulent data, the results won’t
represent some empirical truth. Other scientists, intrigued or surprised by this new finding, will
try to reproduce it in their own labs and will fail to do so; the fraudulent findings will then be
uncovered and eventually discarded.

3.8 ETHICS AND PLAGIARISM

Plagiarism is considered extremely unethical. For example, using somebody else's words without attributing them to that person is unethical. Further, even if you take the ideas
from somebody else’s writing or speaking and translate those ideas into your own words, you
must attribute those ideas to the person who originated them. The issue is complicated, however.
If we cite a well-known fact (e.g., humans are born without the ability to use language but learn
to speak the language to which they are exposed), we don't need to provide a citation, because we can assume that everybody knows the statement is true. But if we are citing information that
is not widely known (e.g., Ivan Pavlov coined the term classical conditioning), we should cite a
trustworthy source to document our statement. Professionals urge caution and recommend
citing a source if it is likely that readers will not be familiar with the topic about which one is
writing (Avoiding plagiarism, 2009). One further issue involves self-plagiarism, which is the use
of our own work multiple times. So, if we have published a paper, as a general rule, we could
not ethically use the same material in a second publication. The issue of self-plagiarism is
relevant to students who do not publish their work because some sources (e.g., Avoiding
plagiarism, 2009) assert that students cannot submit the same document to another person for
some other purpose.

Self learning exercise

1. Write a research proposal seeking permission from an Institutional Review Board, meeting all the ethical standards.

3.9 SUMMARY

Researchers who do not consider the ethical implications of their research risk harming
individuals, communities, and behavioural science. Ethical issues must be considered whenever
a study is designed. Usually the ethical issues are minor ones, but sometimes the fundamental
conflict between the scientific search for knowledge and the welfare of research participants
creates an ethical dilemma. Important issues that must be considered when human participants are used in research were discussed, including informed consent, invasion of privacy, coercion to participate, deception, confidentiality, and so on. Although APA and federal guidelines provide
general guidance regarding these issues, individual researchers must weigh the potential benefits
of their research against its potential costs. Scientific misconduct involves behaviours that
compromise the integrity of the scientific enterprise, including dishonesty (fabrication, falsification,
and plagiarism), questionable research practices, and otherwise unethical behaviour (such as
sexual harassment and misuse of power). Understanding the importance of these ethical principles, therefore, we must adopt them while conducting research.

3.10 KEY WORDS

APA: American Psychological Association

Risk/benefit ratio: A subjective evaluation of the risks and benefits of a research project, used to determine whether the research should be conducted.

Informed consent: Informed consent involves informing research participants of the nature
of the study and obtaining their explicit agreement to participate.

Deception: Deception is withholding information from, or intentionally misinforming, participants about the research.

Right to privacy: The right to privacy is a person’s right to decide when, where, to whom,
and to what extent his or her attitudes, beliefs, and behaviour will be revealed to others.

Data falsification: Failure to be scrupulously honest in managing the data.

Scientific misconduct: The most serious and blatant forms of scientific dishonesty, such as fabrication, falsification, and plagiarism.

3.11 CHECK YOUR PROGRESS


1. What are the responsibilities of scientists?

2. Expand APA and IRB

3. What are the five potential benefits in behavioural research?

4. What is risk/benefit ratio?

5. What are the five principles of APA ethics code?

6. Spell out the various ethical standards to be followed.

7. Weighing the pros and cons of a study is called a _____.

8. __________ and ___________ are essential for ensuring high-quality science.



3.12 ANSWERS TO CHECK YOUR PROGRESS


1. Scientists should: carry out research in a competent manner; report results accurately; manage research resources honestly; fairly acknowledge, in scientific communications, the individuals who have contributed their ideas or their time and effort; consider the consequences to society of any research endeavour; and speak out publicly on societal concerns related to a scientist's knowledge and expertise.

2. APA: American Psychological Association, IRB: Institutional Review Board

3. Basic knowledge, improvement of research or assessment techniques, practical outcomes, benefits for researchers, and benefits for research participants

4. A subjective evaluation of the risks and benefits of a research project is used to determine whether the research should be conducted.

5. Beneficence and nonmaleficence, fidelity and responsibility, integrity, justice, and respect for people's rights and dignity.

6. Informed consent, deception, debriefing, confidentiality, invasion of privacy, coercion to participate, inducements to participate, physical and mental stress, scientific misconduct, betrayal, and data falsification.

7. Cost-benefit analysis

8. Professional competence and integrity

3.13 MODEL QUESTIONS


1. Write in detail the ethical principles of the APA ethics code.

2. Explain the ethical standards to be followed in behavioural research.

3. Discuss about the cost benefit analysis in behavioural research.

REFERENCES
American Psychological Association. (2010). Publication manual of the American
Psychological Association. Washington, DC: American Psychological Association.

Beins, B. C., & McCarthy, M. A. (2012). Research Methods and Statistics. New Delhi,
India: Pearson Education Inc.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London,
England: Routledge.
Coolican, H. (1994). Research Methods and Statistics in Psychology. London: Hodder &
Stoughton.
Cozby, P. C., & Bates, S. C. (2015). Methods in Behavioural Research (12th ed). New
York, NY: McGraw Hill Education.
Crano, W. D., & Brewer, M. B. (2002). Principles and Methods of Social Research. Mahwah,
NJ: Lawrence Erlbaum Associates Publishers.
deMarrais, K., & Lapan, S. D. (Eds.). Foundations for Research: Methods of Inquiry in Education and the Social Sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
Goodwin, J. C. (2010). Research in Psychology: Methods and Design. (6th ed). Hoboken,
NJ: John Wiley & Sons.
Gravetter, F. J., & Forzano, L-A. B. (2012). Research Methods for the Behavioural Sciences
(4th ed). Belmont, CA: Wadsworth Cengage Learning.
Howitt, D., & Cramer, D. (2011). Introduction to Research Methods in Psychology. Harlow,
Essex: Pearson Education Inc.
Jackson, S.L. (2009). Research Methods and Statistics: A critical thinking
approach.Belmont, CA: Wadsworth Cengage Learning.
Leary, M. R. (2001). Introduction to Behavioural Research (3rd ed). Needham Heights,
MA: Allyn and Bacon.
Lovely Professional University. (2012). Research Methodology. Retrieved from http://ebooks.lpude.in/management/mba/term_2/DCOM408_DMGT404_RESEARCH_METHODOLOGY.pdf
Retrieved from https://research-methodology.net/research-methodology/research-design/exploratory-research
Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches.
New York, NY: Pearson Education Ltd.
Rugg, G., & Petre, M. (2007). A Gentle Guide to Research Methods. NY: McGraw Hill
Education.
Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2012). Research Methods
in Psychology (9th ed). New York, NY: McGraw Hill Education.
Walliman, N. (2011). Research Methods: The Basics. Oxon: Routledge

LESSON - 4
EXPLORATORY RESEARCH
INTRODUCTION

In Research Methodology I we discussed framing hypotheses in order to address the research problem raised by the researcher. A hypothesis is a tentative assumption
which is framed based on earlier studies as a starting point for an investigation aiming to either
prove or disprove it. It serves as a direction for the researcher to move further. However, when
the research question raised is new or has not been studied clearly, exploratory research is used merely to explore the question. This type of research may not produce a conclusive result; however, it helps one gain clarity on whether to proceed further with that research question. This type of research provides insights
to the researcher. There are other types of research (survey research, experimental research) and research designs (simple randomized designs, factorial designs) which we have already discussed in Research Methodology I, along with their meaning, purpose and principles. Let us look into
the detailed understanding of exploratory research in this lesson.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

• Understand the meaning of exploratory research

• Explain where and when exploratory research can be done

• Understand the data sources and how to carry out exploratory research

PLAN OF THE STUDY


4.1 Exploratory Research

4.2 Differences between Exploratory and Conclusive Research

4.3 Purpose of Exploratory Research

4.4 Importance of Exploratory Research Design

4.5 Characteristics of Exploratory Research

4.6 Hypothesis Development in Exploratory Research

4.6.1 Formulation of Hypothesis



4.7 Qualitative Research and Exploratory Research

4.8 Causality and Exploratory Research

4.9 Advantages of Exploratory Research

4.10 Disadvantages of Exploratory Research

4.11 Summary

4.12 Key Words

4.13 Check Your Progress

4.14 Answers to Check your Progress

4.15 Model Questions

4.1 EXPLORATORY RESEARCH

Katz (1953) has divided field studies into two broad types – exploratory and hypothesis testing. The exploratory type seeks what is, rather than predicting relations to be found. Three main purposes of exploratory research are: (1) to discover significant variables in the field situation, (2) to discover relations among variables, and (3) to lay the groundwork for later, more systematic and rigorous testing of hypotheses. This type of research helps the researcher to carry out preliminary work before testing hypotheses. It also helps to discover or uncover unexplored problems, which is crucial in developing countries. For example, the latest advancements in gadgets and their effects on the physical and psychological health of individuals.

Exploratory research helps researchers to gain insights and proceed towards further research work. Most exploratory research is carried out using qualitative methods. This type of research is done before formulating hypotheses and therefore uses either semi-structured or unstructured methods to collect data, allowing the researcher to be more or less flexible in the way it is carried out. The researcher needs to be open-minded towards the findings, as he or she may meet with findings either in line with or different from what was expected.

The broad or ambiguous research problem is reduced to a small and specific problem or sub-problems which can be used to formulate the hypothesis. For example, if one needs to study the factors of online shopping, the researcher first identifies the factors based on interviews or discussions with people who are online shoppers, and then uses those findings to understand the larger sample.

Exploratory research, as the name implies, intends merely to explore the research question
and does not intend to offer final and conclusive solutions to existing problems. This type of
research is usually conducted to study a problem that has not been clearly defined yet.

4.2 DIFFERENCES BETWEEN EXPLORATORY AND CONCLUSIVE RESEARCH

The difference between exploratory and conclusive research is drawn by Sandhursen (2000) in a way that exploratory studies result in a range of causes and alternative options for a solution to a specific problem, whereas conclusive studies identify the final information that is the only solution to an existing research problem.

In other words, exploratory research design simply explores the research questions, leaving room for further research, whereas conclusive research design is aimed at providing final findings for the research.

Moreover, it has been stated that “an exploratory study may not have a rigorous
methodology as used in conclusive studies, and sample sizes may be smaller. But it helps to do
the exploratory study as methodically as possible, if it is going to be used for major decisions
about the way we are going to conduct our next study” (Nargundkar, 2003, p.41).

4.3 PURPOSE OF EXPLORATORY RESEARCH

The primary objective of exploratory research design is that of formulating a problem for more precise investigation or of developing a working hypothesis from an operational perspective. The main focus in such research is on the discovery of ideas and insights.

Exploratory research is used:

1. To gain an insight into the problem
2. To generate new ideas
3. To develop hypotheses
4. To clarify concepts
5. To formulate precise problems
6. To pre-test a draft questionnaire

4.4 IMPORTANCE OF EXPLORATORY RESEARCH DESIGN

Importance of Exploratory Research are in the following areas:

1. Lack of resources: When the researcher does not have the resources and capability to test the hypothesis, exploratory design allows him or her to discover facts that are appropriate to, or in line with, the hypothesis.

2. Study of critical issues: This design helps one to focus on critical issues. When the problems are identified, the researcher starts working on them in relation to their significance for society.

3. Knowing the unknown: One can start any research with either a theory or a hypothesis; these lay a proper foundation to move in the right direction. In order to formulate a hypothesis, preliminary work is carried out using exploratory research.

4. Ambiguity to clarity: This process on one hand focuses the attention of the researcher on the problem and, on the other, it assists him or her in gathering facts on scientific lines to ensure that the research may be completed correctly.

4.5 CHARACTERISTICS OF EXPLORATORY RESEARCH


1. Exploratory research is flexible and very versatile.

2. For data collection, semi-structured or unstructured questionnaires or schedules can be used.

3. This type of research allows very wide exploration of views.

4. Research is qualitative in nature and it is also open ended.

4.6 HYPOTHESIS DEVELOPMENT IN EXPLORATORY RESEARCH


1. Sometimes, as no previous data is available, we may not develop the hypothesis at
all.

2. In other cases, if some information is available and it may be possible to provide tentative answers to the problem, then a hypothesis is formulated.

4.6.1 Formulation of Hypothesis

The quickest and the cheapest way to formulate a hypothesis in exploratory research is by using any of the following five methods:

1. Literature Search: This refers to reviewing the existing literature to develop a new hypothesis. The literature referred to includes professional journals, trade journals, market research publications, statistical publications, etc.

2. Experience Survey: In experience surveys, it is desirable to talk to persons who are well informed in the area being investigated. Here, no questionnaire is required. The approach
adopted in an experience survey should be highly unstructured, so that the respondent can
give divergent views.

3. Focus Group: Another widely used technique in exploratory research is the focus
group. In a focus group, a small number of individuals are brought together to study and talk
about some topic of interest. The discussion is co-ordinated by a moderator. The group usually
is of 8-12 persons. While selecting these persons, care has to be taken to see that they should
have a common background and have similar experiences. This is required because there
should not be a conflict among the group members on the common issues that are being
discussed. Normally, a number of such groups are constituted and the final conclusions of the various groups are taken for formulating the hypothesis. Therefore a key factor in a focus group is to have similar groups. The guiding criterion is to see whether the later groups are generating additional ideas or repeating the same ones with respect to the subject under study. When this shows diminishing returns from the groups, the discussions are stopped (a small illustrative sketch of this stopping rule follows this list of methods). The typical focus group lasts for 1½ to 2 hours. The moderator of the focus group has a key role; his or her job is to guide the group to proceed in the right direction.

The following should be the characteristics of a moderator/facilitator:

(a) Listening: He must have a good listening ability. The moderator must not miss the
participant’s comment due to lack of attention.

(b) Permissive: The moderator must be permissive, yet alert to the signs that the group is
disintegrating.

(c) Memory: He must have a good memory. The moderator must be able to remember the
comments of the participants.

(d) Encouragement: The moderator must encourage unresponsive members to participate.

(e) Learning: He should be a quick learner.


(f) Sensitivity: The moderator must be sensitive enough to guide the group discussion.

(g) Intelligence: He must be a person whose intelligence is above the average.

(h) Kind/firm: He must combine detachment with empathy.

4. Case Studies: Analysing a selected case sometimes gives an insight into the problem
which is being researched. Case histories of companies which have undergone a similar situation
may be available. These case studies are well suited to carry out exploratory research. However,
the results of investigation of case histories are always considered suggestive, rather than
conclusive.

5. Secondary data: A variety of secondary information sources is available to the researcher gathering data on an industry, potential product applications and the marketplace. Secondary data is also used to gain initial insight into the research problem. Secondary data analysis saves time that would otherwise be spent collecting data and, particularly in the case of quantitative data, provides larger and higher-quality databases than would be feasible for any individual researcher to collect on their own. In addition to that, analysts of social and
economic change consider secondary data essential, since it is impossible to conduct a new
survey that can adequately capture past change and/or developments.
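Returning to the 'diminishing returns' stopping rule mentioned under the focus group method above: the following is a minimal, purely hypothetical Python sketch, not part of the original source. Each group's discussion is reduced to a set of idea labels, and no further groups are convened once a session contributes fewer previously unseen ideas than a chosen threshold; the topic labels, the threshold and the function name are invented for illustration only.

# Hypothetical illustration of the diminishing-returns stopping rule for focus groups.
def enough_groups(sessions, min_new_ideas=2):
    """Return the group number after which idea generation tapered off."""
    seen = set()
    for i, ideas in enumerate(sessions, start=1):
        new = set(ideas) - seen          # ideas not raised by any earlier group
        seen |= new
        if i > 1 and len(new) < min_new_ideas:
            return i                     # diminishing returns: stop after this group
    return len(sessions)                 # every group still added something new

# Example with four invented groups discussing online shopping
groups = [
    {"price", "delivery speed", "trust", "return policy"},
    {"price", "trust", "app usability", "gift wrapping"},
    {"delivery speed", "app usability"},   # nothing new, so discussions stop here
    {"price"},
]
print(enough_groups(groups))  # prints 3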

4.7 QUALITATIVE RESEARCH AND EXPLORATORY RESEARCH

Qualitative research seeks out the ‘why’, not the ‘how’ of its topic through the analysis of
unstructured information – things like interview transcripts, e-mails, notes, feedback forms,
photos and videos. It doesn’t just rely on statistics or numbers, which are the domain of
quantitative researchers. Qualitative research is used to gain insight into people’s attitudes,
behaviours, value systems, concerns, motivations, aspirations, culture or life-styles. Focus
groups, in-depth interviews, content analysis are among the many formal approaches that are
used, but qualitative research also involves the analysis of any unstructured material, including
customer feedback forms, reports or media clips. The strength of qualitative research is its
ability to provide complex textual descriptions of how people experience a given research issue.
Qualitative methods are also effective in identifying intangible factors, such as social norms,
socioeconomic status, gender roles, ethnicity, and religion, whose role in the research issue
may not be readily apparent. When used along with quantitative methods, qualitative research
can help us to interpret and better understand the complex reality of a given situation and the
implications of quantitative data.

The exploratory interview resembles in general form a participant, non-structured, free-response observational investigation. In research of this type, neither the questions nor the allowable responses are constrained. As such, the less structured the interview, the greater the demands on the interviewer's competence and theoretical grounding. Paradoxically, then, the
exploratory interview, while making only minimal demands in terms of data quality, calls for the
most highly qualified, technically competent researchers if it is to generate optimally useful
data. Thus, there is a necessity for professional personnel in the initial exploratory phase of the
investigation.

Self learning exercise:

1. Exploratory research is done using ___________ research.

2. In experience surveys, it is desirable to talk to persons who are well informed in the
area being ............................

3. Most of the companies conducting the ........................... groups first screen the
candidates to determine who will compose the particular group.

4. The moderator must encourage ........................ members to participate.

4.8 CAUSALITY AND EXPLORATORY RESEARCH

Causality is a way of looking at the world, giving it a specific order and putting it into
mental sequence. Causes, in short, beyond the trivial ones, are imprinted on the world by the
observer or researcher. Different researchers ask different questions and employ different
research methods – and thus produce different kinds of causality. In most exploratory research,
causal mechanisms are assumed to work sequentially, that is, in temporal order, so that one
event causes, or produces, the next. When hypothesizing a causal mechanism first, however,
we can assess to what degree reality conforms to our expectations and thus reach reliable
statements about the importance, relevance, and magnitude of our causal mechanism. Once
we detect a causal mechanism behind the manifest behaviour, we can then stipulate and to
some degree assess how relevant this causal mechanism is. The statements we can formulate
based on this approach will be of the kind: if the causal mechanism x is present, it is very likely
that y will follow, other things remaining equal.

For example, once we understand why a group of children are behaving aggressively towards their siblings, we can proceed, based on Bandura's social learning theory, to examine how important this causal mechanism is elsewhere and in other situations. Exploratory
and inductive research thus allows for limited generalizations, based not on the outcome but on the presence, or partial presence, of shared causal mechanisms. If we find out that these children were behaving aggressively because they are single-born children, then when proceeding this way we generalize from the causal mechanism, not the outcome, and by doing so we
add to our understanding of why these children are behaving aggressively. Such research also
enables us to then think about possible solutions to control aggressive behaviour – because we
understand what causes them. The prime way to assess the importance of causal mechanisms
is through conducting case studies (George & Bennett, 2005). To accept the provisionality of
one’s conclusions and explanations about reality implies avoiding exclusive claims about reality.
It means recognizing, explicitly, that all explanations are partial, incomplete, and open to revision,
or that all theories are under-determining, leaving much room for alternative and competing
explanations even of the same segment of reality. If our theories and assumptions about the
world cannot close the gap that separates them from reality and if theories and hypotheses
have more to do with our own mental, social, and cultural conditions than with the objective
reality we experience and observe, then our theories and ideas only allow us to explain and
make sense of the world for ourselves. Empirical research, then, is an endeavour where a
researcher seeks to explain a well-defined segment of reality to him or herself.

Instead of advancing arguments that make exclusive claims about truth, exploratory
research offers more or less plausible and therefore fruitful ways to examine and explain a
limited segment of reality. Qualitative, inductive and hence exploratory research sets out to
explain limited segments of reality by suggesting a causal order, and sequence, of events. It
does not claim that this order is inherent in reality, but instead remains skeptical about the true
nature of causality in the world and only suggests a useful and helpful way to explain it by
putting it into causal order. Exploratory research thus assumes causal necessity in the world,
but only for the purpose of suggesting a helpful and useful way of explaining it. As this formulation
already suggests, usefulness is dependent on the aim of the research, as the first question
arising from this formulation is: useful for what and to whom? Exploratory inductive research
thus cannot escape a critical positioning of the researcher and his or her interest and positionality
with regard to the research conducted.

Unlike confirmatory research, exploratory research does not aim at testing these
hypotheses, since they cannot be proved. Exploratory research instead asks how much a theory
and a hypothesis can explain, how well it can explain it, or how meaningful and fruitful an
explanation is. Exploratory research is successful if a previously formulated theory and a
hypothesis explain something very well, which means the explanation provides a strong and
robust connection between a cause and an outcome.

Exploratory research seeks to provide new explanations that have been previously
overlooked and it can do so through the active involvement of the researcher in the process of
amplifying his or her conceptual tools to allow him or her to raise new questions and provide
new explanations of a given reality from a new angle. As the process of “making sense” of a
phenomenon is a gradual process that can be compared to a learning process, exploratory
research is characterized by a process of reformulating and adapting explanations, theories,
and initial hypotheses inductively. It begins, in other words, similarly to deductive research, with
previously formulated theories - but it does not stop there. Instead, it uses empirical data to
refine, adapt, or specify and reformulate theories and initial hypotheses to the point that the
observed makes more sense to the observer and is thus explained better, i.e. in a more plausible
and consistent way. Instead of a pure discovery, we must content ourselves in this way, with a
gradual expansion of our conceptual tools of perception that allows us a better, or deeper
understanding of the world based on what we already know. Exploration thus starts at the same
place of deduction, namely with the explicit formulation of theories and hypotheses. But different
from deduction, exploration seeks to refine, adapt, or change the initial explanation in an iterative process of applying other explanations to the observation, in a back-and-forth between theory and reality.

4.9 ADVANTAGES OF EXPLORATORY RESEARCH


1. Flexibility and adaptability to change

2. Exploratory research is an effective foundation or groundwork for future studies.

3. Exploratory studies help one to understand whether the research will work and whether the researcher can proceed in that direction.

4.10 DISADVANTAGES OF EXPLORATORY RESEARCH


1. Exploratory studies generate qualitative information and interpretation of such type
of information is subject to bias.

2. These types of studies usually make use of small number of samples that may not
adequately represent the target population. Accordingly, findings of exploratory
research cannot be generalized to a wider population.

3. Findings of such studies are not usually useful for decision making at a practical level.

4.11 SUMMARY

This lesson dealt with exploratory research, an important type of research in social sciences.
Changes occur in society day by day, and so does their effect on individuals' behaviour and attitudes.
This type of research also helps to discover or uncover unexplored problems, which is crucial in developing countries. Exploratory research, as the name implies, intends merely to explore
the research question and does not intend to offer final and conclusive solutions to existing
problems. This type of research is usually conducted to study a problem that has not been
clearly defined yet. Exploratory studies help to fulfill the researcher's curiosity and need for greater understanding, to test the feasibility of starting a more in-depth study, and also to develop the methods to be used in future research.

4.12 KEY WORDS

Exploratory research: Exploratory research design explores the research questions, leaving room for further research.

Conclusive research: Conclusive studies identify the final information that is the only
solution to an existing research problem.

Causality: Causality is a way of looking at the world, giving it a specific order and putting
it into mental sequence. Causes, in short, beyond the trivial ones, are imprinted on the world by
the observer or researcher.

Experience Survey: In experience surveys, it is desirable to talk to persons who are well
informed in the area being investigated.

4.13 CHECK YOUR PROGRESS


1. What is an exploratory research?

2. Write the difference between exploratory and conclusive research.

3. Why do exploratory studies mostly adopt qualitative methods?

4. Write the purposes of exploratory research.

4.14 ANSWERS TO CHECK YOUR PROGRESS


1. Exploratory research, as the name implies, intends merely to explore the research
question and does not intend to offer final and conclusive solutions to existing
problems. This type of research is usually conducted to study a problem that has
not been clearly defined yet.

2. Refer to in-text content, Section 4.2

3. Refer to in-text content, Section 4.7

4. Refer to in-text content, Section 4.3

4.15 MODEL QUESTIONS


1. Write the advantages and disadvantages of exploratory research.

2. Write the importance of exploratory research.

3. State the characteristics of exploratory research.

4. Explain the ways of formulating hypothesis in exploratory research.

REFERENCES

Reiter, B. (2017). Theory and methodology of exploratory social science research. Government and International Affairs Faculty Publications, 132. Retrieved from http://scholarcommons.usf.edu/gia_facpub/132

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London,
England: Routledge.

Coolican, H. (1994). Research Methods and Statistics in Psychology. London: Hodder &
Stoughton.

Crano, W. D., & Brewer, M. B. (2002). Principles and Methods of Social Research. Mahwah,
NJ: Lawrence Erlbaum Associates Publishers.

Festinger, L., & Katz, D. (Eds.). (1953). Research Methods in the Behavioural Sciences. New York: Holt, Rinehart and Winston.

Howitt, D., & Cramer, D. (2011). Introduction to Research Methods in Psychology. Harlow,
Essex: Pearson Education Inc.

Katz, D. (1953). Field studies. In L. Festinger & D. Katz (Eds.), Research Methods in the Behavioural Sciences. New York: Holt, Rinehart and Winston.

Lovely Professional University. (2012). Research Methodology. Retrieved from http://ebooks.lpude.in/management/mba/term_2/DCOM408_DMGT404_RESEARCH_METHODOLOGY.pdf

Retrieved from https://research-methodology.net/research-methodology/research-design/exploratory-research

LESSON - 5
CONCEPTUALIZING RESEARCH QUESTIONS
INTRODUCTION

After learning about qualitative and quantitative research in the earlier chapters, we need to understand the conceptualization of the research question. Both qualitative and quantitative research are carried out with a research question based on the researcher's interest, supported by research and theoretical evidence. The literature is collected from different types of sources, after which the steps in conceptualizing the research question are carried out. Let us look into the details of how to conceptualize research questions in this chapter.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

• Understand what conceptualization is

• Explain the steps in conceptualizing research questions

• Understand the sources of data and literature

PLAN OF THE STUDY


5.1 Conceptualization in Research

5.2 Steps in Conceptualization

5.3 Step One in Conceptualization

5.3.1 Secondary Sources: The Literature

5.3.1.1 Academic Journal Articles

5.3.1.2 Academic or Professional Books

5.3.1.3 Professional Reports

5.3.2 How to Use the Literature to Conceptualize

5.3.2.1 Steps in Conceptualization

5.3.3 Primary: Using Raw Data

5.3.4 Early Mapping: Mind, Concept and Literature Techniques



5.3.4.1 Mind Mapping

5.3.4.2 Concept Mapping

5.3.4.3 Literature Mapping

5.4 Step Two in Conceptualization

5.4.1 What is my Intention?

5.4.2 What is my Research Problem?

5.4.3 Evolution

5.4.4 The Challenge: Empirical or Theoretical

5.5 Summary

5.6 Key Words

5.7 Check Your Progress

5.8 Answers to Check your Progress

5.9 Model Questions

5.1 CONCEPTUALIZATION IN RESEARCH

Constructing and writing a research proposal should be regarded as the core preliminary
activity for a researcher. It is the process of arriving at a clear and direct statement of the
research problem. The problem is often articulated as the aim of the research. It may vary from
seeking answers to a relatively straightforward question to developing deep understandings of
complex analytical problems. Examples of the former can be found in survey-type research, the
what-is-where or who-does-what type of questions, often posed in the early stages of a study or
in the early phases of a new field of research, to provide base data and a sense of the patterns
of information available within the study (Neuman 2003). Research conducted outside academe
often, although not necessarily, stops at this stage (for example, market research or data collection
associated with local government planning or legislative requirements) (Zikmund, 2003). However,
academic research seeks deeper understanding of the world, whether it is, for example, how
an ecosystem functions or what the underlying forces and influences were during a particular
historic event (Williams, 2000). Such deeper understanding demands closer analytical
techniques, and thus the aim of a research project, the research problem, demands an already
deeper understanding of the discipline and of higher-order analytical techniques that may be
applied (Ashley and Boyd 2006).

Whatever the case, all research is motivated by questions arising from prior work or
observations of the world, and defines an issue that needs to be analysed at some level. The
research problem, therefore, needs to contain some fundamental properties (Cresswell 1994;
Maykut and Morehouse 1994; Jones et al. 2000; Flyvbjerg 2001; Robson 2002). It must: (1)
articulate a real and justifiable issue, grounded, in all likelihood, in prior research experience,
and thus worth doing; (2) clearly relate to a body of relevant prior knowledge within the given
discipline; (3) be formulated with a full understanding of the appropriate analytical tools; and (4)
make some potentially original contribution to knowledge and/or theory.

Identifying the real and justifiable issue and appropriate analytical tools thus defines what
level of analysis is anticipated. In the growing world of research relevance, that is, where academic research is seen as both a scholarly endeavour and a practical activity providing potential tangible commercial, social, economic or environmental outcomes, the research problem may also be linked to expected outcomes and outputs. An important starting point, however, is the
identification of a viable topic and of a core question around which the research will develop.
The research proposal, therefore, must be rooted in some question to be answered, some
problem to be solved, some issue to be addressed.

5.2 STEPS IN CONCEPTUALIZATION

Conceptualization is the process of not only selecting a topic, but formulating a feasible,
possible and practical research problem. Good conceptualization involves moving from a general
topic to a clear research problem. There are ten steps of conceptualization and conduct of
qualitative research spelt out by Chenail (2011). They are:

1. Reflect on What Interests You

2. Draft a Statement Identifying your Preliminary Area of Interest and Justifying Its
Scholarly and/or Practical Importance

3. Hone your Topic Focus

4. Compose your Initial Research Question or Hypothesis

5. Define your Goals and Objectives

6. Conduct a Review of the Literature

7. Develop your Research Design



8. Conduct a Self-assessment in Order to Determine What Strengths You Have That Will Be Useful in your Study and What Skills You Will Need to Develop in Order to
Complete your Study

9. Plan, Conduct, and Manage the Study

10. Compose and Submit your Report

There are two essential steps in doing research:

1. Step One: Identification and conceptualization of research topic.

2. Step Two: Narrowing down the research topic based on possibility, feasibility and
practicality of solving the problem based on theory and research evidences.

5.3 STEP ONE IN CONCEPTUALIZATION

Researchers are deeply curious about the social world. Many budding researchers,
however, are interested in many topics that may or may not be related. Initially, pinning down a
topic is useful for guiding researchers toward the literature and some preliminary sources of
data. Initial exploration is essential to understand the background of the research. This part of
conceptualization helps with the groundwork, which saves time and cuts down on mistakes. The
primary and secondary sources for starting research are based on data. Secondary sources
are generally one step removed from the original event or people and include published academic
and professional articles, commonly referred to as ‘the literature’. Primary sources include
materials that are produced by, for, or about the people, group, organization or event under
study by persons who have direct and intimate knowledge or experiences. We also discuss the
possibility of conducting some preliminary data collection.

5.3.1 SECONDARY SOURCES: THE LITERATURE

The literature will be your first and important method for development of a research. The
literature includes three main sources: a) academic journal articles; b) academic or professional
books; and c) research reports.

5.3.1.1 Academic Journal Articles

The first and most common source is published journal articles. These articles are peer
reviewed and can be accessed through a variety of sources. The term ‘peer reviewed’ means
that the articles have been reviewed usually by two or three experts, and have likely been
screened by the editor of the journal. While journals vary in terms of the degree to which articles
are scrutinized, and in many cases rejected, the process provides a measure of quality control.
These academic journal articles may either be research articles or theoretical articles. Research
articles use primary (e.g., interviews conducted by the author) or secondary (e.g., archival
materials) sources of data to advance a particular original idea, argument or theory. Theoretical
articles rather than relying on primary or secondary data (though the author may refer to such
data) attempt to advance or critique a particular theoretical concept or framework, or make an
original theoretical contribution to the literature.

5.3.1.2 Academic or Professional Books

The literature also includes academic or professional books on your topic. There are four
main types of books. Firstly, academic or scholarly books include original research and chapters
that collect a variety of data to frame a particular issue or make an original contribution. Secondly,
popular original works that target a wider audience, but may still be authored by experts. Thirdly,
original or reprinted edited collections that can provide a different kind of breadth by marshalling
chapters from a variety of authors and perspectives on a particular topic. Edited collections can
include a series of original contributions such as previously unpublished data, concepts,
frameworks or theories. They can also include reprinted material either in its entirety (e.g., one
chapter that has been reprinted from a previously published book or article) or a summary of an
original contribution. Finally, scholarly encyclopedias which are typically produced for a particular
discipline or sub-field (e.g., Health), or around a particular theme (e.g., Social Welfare). These
sources will not provide you with a comprehensive examination of any one topic, but will provide you with a summary of hundreds of key terms, concepts, theories or methods, depending on the
focus of the encyclopedia. Such sources may help you formulate a handful of working definitions
that you can use when discussing your key terms or concepts. Most also include cross-references
and suggestions for further reading.

5.3.1.3 Professional Reports

Professional reports include published research, theory, review and working papers. Most
government agencies, think tanks, professional associations, advocacy groups or arms length
research consortiums produce professional reports that are widely available to the public online.
Examples of such government bodies or organizations include UNESCO, WHO, the US Census
Bureau, and the Ontario Ministry of Education. All of these agencies post online research articles,
executive summaries or press releases that are chock full of original and secondary data,
policy recommendations, and literature reviews.

5.3.2 HOW TO USE THE LITERATURE TO CONCEPTUALIZE

The literature, when used properly, can be a powerful conceptualization tool and can help
one to identify theories, terminologies or concepts, methods, or data (Maxwell, 2005). Once one has identified the key questions, theories and concepts that dominate the literature on the topic, one can start to identify what is not known, which will aid in conceptualization.

5.3.2.1 Steps in conceptualization:


1. Search the literature on your topic.

2. First identify key theories, terminologies, concepts, methods, data and interpretations
presented in the literature. Second identify what is not known, missing or problematic
in the literature.

3. Verify that your rendering of the literature is correct.

4. Start to narrow in on the one or two ‘holes’ that you have identified to construct your
research problem and research questions.

5.3.3 PRIMARY: USING RAW DATA

The use of primary sources of data is not limited to the ‘data collection’ phase of a project.
There are two main sources of primary data that are worth considering for conceptualization
purposes. The first source is raw data collected from or about individuals or groups. Beyond reviewing primary data for conceptualization purposes, you can also consider how these data may capture important dimensions of your topic and be used as data in their own right. The second source is the raw data that you collect or produce yourself, sometimes referred to as a 'pilot project'. Some preliminary fieldwork, interviews or analysis of materials is an excellent way to get an idea and to work out the direction and
focus of your project. Pilot projects are not only incredibly important to work out key data collection
instruments (e.g., an interview schedule) but can fundamentally shape the scope and direction
of a project.

5.3.4 EARLY MAPPING: MIND, CONCEPT AND LITERATURE TECHNIQUES

‘Mapping’ is routinely used in qualitative research, particularly at the beginning stages of


data analysis. Mapping is a ‘graphical tool for organizing and representing knowledge’ (Wheeldon,
2010: 90). Such visual aids can serve as a powerful tool at many stages of a project by allowing
(or forcing) researchers to classify and organize information in manageable form. Faced with
mountains of data, including interview transcripts, field notes, documents or pictures and videos,
researchers use this technique to identify the relationships, or organizational processes, and
the linkage between data and concept or theoretical ideas. Importantly, mapping allows
researchers to embed these understandings within a broader contextual framework. Ideally,
mapping requires researchers to think about their classification schemes, and the underlying
logic that guides their decision-making. Below are three kinds of mapping techniques: Mind
Mapping, Concept Mapping and Literature Mapping.

5.3.4.1 MIND MAPPING

Mind maps are usually organized around one central idea, concept or theme, and tend to
be more informal and flexible (Buzan & Buzan, 2000). Mind maps are perfect for researchers
who are newer to a topic. Mind maps allow researchers to get a handle on the central
characteristics, themes or concepts. Mind maps have the following characteristics:

• Visual representation of key themes, concepts, ideas, organizations, people, units or theories.

• Built around one central idea or theme, as a flow chart or as a 'tree' diagram (Miles & Huberman, 1994).

• The use of simple lines to articulate connections.

• The potential to use different shapes to symbolize different components (e.g., using squares for organizations; circles for people) or different emphases (e.g., using squares for components directly related to the core; circles for components on the periphery).

• Flexible and less structured.
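As a purely hypothetical illustration, and not part of the original text, such a single-centre 'tree' can be sketched in Python as a nested dictionary whose only top-level key is the central idea; every topic name and the helper function below are invented for the example.

# Hypothetical sketch: a mind map stored as a nested dictionary (a simple tree).
mind_map = {
    "online shopping": {                                   # the central idea
        "motivations": ["price", "convenience"],
        "barriers": ["trust", "delivery delays"],
        "people involved": ["buyers", "sellers", "couriers"],
    }
}

def print_map(node, indent=0):
    """Print the map as an indented outline, one level per branch."""
    if isinstance(node, dict):
        for key, value in node.items():
            print(" " * indent + "- " + key)
            print_map(value, indent + 2)
    else:                                                  # a leaf: a list of sub-ideas
        for item in node:
            print(" " * indent + "- " + item)

print_map(mind_map)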

5.3.4.2 CONCEPT MAPPING

Concept mapping by contrast is more structured, and often includes multiple ideas,
concepts or themes as well as people, groups or organizations. Concept maps are suitable for
researchers who have a reasonable grasp of the literature or topic under study. Concept maps
are more structured and multifaceted, and based on an understanding of the context that they
will be used in (Novak & Cañas, 2006). Concept mapping includes structuring statements,
words, and people, groups or organizations based on either what is known or theorized about
the topic of interest. Concept maps also include words, symbols and shapes to explain the
nature or strength of relationships between two or more units. Rather than flowing from one
concept or idea, concept maps represent multiple start points which may or may not be related
to every other unit. Concept maps have the following characteristics:

• A multi-hierarchical representation of information. Hierarchies may be based on relative importance, a process, or moving from the general to the specific.

• 'Information' may include not only key ideas, concepts, characteristics and people, groups or organizations, but also examples.

• The use of boxes, circles or other shapes to differentiate various kinds of information (e.g., circles to represent theories and boxes to represent concepts).

• The use of cross-links which include simple lines, directional arrows or circles to articulate a relationship between the various characteristics, outcomes and concepts/ideas or units.

• The use of linking words (e.g., more, less), shapes (e.g., squares for countries, circles for economic policies) or symbols (e.g., %, +) to explain or elaborate on a particular relationship. The structure of the concept map and the nature of the relationships are context dependent.
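A comparable, purely hypothetical Python sketch, again not from the source, stores the cross-links of a concept map as a list of labelled edges; the concepts and linking words are invented examples.

# Hypothetical sketch: a concept map as a list of labelled cross-links (edges).
concept_map = [
    ("screen time", "increases", "sleep problems"),
    ("parental rules", "reduce", "screen time"),
    ("sleep problems", "lower", "academic performance"),
]

# Print every relationship the map encodes, one statement per line.
for source, link, target in concept_map:
    print(f"{source} --{link}--> {target}")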

5.3.4.3 LITERATURE MAPPING

Similar to mind and concept mapping, in literature mapping the intention is to generate a
visual representation. Rather than focusing on key concepts, the point is to map out the literature
by theory, methods and data, time period, context, interpretation or emphases, or geography.
The goal is to identify similarities, connections, intersections, differences, and even holes in the
literature. These maps can be immensely useful for situating your study within the literature as
well as highlighting one or two representative articles, books or reports (Creswell, 2003: 39).
Beyond conceptualization, including a literature map (either in the body or as an appendix) in a
thesis, article or book can be a very effective tool. Literature maps have the following
characteristics:

· Organized around one central dimension of the literature, several dimensions of the literature, or as a multi-hierarchical representation of the literature.

· Literature may be organized in a variety of ways, including by theory or time period. Literature may be represented in a manner similar to a mind or concept map or as a chart.

· Literature maps in the spirit of mind or concept maps can use boxes, circles or other
shapes to differentiate various kinds of information.

· Literature maps in the spirit of mind or concept maps will use cross-links which
include simple lines, directional arrows or circles to articulate a relationship between
the various characteristics, outcomes and concepts/ideas or units.

General steps to a literature map

1. Start to categorize the literature you have found around some broad organizing
logic (e.g., by theory, method, time period, etc.)

2. Label each box or row based on your organizing logic (e.g., years 1850–1900)

3. Specify major publications. You may want to add a column that provides some kind
of description or detail.

4. Consider adding additional layers or rows/columns to include ‘sub-sub-topics’.

5. In the case of flow chart or 'tree' style literature maps, use lines to connect or signify a shortcoming, strength, or synergy between two or more groupings of the literature.

5.4 STEP TWO IN CONCEPTUALIZATION

Before we identify what a research problem is, it is necessary to identify what it is not. The
‘problem’ we are referring need not have anything on social justice dimension of our research.
So simply stating that a financial crisis created a lot of heartache does not sufficiently justify our
research work. A research problem is also not the same as your research questions. Research
questions are specific and focused inquiries that derive from the research problem. Instead, the
research problem articulates the gap in the literature or conceptual and analytical shortcoming
that you plan on addressing in your research. Articulating the research problem will speak
directly to how you will eventually craft your purpose statement, since it similarly forces you to
articulate ‘why you want to do the study and what you intend to accomplish’ (Locke et al., 2000).

5.4.1 What is my intention?

To answer the ‘What is my problem?’ question, researchers must first answer the ‘What
is my intention?’ question. The nature of the problem formulation will be very much shaped by
the kind of contribution you hope to make, a particular approach to research (e.g. more inductive)
and your intended audience. Steps to find out ‘What is my intention?’:

1. Identify your target audience. Your initial target audience will determine the range of
early problem formation strategies.

2. Based on your review of relevant literature and other resources, identify a research
problem based on what your specific audience already knows and wants to know.

3. Articulate your specific research intention in a way that aligns with your target
audience and research problem formation.

At the beginning stages of any project, it is hard to predict the potential impact of your
work. If you are fortunate, you may be pleasantly surprised when people beyond your initial
target audience like your work, including researchers from other disciplines or the media.
Additionally, as you become a more experienced researcher and writer, you will learn how to
package your research in a variety of ways. So starting off with a clear target audience, at least
in the interim, certainly does not limit a researcher from disseminating his/her findings more
widely. However, if you are less experienced, articulating your intended audience and purpose
will improve your chances of crafting a project that meets your more immediate research goals,
and inform how you write up or present your research. If your primary intention is to affect a
policy, then writing up your findings in a manner that relies too heavily on specialized terminology
or complicated theories from your discipline will be of little use.

5.4.2 What is my research problem?

Once the intention and immediate target audience are identified, the question of how you
plan on connecting and contributing to that group looms large. We first discuss five common
ways researchers can articulate their research problem. Strengthening your research problem
rationale also forces you to orient your project and address gaps in the literature; it may also
connect you to a potential research design. However, depending on the approach to qualitative
research, the problem formation may be developed at different stages of the project. We do not
seek to impose a specific timeline on when the research problem occurs, but rather stress the
importance of evolving your research problem formation in a manner that speaks to your audience
and to your approach.

Many of us are inspired by personal circumstances or experiences such as a family member's occupation, a difficult illness or an event such as a divorce. We are also motivated by
practical problems. Yet a personal or practical problem is not the same as a researchable
problem that will be of interest to your audience. Instead, you must build on your inspiration and
articulate the conceptual holes in the literature on that topic. To summarize, a personal problem
is not the same as a research problem unless you are able to communicate its wider scholarly
significance beyond your personal interests or experiences.

Most of us engage in what Kuhn (1962) referred to as 'normal science', an addition or extension to the existing literature. There are certain types of studies that are perfectly reasonable
and can make a very valuable contribution to the literature either by reinforcing or extending
previous research in the area. Yet adding a new case does not automatically make for an
interesting research problem. If previous research on your topic has been largely conducted in
China, simply adding an Indian case study is not a good enough problem rationale. You must
first articulate why the new case is a meaningful extension to the literature, why the new case is
a suitable addition or why it makes for an interesting point of similarity or comparison. To
summarize, you need to justify how your addition transforms our understanding of the topic
through new data, a conceptual framework or methodology. One needs to convince the audience that the addition makes a significant contribution to the literature or addresses some wider policy or public concern, rather than fooling oneself that 'more' or 'new' data must mean 'more' understanding.

5.4.3 Evolution

Questions that deal with what or how something occurred, how it was experienced, or
how group members made sense of a particular event are routinely posed by qualitative
researchers. These types of inquiry also span theoretical approaches – from grounded theory
to more deductive process tracing (Bennett & Elman, 2006). Like quantitative researchers,
qualitative researchers can examine the process of a particular thing retrospectively; but unlike
quantitative researchers, qualitative researchers can examine how something evolves or is
experienced in real time. You may, for example, be interested in how patients experience a
particular healthcare protocol or how school staff implement a new bullying prevention
programme. But why should this be interesting to anyone? In summary, examining a process or
change is only useful if you are able to clearly articulate how it makes a meaningful extension to
the literature.

5.4.4 The challenge: empirical or theoretical

When articulating your research problem, we note the importance of outlining problems
or omissions from the literature. However, articulating a conceptual, methodological or theoretical

gap is not the same as throwing a metaphorical hand grenade and ducking for cover. Less
experienced researchers will often feel like they have to identify and demolish the literature with
a mocking review or an assertion that ‘no one has looked at X problem’ before. Such
proclamations are often wrong, are less sophisticated and quite frankly are usually not terribly
interesting. This is not to say that this tactic is not used, and used quite effectively, but such
arguments are usually advanced by someone after years of careful scholarship or after a major
research discovery. In summary, the relative weakness of the literature is more likely based on
less than ideal data, substandard data analysis, a failure to capture a dimension of the problem
at hand, or new evidence that casts some doubt on the original analysis.

5.5 SUMMARY

This chapter outlined the steps in the conceptualization of a research problem. It presented strategies for selecting a topic, including secondary and primary sources and various kinds of concept or literature mapping techniques. Next, the research topic must be transformed into a research problem suitable for sound scholarly investigation. The research problem should be articulated in a way that reflects the importance of determining the audience and of developing a clear understanding of the conceptual, theoretical or empirical gaps in the literature.

5.6 KEY WORDS


Conceptualization: Conceptualization is the process of not only selecting a topic, but
formulating a feasible, possible and practical research problem.
Peer reviewed Journals: The term ‘peer reviewed’ means that the articles have been
reviewed usually by two or three experts, and have likely been screened by the editor of the
journal.
Professional reports: Professional reports include published research, theory, review
and working papers.
Mapping: ‘Mapping’ is routinely used in qualitative research, particularly at the beginning
stages of data analysis.
Mind maps: Mind maps are usually organized around one central idea, concept or theme, and tend to be more informal and flexible.
Concept maps: Concept maps are more structured, and often include multiple ideas, concepts or themes as well as people, groups or organizations.
Literature maps: Literature maps organize the literature by theory, methods and data, time period, context, interpretation or emphasis, or geography.

5.7 CHECK YOUR PROGRESS


1. What is conceptualization?

2. _________ is routinely used in qualitative research, particularly at the beginning stages of data analysis.

3. What are the two steps in conceptualization?

5.8 ANSWERS TO CHECK YOUR PROGRESS


1. Conceptualization is the process of not only selecting a topic, but formulating a
feasible, possible and practical research problem.

2. Mapping

3. 1. Identification and conceptualization of research topic.

2. Narrowing down the research topic based on possibility, feasibility and practicality
of solving the problem based on theory and research evidences.

5.9 MODEL QUESTIONS


1. Write in detail the steps involved in conceptualizing a research problem.

2. What are the various secondary sources of data? Explain.

3. Discuss in detail the mapping techniques with suitable examples.

REFERENCES

Aurini, J. D., Heath, M., & Howells, S. (2016). The How to of Qualitative Research. Los Angeles, CA: Sage Publications.

Boyd, W.E. (2009). Formulating and conceptualizing the research problem. School of
Environmental Science and Management Papers. Retrieved from https://www.researchgate.net/
publication/41083811_Formulating_and_conceptualizing_the_research_problem

Chenail, R. J. (2011). Ten Steps for Conceptualizing and Conducting Qualitative Research
Studies in a Pragmatically Curious Manner. The Qualitative Report, 16(6), 1715-1732. Retrieved
from https://nsuworks.nova.edu/tqr/vol16/iss6/13

LESSON - 6
ISSUES OF PARADIGM
INTRODUCTION

Qualitative research seeks out the 'why', not the 'how', of its topic through the analysis of unstructured information – interview transcripts, e-mails, notes, feedback forms, photos and videos. It does not rely on statistics or numbers, which are the domain of quantitative researchers. Qualitative research is used to gain insight into people's attitudes, behaviours, value systems, concerns, motivations, aspirations, culture or life-styles. It is used to inform business decisions, policy formation, communication and research.

OBJECTIVES OF THE STUDY

After studying this lesson you will be able to:

 To understand the different types of research and the paradigm shift

 To emphasize the importance of scientific method

 To explain the meaning of theoretical sampling

PLAN OF THE STUDY


6.1 Paradigm

6.2 The Scientific Method

6.3 Paradigms as Human Constructions

6.4 Theoretical Sampling

6.5 Summary

6.6 Key Words

6.7 Check Your Progress

6.8 Answers to Check your Progress

6.9 Model Questions



6.1 PARADIGM

In The Structure of Scientific Revolutions (1962), Thomas Kuhn used the term 'paradigm' in two ways:

1. to represent a particular way of thinking that is shared by a community of scientists in solving problems in their field, and

2. to represent the "commitments, beliefs, values, methods, outlooks and so forth shared across a discipline" (Schwandt, 2001, pp. 183-184).

A paradigm is a way of describing a world view that is informed by philosophical assumptions about the nature of social reality (known as ontology – i.e., what do we believe about the nature of reality?), ways of knowing (known as epistemology – i.e., how do we know what we know?), and ethics and value systems (known as axiology – i.e., what do we believe is ethical and of value?) (Patton, 2002). A paradigm thus leads us to ask certain questions and to use appropriate approaches to systematic inquiry (known as methodology – i.e., how should we study the world?). Ontology relates to whether we believe there is one verifiable reality or whether there exist multiple, socially constructed realities (Patton, 2002). Epistemology inquires into the nature of knowledge and truth by asking the following questions: What are the sources of knowledge? How reliable are these sources? What can one know? How does one know if something is true? For instance, consider that some people think that the notion that witches exist is just a belief. Epistemology asks further questions: Is a belief true knowledge? If you say witches exist, what is the source of your evidence? What methods can you use to find out about their existence? Together, these paradigmatic aspects help to determine the assumptions and beliefs that frame a researcher's view of a research problem, how he/she goes about investigating it and the methods he/she uses to answer the research questions.

A paradigm is a shared world view that represents the beliefs and values in a discipline
and that guides how problems are solved (Schwandt, 2001).

A paradigm may be viewed as a set of basic beliefs (or metaphysics) that deals with
ultimate or first principles. It represents a worldview that defines, for its holder, the nature of the "world", the individual's place in it and the range of possible relationships to that world and its
parts. The beliefs are basic in the sense that they must be accepted simply on faith (however
well argued); there is no way to establish their ultimate truthfulness. If there were, the philosophical
debates reflected here would have been resolved millennia ago.

6.2 THE SCIENTIFIC METHOD

In the context of describing the solid philosophical foundation of research, a little needs to
be said about the advancement of positivism and the ‘scientific method’. Positivism had a major
influence on the way social enquiry developed over the last century, and provides the wider
backdrop against which qualitative research evolved and matured. Indeed, it has been argued
that qualitative researchers often define their approach in opposition to the perceived tenets of
positivism and the ‘scientific method’ (Denzin & Lincoln, 2011).

A basic issue in all these philosophical debates surrounds the conception of ‘scientific’
investigation and what it constitutes. Indeed, a few have suggested that there is a ‘story book’
image of scientific enquiry (Reason and Rowan, 1981), a scientific ‘fairy tale’ (Mitroff, 1974), in
which depictions of the way scientific investigation is carried out bear no resemblance to the
reality of what innovative scientists actually do. There are also challenges to the idea that the
natural sciences – physics and mathematics in particular – should be taken as the originating
disciplines for defining what counts as ‘scientific’ (Hughes and Sharrock, 1997; Sloman, 1976).
Such debates have gained considerable momentum over recent decades and, perhaps most crucially, there is now a body of literature which argues that the natural world is not as stable and law-like as has been supposed (Firestein, 2012; Lewin, 1999; Ness, 2012; Williams, 2000) and that scientists employ inductive as well as deductive methods. All of these issues raise important
questions about the status of ‘scientific method’ around which so much epistemological debate
in the social sciences has taken place.

6.3 PARADIGMS AS HUMAN CONSTRUCTIONS

A paradigm represents simply the most informed and sophisticated view that its proponents have been able to devise, given the way they have chosen to respond to the three defining questions (the ontological, epistemological and methodological questions). And, we argue, the sets of answers given are in all cases human constructions; that is, they are all inventions of the human mind and hence subject to human error. No construction is or can be incontrovertibly right; advocates of any particular construction must rely on persuasiveness and utility rather than proof in arguing their position. What is true of paradigms is true of our analyses as well. Everything that we shall say subsequently is also a human construction. The reader cannot be compelled to accept our analyses, or our arguments, on the basis of incontestable logic or indisputable evidence; we can only hope to be persuasive and to demonstrate the utility of our position (Guba & Lincoln, 1989; House, 1977).

In the qualitative research paradigm, a multidisciplinary group often conducts the inquiry process for exploring and understanding social or human problems, and adopts interpretative techniques to explore phenomena in naturalistic situations. It also explores subjective issues by adopting a holistic approach, analysing the physical, emotional, spiritual, mental, social and environmental factors of a subject matter that is unclear or about which little is known. Qualitative researchers often collect a variety of empirical materials by means of case studies, personal experiences, introspection, life story interviews, observation, and historical, interactional and visual texts that describe routine and problematic moments and meanings in individuals' lives.

There is a paradigm conflict between the proponents of the positivist paradigm (quantitative researchers) and the proponents of the naturalistic paradigm (qualitative researchers). The proponents of the positivist paradigm have often undermined the usefulness of the naturalistic paradigm; they consider that the quality of a study depends on the extent to which the validity and reliability of the measurements are established and on the degree of generalizability of the findings. The proponents of the positivist paradigm therefore view qualitative research from their own positivist perspective, and qualitative research is often criticized as lacking representativeness (as studies are conducted on small samples, findings cannot be generalized), replicability (as the findings cannot be repeated or replicated in other settings), and reliability (as consistent findings cannot be obtained when unstructured or semi-structured instruments are used), and as suffering from reactivity (as human beings often react differently to a stimulus depending on their mental state, so consistent findings cannot be obtained).

Moreover, the proponents of the quantitative research paradigm raise further objections that seek to confine qualitative research activities, in relation to the following:

 Due to prolonged and repeated data collection, the ethical rights of the respondents are often said to be violated

 Adopting an emergent design without a pre-specified methodology decreases the validity and reliability of the findings

 Informed consent is often questioned, as subjective issues are gathered

 During in-depth interviews and observation, confidentiality and anonymity cannot be maintained

 Sensitive issues are gathered, which may also violate the rights of the respondents

 Multiple methods are used for data collection, and a panel drawn from a multidisciplinary team is used for data analysis, and

 The style of reporting findings is rather tedious, as thick, narrative, voluminous information is often presented.

However, from the perspective of the naturalistic paradigm, the qualitative research paradigm encompasses certain decisive features which are quite different from those of the quantitative research paradigm. The qualitative research paradigm has a number of critical, distinctive features:

 it adopts an emergent design without a predetermined structure and is implemented in naturalistic settings;

 it is a context-bound form of research that uses inductive reasoning;

 it explains the phenomenon of interest from a holistic perspective and uncovers patterns of human behaviour and realities;

 it believes in multiple realities when identifying the issues of a phenomenon;

 although it selects small samples purposively, it provides detailed subjective descriptions from the research participants and seeks to understand the whole;

 data analysis is labour-intensive, and data collection and data analysis proceed side by side in a recursive manner;

 findings are analysed into codes, categories, concepts and declarative themes;

 it incorporates multiple perspectives, including the voices of respondents as well as key informants' accounts;

 it initiates strong interaction between the researchers and those being researched, and explicitly portrays and acknowledges the value-laden nature of the research, in which internal values and interests often emerge from informants;

 it takes a longer period to explore in-depth information, and determines accuracy by verifying the information through 'triangulation' among different investigators, methods of data collection, theories or sources of data;

 it produces thick, narrative information.

Furthermore, the proponents of the naturalistic paradigm believe that the rigour of the qualitative research paradigm depends on five major components: (a) trustworthiness (the overall reliability and validity of qualitative research); (b) credibility (the appropriateness and accuracy of data sources and of the interpretation of findings, including member checking); (c) transferability (the representativeness, in terms of the contextual boundaries of the findings, which enables inferences to be made about the transferability of the findings); (d) confirmability (the researchers keep detailed records, an audit trail, of the data collection methods and data analysis procedures to reveal in detail why and how they arrived at their conclusions); and (e) constancy/dependability (a record of coding and analysis procedures, such as comparing and revising codes, maintained through the audit trail and checked by an 'inquiry auditor', covering record-keeping procedures as well as the products of the investigation, i.e. the findings and interpretations).

Furthermore, each research paradigm has its own pros and cons. In order to curtail the inherent weaknesses of each paradigm (both the qualitative and the quantitative research paradigm), a new research paradigm has been developed: the mixed research paradigm, which is the combination of the quantitative and the qualitative research paradigms. Each research paradigm needs to be viewed from its own perspective, as the two paradigms, with their own decisive features, are complementary rather than contradictory and can counteract each other's inherent constraints. One needs to evaluate the two paradigms from the distinct viewpoints appropriate to each. As the two paradigms are equally important in generating scientific knowledge, both need to be employed so that their intrinsic limitations can be balanced.

6.4 THEORETICAL SAMPLING

Theoretical sampling is a tool that allows the researcher to generate theoretical insights by drawing on comparisons among samples of data. The data can include populations, events, activities, or even time periods. Data remain opaque if the researcher develops no sensitivity to the potential differences and similarities among a variety of classes or samples of data. More importantly, the choice of data samples allows the researcher to draw out the theoretical aspects of the research. For instance, data generated in a study of horseback riding by the disabled might lack depth and understanding if the researcher chooses to ignore the kinds of participants involved in the many aspects of this form of horseback riding, such as the disabled person (and this by age or gender), the parents or guardians of that person, the organizers of horseback riding events, and those responsible for dressing the horses. The researcher might also find it fruitful to conduct a theoretical sample of subgroups, namely horseback riding for the disabled in rural, semirural and urban settings.

In theoretical sampling, data are collected on an ongoing, iterative basis, and the researcher
keeps on adding to the sample until there is enough data to describe what is going on in the
context or situation under study and until ‘theoretical saturation’ is reached. As one cannot
know in advance when this point will be reached, one cannot determine the sample size or
representativeness until one is actually doing the research. In theoretical sampling, data collection
continues until sufficient data have been gathered to create a theoretical explanation of what is
happening and what constitutes its key features. It is not a question of representativeness, but,
rather, a question of allowing the theory to emerge. Theoretical sampling, as Glaser and Strauss (1967) write, is 'the process of data collection for generating theory whereby the analyst jointly collects, codes, and analyses his data and decides what data to collect next and where to find them, in order to develop his theory as it emerges'. This process of data collection is controlled by the emerging theory. They write that 'the basic criterion governing the selection of comparison groups for discovering theory is their theoretical relevance for furthering the development of emerging categories' (Glaser and Strauss, 1967: 49), rather than, for example, conventional sampling strategies.

A theoretical sample brings into relief a variety of experiences that can be compared to generate concepts and theory. The typical basic research process often does not allow a researcher to set out the samples at the start. Rather, as the researcher first immerses him- or herself in the field setting, the potential for creating theoretical samples becomes more obvious. In some cases, theoretical sampling involves further differentiations among classes of data, whether they pertain to activities, events, documents, or time periods. The theoretical sample is a simple but highly effective tool that can spark further insights and save time. Moreover, the use of theoretical sampling pushes the researcher in new directions, stretching the diversity of data gathered for the purpose of developing concepts and theories.

6.5 SUMMARY

A paradigm thus leads us to ask certain questions and to use appropriate approaches to systematic inquiry into how to study the world. The qualitative and quantitative paradigms were discussed, highlighting the significance and the disadvantages of both. It is important to understand the appropriateness of these paradigms and to use a mixed methodology, which would give a clearer and better picture of the research topic.

6.6 KEY WORDS

Paradigm: A paradigm is a shared world view that represents the beliefs and values in a
discipline and that guides how problems are solved

Theoretical sampling: Theoretical sampling is a tool that allows the researcher to generate
theoretical insights by drawing on comparisons among samples of data.

6.7 CHECK YOUR PROGRESS

1. The positivist paradigm is used in ___________ research and the naturalistic paradigm is used in ______ research.

2. ____________ reflects the need to ensure that the interpretations and findings match
the data.

6.8 ANSWERS TO CHECK YOUR PROGRESS

1. Quantitative and Qualitative

2. Confirmability

6.9 MODEL QUESTIONS

1. Write the issues of paradigms in relation to quantitative and qualitative research.

2. Write the importance of scientific method as a paradigm shift.

REFERENCES

Aurini, J. D., Heath, M., & Howells, S. (2016). The How to of Qualitative Research. New Delhi: Sage Publications.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London, England: Routledge.

Creswell, J. W. (1994). Research designs: Qualitative and quantitative approaches. Thousand Oaks, CA: Sage.

Denzin, N., & Lincoln, Y. (Eds.). (1994). Handbook of qualitative research. Newbury Park, CA: Sage Publications.

Given, L. M. (Ed.). (2008). The SAGE Encyclopedia of Qualitative Research Methods. New Delhi: Sage Publications India Pvt. Ltd.

Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand Oaks, CA: Sage.

Howitt, D., & Cramer, D. (2011). Introduction to Research Methods in Psychology. England: Pearson Education Ltd.

Lincoln, Y. S. (1992). Sympathetic connection between qualitative methods and health research. Qualitative Health Research, 2(4), 375-391.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic Inquiry. Thousand Oaks, CA: Sage.

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62(3), 279-300.

Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass.

Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches. New York, NY: Pearson Education Ltd.

Ritchie, J., Lewis, J., Nicholls, C. M., & Ormston, R. (2013). Qualitative Research Practice: A Guide for Social Science Students and Researchers. New Delhi: Sage Publications.

LESSON - 7
SINGLE SUBJECT DESIGN AND TIME SERIES
DESIGN
INTRODUCTION

True experimental studies generally include manipulated independent variables and either
equivalent groups for between-subjects designs or appropriate counterbalancing for within-
subjects designs. Anything less is quasi-experimental (‘‘almost’’ experimental). In general, a
quasi-experiment exists whenever causal conclusions cannot be drawn because there is less
than complete control over the variables in the study, usually because random assignment is
not feasible. Although it might seem that quasi-experiments are therefore lower in status than
‘‘true’’ experiments, it is important to stress that quasi-experiments have great value in applied
research. They do allow for a degree of control, they serve when ethical or practical problems
make random assignment impossible, and they often produce results that can have clear benefits
for people’s lives. In order to identify the cause and effect or the effectiveness of any intervention
or training we need to adopt a proper research design. Two important quasi experimental research
design are discussed in this chapter namely Single subject research design or N=1 design and
time series design.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

 To understand the importance of single subject design and its limitations.

 To explain the utility and outcome of time series design.

PLAN OF THE STUDY


7.1 Single Subject Design

7.1.1 Basic single-case experimental designs

7.1.2 Representation of data

7.1.3 Uses

7.1.4 Characteristics of single-subject experiments

7.1.5 Problems and limitations in single-subject designs



7.1.6 Differences between single-subject designs and traditional design


7.1.7 Advantages of single-subject designs

7.1.8 Disadvantages of single-subject designs

7.2 Time Series Design

7.2.1 Outcomes

7.3 Summary

7.4 Key Words

7.5 Check your Progress

7.6 Answers to Check your Progress

7.7 Model Questions

7.1 SINGLE SUBJECT DESIGN

A research design in applied psychology, education and human behaviour in which the subject serves as his/her own control, rather than another individual or group being used as a control, is called a single-subject design (also called a single-case research design). Single-subject designs are used by researchers because these designs are sensitive to individual differences, whereas group designs are sensitive to group averages.

There will often be a number of subjects in a research study using a single-subject design; however, since each subject serves as his or her own control, it is still a single-subject design. These designs are used primarily to evaluate the effect of a variety of interventions in applied research. In recent years, single-case research as an experimental methodology has extended to diverse fields such as clinical psychology, medicine, education, social work, psychiatry and counselling. These designs involve the continuous assessment of some aspect of human behaviour over a period of time, requiring the researcher to administer measures on multiple occasions within separate phases of a study. They involve 'intervention effects' which are replicated in the same subject(s).

7.1.1 BASIC SINGLE-CASE EXPERIMENTAL DESIGNS

There are two basic single-case experimental designs: the ABA and multiple baseline
designs.

ABA Designs: ABA refers to a specific type of research design in which there is a baseline period where no treatment is given and/or no variable is introduced (A), followed by a period in which the treatment or variable is introduced (B), and then a period in which the treatment is removed so the behaviour can be observed a second time (A). In this way one can measure behaviour before treatment, during treatment, and once treatment is removed. The ABA design is a simple model to understand. By using this design, the researcher attempts to demonstrate that an independent variable affects behaviour, first by showing that the variable causes the target behaviour to occur, then by showing that removal of the variable causes the behaviour to cease. For obvious reasons, these are sometimes called reversal designs. In ABA designs, the participant is first observed in the absence of the independent variable (the baseline or control condition). The target behaviour is measured many times during this phase to establish an adequate baseline for comparison. After the target behaviour is seen to be relatively stable, the independent variable is introduced and the behaviour is observed again. If the independent variable influences behaviour, some change should be seen in the behaviour from the baseline to the treatment period. (In many ways the ABA design can be regarded as an interrupted time-series design performed on a single participant.) However, even if the behaviour changes when the independent variable is introduced, the researcher should not hastily conclude that the change was caused by the independent variable. To reduce this possibility, the independent variable is then withdrawn. If the independent variable was maintaining the behaviour, the behaviour should return to its baseline level. The researcher can further increase his or her confidence that the observed behavioural changes were due to the independent variable by replicating the study with other participants. The design just described, in which A represents a baseline period with the independent variable absent and B represents an experimental period, is the simplest single-participant ABA design.
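
As an illustration only (this sketch and its numbers are hypothetical and not part of the original lesson), the following short Python fragment simulates one participant's behaviour across the three phases of an ABA design and summarises each phase by its mean, the kind of pattern a researcher would then inspect visually:

import random

random.seed(1)

def observe(n, base_rate, treatment_effect=0.0):
    # Return n observations of a target behaviour (e.g., outbursts per session).
    return [max(0, round(random.gauss(base_rate + treatment_effect, 1.0))) for _ in range(n)]

baseline_a1 = observe(10, base_rate=8)                        # A: no treatment
treatment_b = observe(10, base_rate=8, treatment_effect=-4)   # B: treatment introduced
baseline_a2 = observe(10, base_rate=8)                        # A: treatment withdrawn

for label, phase in (("A1", baseline_a1), ("B", treatment_b), ("A2", baseline_a2)):
    print(label, "mean =", round(sum(phase) / len(phase), 2))
# A drop in B and a return towards the A1 level in A2 would support the conclusion
# that the treatment, rather than some confound, changed the behaviour.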

The Multiple-Baseline Design: The multiple-baseline design also makes use of baseline
and treatment stages, but not by withdrawing a treatment. As the name suggests, researchers
establish several baselines when using a multiple-baseline design. The multiple-baseline design
demonstrates the effect of a treatment by showing that behaviour in more than one baseline
change following the introduction of a treatment. For example, we may test one person's behaviour in different situations. In this case, the first step in the multiple-baseline design is to record
behaviour (such as aggressiveness of a child) as it normally occurs in several situations (such
as at home, in the classroom, and at an after-school day-care facility). The researcher establishes
the baseline frequency of the behaviour in each situation (i.e., multiple baselines). Next the
treatment is introduced in one of the situations (e.g., situation at home), but not elsewhere. The

researcher continues to monitor behaviour in all of the situations. A critical feature of the multiple-
baseline design is that treatment is applied to only one baseline at a time. The behaviour in the
treated situation should improve; the behaviour in the baseline situations should not improve.
The next step is to apply the treatment in a second situation (treatment may continue in the first
situation as well) but leave the third situation as a continuing baseline. Behaviour should change
only in the treated situation, not in the baseline situation. The final step is to administer the
treatment in the third situation; again, the behaviour should change when the treatment is
administered in the third situation. The key evidence for the effectiveness of the treatment in
the multiple-baseline design is the demonstration of the behaviour changes only when the
treatment is introduced. There are several variations on the multiple-baseline design, depending
on whether multiple baselines are established for different individuals, for different behaviours
in the same individual, or for the same individual in different situations. Although they sound
complex, multiple-baseline designs are frequently used and easily understood. We will describe
each type of multiple baseline design using an applied research example. In the multiple-baseline
design across individuals, baselines are first established for different individuals. When the
behaviour of each individual has been stabilised, an intervention is introduced for one individual,
then for another individual, later for another, and so on. As in all multiple-baseline designs, the
treatment is introduced at a different time for each baseline for each individual. If the treatment
is effective, then a change in behaviour will occur immediately following the application of the
treatment in each individual.

In a multiple baseline design, baseline measures are established and then treatment is
introduced at different times. There are three varieties. Multiple baselines can be established
(a) for the same type of behaviour in several individuals, (b) for several behaviours within the
same individual, or (c) for the same behaviour within the same individual, but in several different
settings. There is no hard and fast rule about the number of baselines established per study,
but three is considered a minimum (Barlow, Nock, & Hersen, 2009). The logic is the same in all
cases. If the behaviour is responding to the treatment program that is being examined in the
study, then the behaviour should change when the program is put into effect, and only then. So,
if three different behaviours are being examined, and the treatment program is introduced for
every behaviour at three different times, then the behaviours should change only after the
program is introduced for each, and not before. If all three behaviours change when the program
is put into effect for the first behaviour, then it is hard to attribute the behaviour change to the
program—the changes in all three behaviours might be the result of history, maturation, perhaps
regression, or some other confound.
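
A purely illustrative Python sketch (hypothetical data, not taken from the lesson) shows this staggered-introduction logic for a multiple-baseline design across three settings; the drop in the behaviour should appear only after the treatment begins in each setting:

settings = {
    # setting: (scores per session, session index at which treatment begins)
    "home":      ([9, 8, 9, 8, 4, 3, 3, 2, 2, 2], 4),
    "classroom": ([7, 8, 7, 8, 7, 3, 2, 2, 2, 1], 5),
    "day-care":  ([8, 9, 8, 8, 9, 8, 8, 3, 2, 2], 7),
}

for name, (scores, start) in settings.items():
    baseline, treatment = scores[:start], scores[start:]
    print(f"{name:10s} baseline mean = {sum(baseline)/len(baseline):.1f}, "
          f"treatment mean = {sum(treatment)/len(treatment):.1f}")
# Because the change appears only after each staggered introduction (and not before),
# history, maturation or regression are unlikely explanations for it.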

In interpretation, the general strategy of all single-subject research is to use the subject as his or her own control. Experimental logic argues that the subject's baseline behaviour would match the behaviour in the intervention phase unless the intervention does something to change it. By this logic, in order to rule out confounds one needs to replicate; it is this within-subject replication that allows functional relationships to be determined. Thus the goal is:

· Demonstration

· Verification

· Replication

Self learning exercise

Identify one research done using single subject design and find out which type of single
subject design was used.

7.1.2 REPRESENTATION OF DATA

The preferred method of presenting the data from single-participant designs is with graphs
that show the results individually for each participant. Rather than testing the significance of the
experimental effects, single-participant researchers employ graphic analysis (also known simply
as visual inspection). Put simply, the single-participant researcher judges whether the independent
variable affected behavior by visually inspecting graphs of the data for individual participants. If
the behavioural changes are pronounced enough to be discerned through a visual inspection of
such graphs, the researcher concludes that the independent variable affected the participant’s
behaviour. If the pattern is not clear enough to conclude that a behavioural change occurred,
the researcher concludes that the independent variable did not have an effect.
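
A minimal plotting sketch (assuming the matplotlib library is available; the data below are hypothetical) illustrates how such a graph might be produced for visual inspection:

import matplotlib.pyplot as plt

scores = [8, 9, 8, 8, 9, 4, 3, 3, 2, 3, 7, 8, 8, 9, 8]   # A (sessions 1-5), B (6-10), A (11-15)
sessions = list(range(1, len(scores) + 1))

plt.plot(sessions, scores, marker="o")
plt.axvline(5.5, linestyle="--")     # boundary between the first A phase and the B phase
plt.axvline(10.5, linestyle="--")    # boundary between the B phase and the second A phase
plt.xlabel("Session")
plt.ylabel("Frequency of target behaviour")
plt.title("ABA data graphed for visual inspection")
plt.show()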

7.1.3 USES

Many of the founders of behavioural science (Weber, Wundt, Pavlov, Thorndike, Hermann Ebbinghaus and others) relied heavily on single-participant approaches.

Single-subject designs are used to study operant conditioning, such as the effect of reinforcement and of the various schedules of reinforcement.

Single-case experimental designs are also used by researchers who study psychophysiological processes, as well as by those who study sensation and perception.

In applied research, single-participant designs have been used most frequently to study the effects of behaviour modification, that is, techniques for changing problem behaviours that are based on the principles of operant conditioning.

Single-participant research has also been used in industrial settings (to study the effects
of various interventions on a worker’s performance, for example) and in schools (to study the
effects of token economies on learning).

Finally, single-participant designs are sometimes used for demonstrational purposes, simply
to show that a particular behavioral effect can be obtained.

7.1.4 CHARACTERISTICS OF SINGLE-SUBJECT EXPERIMENTS


• Researchers manipulate an independent variable in single-subject experiments;
therefore, these designs allow more rigorous control than case studies.

• In single-subject experiments, baseline observations are first recorded to describe what an individual's behavior is like (and predicted to be like in the future) without treatment.

• Baseline behavior and behavior following the intervention (treatment) are compared
using visual inspection of recorded observations.

7.1.5 PROBLEMS AND LIMITATIONS IN SINGLE-SUBJECT DESIGNS


• Interpreting the effect of a treatment can be difficult if the baseline stage shows
excessive variability or increasing or decreasing trends in behavior.

• The problem of low external validity with single-subject experiments can be reduced
by testing small groups of individuals.

As the name implies, single-subject designs are experimental research designs that can
be used with only one participant (or subject) in the entire research study. Single-subject designs
are also commonly called single-case designs. We use the term experimental to describe these
single-subject designs because the designs presented in this chapter allow researchers to
identify relatively unambiguous cause-and-effect relationships between variables. Although these
designs can be used with groups of participants, their particular advantage is that they provide
researchers with an option for data collection and interpretation in situations in which a single
individual is being treated, observed, and measured. This option is especially valuable when
researchers want to obtain cause-and-effect answers in applied situations. For example, a

clinician would like to demonstrate that a specific treatment actually causes a client to make
changes in behavior, or a school psychologist would like to demonstrate that a counseling
program really helps a student in academic difficulty.

Single-subject designs, or single-case designs, are research designs that use the results
from a single participant or subject to establish the existence of cause-and-effect relationships.
To qualify as experiments, these designs must include manipulation of an independent variable
and control of extraneous variables to prevent alternative explanations for the research results.

Historically, most single-subject designs were developed by behaviorists examining operant conditioning. The behavior of a single subject (usually a laboratory rat) was observed, and
changes in behaviour were noted while the researcher manipulated the stimulus or reinforcement
conditions. Although clinicians have adopted the designs, their application is still largely
behavioural, especially in the field of applied behaviour analysis (previously called behaviour
modification). Despite this strong association with behaviourism, however, single-subject research
is not tied directly to any single theoretical perspective and is available as a research tool for
general application. The goal of single-subject research, as with other experimental designs, is
to identify cause-and-effect relationships between variables. In common application, this means
demonstrating that a treatment (variable 1) implemented or manipulated by the researcher
causes a change in the participant’s responses (variable 2). Although single-subject studies are
experimental, their general methodology incorporates elements of non-experimental case studies
and time-series designs. Like a case study, single-subject research focuses on a single individual,
and allows a detailed description of the observations and experiences related to that unique
individual. Like time-series research, the single-subject approach typically involves a series of
observations made over time. Usually, a set of observations made before treatment is contrasted
with a set of observations made during or after treatment. Although single-subject designs are
similar to descriptive case studies and quasi-experimental time-series studies, the designs
discussed in this chapter are capable of demonstrating cause-and-effect relationships and,
therefore, are true experimental designs.

A phase is a series of observations of the same individual under the same conditions.
When no treatment is being administered, the observations are called baseline observations. A
series of baseline observations is called a baseline phase and is identified by the letter A. When
a treatment is being administered, the observations are called treatment observations. A series
of treatment observations is called a treatment phase and is identified by the letter B.

A consistent level occurs when a series of measurements are all approximately the same
magnitude. In a graph, the series of data points cluster around a horizontal line. A consistent
trend occurs when the differences from one measurement to the next are consistently in the
same direction and are approximately of the same magnitude. In a graph the series of data
points cluster around a sloping line. The stability of a set of observations refers to the degree to
which the observations show a pattern of consistent level or consistent trend. Stable data may
show minor variations from a perfectly consistent pattern, but the variations should be relatively
small and the linear pattern relatively clear.
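
As an illustration only (not taken from the lesson), the short Python function below quantifies the level (mean) and trend (least-squares slope) of a phase, which is one informal way of checking whether a baseline looks stable before the treatment phase is introduced:

def level_and_trend(scores):
    # Level = mean of the observations; trend = least-squares slope across sessions.
    n = len(scores)
    xs = list(range(n))
    mean_x = sum(xs) / n
    level = sum(scores) / n
    trend = (sum((x - mean_x) * (y - level) for x, y in zip(xs, scores))
             / sum((x - mean_x) ** 2 for x in xs))
    return level, trend

baseline = [8, 9, 8, 8, 9, 8]          # hypothetical baseline observations
level, trend = level_and_trend(baseline)
print(f"level = {level:.2f}, trend = {trend:.2f} per session")
# A trend near zero and only small variation around the level suggest a stable baseline.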

An ABAB design, also known as a reversal design, is a single-subject experimental design consisting of four phases: (i) baseline phase, (ii) treatment phase, (iii) return-to-baseline phase,
and (iv) second treatment phase. The goal of the design is to demonstrate that the treatment
causes changes in the participant’s behaviour.

7.1.6 DIFFERENCES BETWEEN SINGLE-SUBJECT DESIGNS AND TRADITIONAL DESIGNS

There are three fundamental differences between single-subject designs and traditional
group designs.

1. The first and most obvious distinction is that single-subject research is conducted with
only one participant or occasionally a very small group.

2. Single-subject research also tends to be much more flexible than a traditional group study. A single-subject design can be modified or completely changed in the middle of a study without seriously affecting the integrity of the design, and there is no need to standardize treatment conditions across a large set of different participants.

3. Finally, as noted earlier, single-subject research typically evaluates treatment effects by visual inspection of each participant's graphed data rather than by statistical tests of group averages.

7.1.7 ADVANTAGES OF SINGLE-SUBJECT DESIGNS

The primary strength of single-subject designs is that they allow researchers to establish
cause-and-effect relationships between treatments and behaviours using only a single participant.

Single-subject research allows for the detailed description and individualised treatment of a single participant, and allows a clinician/researcher to establish the existence of a cause-and-effect relationship between the treatment and the participant's responses.

7.1.8 DISADVANTAGES OF SINGLE-SUBJECT DESIGNS

A cause-and-effect relationship is demonstrated for only one participant. This simple fact
leaves researchers with some question as to whether the relationship can or should be
generalised to other individuals.

A second disadvantage of single-subject designs comes from the requirement for multiple, continuous observations.
If the observations can be made unobtrusively, without constantly interrupting or distracting the
participant, then there is no cause for concern. However, if the participant is aware that
observations are continuously being made, this awareness may result in reactivity or sensitization
that could affect the participant’s response. As a result, there is some risk that the participant’s
behaviour may be affected not only by the treatment conditions but also by the assessment
procedures.

7.2 TIME SERIES DESIGN

Often the only way of assessing change is to compare conditions before and after the treatment is introduced (or to determine the amount of treatment absorbed by each participant) and to assess any differential outcomes as a result of these differences. If the only information
program, serious problems of interpreting change are created. Consider, for example, a measure
of the incidence of violent crimes in one state for the year before and the year after the introduction
of a moratorium on capital punishment. Such a single pretest–posttest assessment of change
is impossible to interpret without some knowledge of the degree of fluctuation expected between
two measures in the absence of any true change. The change in rate between the two annual
figures may be interpreted in several different ways. It may represent an actual increase in
crime rate under conditions where capital punishment is removed as a deterrent. On the other
hand, it may simply reflect the normal year-to-year fluctuation in crime rates, which, by chance,
have taken the direction of an increase over this particular time period. Differences in measures
taken before and after the experimental intervention may simply reflect regression back to
normal rates. Some indication of the relative degree of change that occurs after an experimental
treatment may be obtained by comparing the change during the critical period with fluctuations
between comparable time periods prior to the experimental period, that is, by observing the
change within the context of a time-series analysis.

A time-series design has a series of observations for each participant before a treatment
or event and a series of observations after the treatment or event. A treatment is a manipulation
administered by the researcher and an event is an outside occurrence that is not controlled or
manipulated by the researcher. It can be represented as follows:

O O O X O O O

The intervening treatment or event (X) may or may not be manipulated by the researcher.
For example, a doctor may record blood pressure for a group of executives before and after
they complete relaxation training. Or, a researcher may evaluate the effect of a natural disaster
such as earthquake or flood on the wellbeing of a group of students by recording visits to the
school nurse for the months before and after the disaster. In one case the researcher is
manipulating a treatment (the relaxation training) and in the other case the researcher is studying
a non-manipulated event (an earthquake). A study in which the intervening event is not
manipulated by the researcher is sometimes called an interrupted time-series design.
Occasionally, a time-series study is used to investigate the effect of a predictable event such as
a legal change in the drinking age or speed limit. In this case, researchers can begin collecting
data before the event actually occurs. However, it often is impossible to predict the occurrence
of an event such as an earthquake, so it is impossible for researchers to start collecting data
just before one arrives. In this situation, researchers often rely on archival data such as police
records or hospital records to provide the observations for the time-series study.

In a time-series design, the pretest and posttest series of observations serve several
valuable purposes. First, the pretest observations allow a researcher to see any trends that
may already exist in the data before the treatment is even introduced. Trends in the data are an
indication that the scores are influenced by some factor unrelated to the treatment. For example,
practice or fatigue may cause the scores to increase or decrease over time before a treatment
is introduced. Similarly, instrumentation effects, maturation effects, or regression should produce
noticeable changes in the observations before treatment. On the other hand, if the data show
no trends or major fluctuations before the treatment, the researcher can be reasonably sure
that these potential threats to internal validity are not influencing the participants. Thus, the
series of observations allows a researcher to minimize most threats to internal validity. As a
result, the time-series design is classified as quasi-experimental.
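
The following sketch (hypothetical archival counts, not data from the lesson) illustrates the O O O X O O O logic in Python: several observations are compared before and after the event X, instead of relying on a single pretest–posttest pair:

monthly_visits = [30, 32, 31, 33, 32, 34,   # six observations before the event (O O O ...)
                  45, 44, 42, 41, 40, 39]   # six observations after the event (... O O O)
event_index = 6                             # the event X falls between months 6 and 7

pre, post = monthly_visits[:event_index], monthly_visits[event_index:]
print("mean before the event:", round(sum(pre) / len(pre), 1))
print("mean after the event :", round(sum(post) / len(post), 1))
# A clear shift in level at the point of the event, against an otherwise steady
# pre-event series, is easier to attribute to the event than a single before-after
# difference, which could simply reflect normal fluctuation or regression to the mean.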

7.2.1 OUTCOMES

The main advantage of a time series design is that it allows the researcher to evaluate
trends, predictable patterns of events that occur with the passing of time. For example, suppose
you were interested in seeing the effects of a 2-month antismoking campaign on the number of
teenage smokers in a community. The program might include some persuasion techniques,
peer counseling, showing the teens a smoked-out lung or two, and so on. Did the program
work? Suppose there is a reduction in smoking from pre- to posttest; it is hard to evaluate this in the absence of a control group. Yet, even without a control group, it might be possible to see whether the campaign worked if not one but several measures were taken both before and after the program was put in place. This gives a clearer picture of the effectiveness of the programme over a period of time.

Self learning exercise

State a hypothetical problem for which time series design might be appropriate.

7.3 SUMMARY

Here, we examined the characteristics of single-subject designs and time series design.
The general goal of single-subject research, like other experimental designs, is to establish the
existence of a cause-and-effect relationship between variables. The defining characteristic of a
single-subject study is that it can be used with a single individual, by testing or observing the
individual before and during or after the treatment is implemented by the researcher. Single-
subject research designs typically involve measuring the dependent variable repeatedly over
time and changing conditions (e.g., from baseline to treatment) when the dependent variable
has reached a steady or stable state. This approach allows the researcher to see whether
changes in the independent variable are causing changes in the dependent variable. The other design discussed in this lesson is the time-series design, which measures the dependent variable on several occasions before and on several occasions after the quasi-independent variable occurs. This design can also be used in single-subject research.

7.4 KEY WORDS

Quasi experiment: A quasi-experiment exists whenever causal conclusions cannot be drawn because there is less than complete control over the variables in the study, usually because random assignment is not feasible.

True experiment: True experiments include manipulated independent variables, complete control over the variables, and complete randomization.

Single subject design: A research design in applied psychology, education and human behaviour in which the subject serves as his/her own control, rather than another individual or group being used, is called a single-subject design.

Time series design: A time-series design has a series of observations for each participant before a treatment or event and a series of observations after the treatment or event.

7.5 CHECK YOUR PROGRESS


1. Name the two basic single subject research designs.

2. ABA design is also called as _______

3. What are the three goals of single subject design?

4. Single-subject designs were developed by behaviorists examining _____________

5. A __________ design has a series of observations for each participant before a treatment.

6. A study in which the intervening event is not manipulated by the researcher is sometimes called an ________ time-series design.

7.6 ANSWERS TO CHECK YOUR PROGRESS


1. ABA and multiple baseline designs.

2. Reversal designs

3. Demonstration, Verification, Replication

4. Operant conditioning

5. Time series

6. Interrupted

7.7 MODEL QUESTIONS


1. Write in detail an essay on single subject design.

2. Write the advantages and disadvantages of single subject design.

3. Explain the time series research design with special reference to the outcome of it.

REFERENCES

Beins, B. C., & McCarthy, M. A. (2012). Research Methods and Statistics. New Delhi, India: Pearson Education Inc.

Brown, K. W., Cozby, P. C., Kee, D. W., & Worden, P. E. (1999). Research Methods in Human Development. London: Mayfield Publishing Company.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London, England: Routledge.

Cozby, P. C., & Bates, S. C. (2015). Methods in Behavioural Research (12th ed.). New York, NY: McGraw Hill Education.

Goodwin, J. C. (2010). Research in Psychology: Methods and Design (6th ed.). Hoboken, NJ: John Wiley & Sons.

Gravetter, F. J., & Forzano, L-A. B. (2012). Research Methods for the Behavioural Sciences (4th ed.). Belmont, CA: Wadsworth Cengage Learning.

Kramp, M. K. (2004). Exploring life and experience through narrative inquiry. In K. deMarris & S. D. Lapan (Eds.), Foundations for Research: Methods of Inquiry in Education and the Social Sciences (pp. 103-122). NJ: Lawrence Erlbaum Associates.

Leary, M. R. (2001). Introduction to Behavioural Research (3rd ed.). Needham Heights, MA: Allyn and Bacon.

Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2012). Research Methods in Psychology (9th ed.). New York, NY: McGraw Hill Education.

LESSON - 8
MIXED METHOD RESEARCH
INTRODUCTION

Earlier chapters dealt with the different qualitative and quantitative methods of conducting research. In order to make research even more effective and accurate, we may adopt both qualitative and quantitative methods together, which is called mixed method research. However, combining the two methods is not an easy task. The researcher needs to have a good blend of knowledge of both types. Let us look into the intricacies of mixed method research in this lesson.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

 To understand the importance of mixed methods research

 To identify the origin of mixed methods research

 To describe key considerations for mixed methods research

PLAN OF THE STUDY


8.1 Mixed Methods Research

8.2 Origin of Mixed-Methods

8.3 Types of Mixed Method Designs

8.4 Eight Key Considerations

8.5 Credibility

8.6 Trustworthiness

8.7 Summary

8.8 Key Words

8.9 Check your Progress

8.10 Answers to Check your Progress

8.11 Model Questions



8.1 MIXED METHOD RESEARCH

In a primary-level mixed methods study, a researcher collects qualitative and quantitative data directly from the research participants through interviews, observations and questionnaires, and combines these diverse data in a single study. A synthesis-level mixed methods study is a systematic review that applies the principles of mixed methods research. This type of systematic review is referred to as a 'mixed methods research synthesis' (MMRS). The purpose of using a mixed methods approach that combines qualitative and quantitative research elements is to integrate qualitative and quantitative research findings within a single systematic review. In comparison with the primary level, very little attention has been paid to the possibilities of mixing qualitative and quantitative methods at the synthesis level, although we could expect that the synthesis of qualitative and quantitative research elements could lead to a more integrated, differentiated understanding and more insightful outcomes (Creswell & Tashakkori 2007b; Dellinger & Leech 2007; Harden & Thomas 2005, 2010; Hart et al. 2009; Sandelowski et al. 2006; Voils et al. 2008).

It is generally understood that, at the most basic level, quantitative research involves the
collection and analysis of numerical data, whilst qualitative research considers narrative or
experiential data (Hayes et al., 2013). The term ‘mixed methods research’ is broadly accepted
to refer to research that integrates both qualitative and quantitative data within a single study
(Wisdom et al., 2012, Creswell and Clark, 2011). A key aspect of the definition of mixed methods
research is the ‘mixing’ of the qualitative and quantitative components within the study (Simons
and Lathlean, 2010, Maudsley, 2011). ‘Mixing’ refers to the process whereby the qualitative and
quantitative elements are interlinked to produce a fuller account of the research problem
(Glogowska, 2011, Zhang and Creswell, 2013). This integration can occur at any stage(s) of
the research process, but is vital to the rigor of the mixed methods research (Glogowska,
2011).

Mixed-method and multi-method approaches are quite different. Mixed methods research
combines qualitative and quantitative research in a single study, multi-method research involves
data collection using two methods from the same paradigm (e.g. interviews and focus groups,
surveys and medical record audit) (Andrew & Halcomb, 2009). In combining qualitative and
quantitative data collection, mixed methods research capitalises on the strengths of both
qualitative and quantitative research, whilst ameliorating their weaknesses to provide an
integrated comprehensive understanding of the topic under investigation (Scammon et al.,
2013, Wisdom et al., 2012, Andrew & Halcomb, 2009). In contrast to multi-method research,
100

which has only the advantage of collecting data using multiple methods, mixed methods research
has the potential to combine qualitative and quantitative characteristics across the research
process, from the philosophical underpinnings to the data collection, analysis and interpretation
phases.

8.2 ORIGINS OF MIXED-METHODS

Quantitative research (i.e., a positivist paradigm) has historically been the cornerstone of
social-science research, while qualitative purists support a constructivist or interpretivist paradigm.
Mixed methods research is the research paradigm that encourages the combined use of qualitative
and quantitative research elements to answer complex questions (Creswell 2003; Greene 2007;
Johnson and Onwuegbuzie 2004; Onwuegbuzie and Leech 2005; Tashakkori and Creswell
2007; Tashakkori and Teddlie 2003b).

Over the last two decades several authors have proposed typologies of mixed methods
designs at the primary level (Creswell et al. 2003; Creswell and Plano Clark 2007; Tashakkori &
Teddlie 2003). The motives behind the articulation of these typologies are diverse, and include
(1) presenting a flexible organizational structure for mixed methods research, (2) developing
conceptual frameworks that inform and guide the practice of mixed methods inquiry, (3) offering
credibility to the mixed methods field by providing successful examples, (4) providing a common
language for this field, and (5) facilitating and enhancing the instruction of courses in mixed
methods research (Collins and O'Cathain 2009; Greene et al. 1989; Leech and Onwuegbuzie
2009; Teddlie and Tashakkori 2006). These arguments for articulating mixed methods typologies
apply as much to the synthesis level as to the primary level; however, to date no such typology
framework exists for the synthesis level.

8.3 TYPES OF MIXED METHOD DESIGN

The four major types of mixed methods designs are the Triangulation Design, the
Embedded Design, the Explanatory Design, and the Exploratory Design. The following sections
provide an overview of each of these designs: their use, procedures, common variants, and
challenges.

The Triangulation Design

The purpose of the Triangulation Design is "to obtain different but complementary data on the
same topic" (Morse, 1991, p. 122) in order to best understand the research problem. The intent in
using this design is to bring together the differing strengths and non-overlapping weaknesses
of quantitative methods (large sample size, trends, generalization) with those of qualitative
methods (small N, details, in depth) (Patton, 1990). This design is used when a researcher
wants to directly compare and contrast quantitative statistical results with qualitative findings, or
to validate or expand quantitative results with qualitative data. The Triangulation Design is a
one-phase design in which researchers implement the quantitative and qualitative methods
during the same timeframe and with equal weight. It generally involves the concurrent, but
separate, collection and analysis of quantitative and qualitative data so that the researcher may
best understand the research problem. The researcher attempts to merge the two sets of data,
typically by bringing the separate results together in the interpretation or by transforming data
to facilitate integrating the two data types during the analysis. This structure makes the design
intuitive: each type of data can be collected and analyzed separately and independently, using the
techniques traditionally associated with each data type.

The Embedded Design

The Embedded Design is a mixed methods design in which one data set provides a
supportive, secondary role in a study based primarily on the other data type (Creswell, Plano
Clark, et al., 2003). The premises of this design are that a single data set is not sufficient, that
different questions need to be answered, and that each type of question requires different
types of data. Researchers use this design when they need to include qualitative or quantitative
data to answer a research question within a largely quantitative or qualitative study. This design
is particularly useful when a researcher needs to embed a qualitative component within a
quantitative design, as in the case of an experimental or correlational design. In the experimental
example, the investigator includes qualitative data for several reasons, such as to develop a
treatment, to examine the process of an intervention or the mechanisms that relate variables,
or to follow up on the results of an experiment. The Embedded Design mixes the different data
sets at the design level, with one type of data being embedded within a methodology framed by
the other data type (Caracelli & Greene, 1997). The Embedded Design includes the collection
of both quantitative and qualitative data. An Embedded Design can use either a one-phase or a
two-phase approach for the embedded data and the quantitative and qualitative data are used
to answer different research questions within the study (Hanson et al., 2005). This design may
be logistically more manageable for graduate students because one method requires less data
than the other method. The intent of the Embedded Design is not to converge two different data
sets collected to answer the same question. Researchers using an Embedded Design can
keep the two sets of results separate in their reports or even report them in separate papers.

The Explanatory Design

The Explanatory Design is a two-phase mixed methods design. The overall purpose of
this design is to use qualitative data to help explain or build upon initial quantitative results (Creswell,
Plano Clark, et al., 2003). This design is well suited to a study in which a researcher needs
qualitative data to explain significant (or insignificant) results, outlier results, or surprising results
(Morse, 1991). This design can also be used when a researcher wants to form groups based on
quantitative results and follow up with the groups through subsequent qualitative research
(Morgan, 1998; Tashakkori & Teddlie, 1998) or to use quantitative participant characteristics to
guide purposeful sampling for a qualitative phase (Creswell, Plano Clark, et al., 2003). The
Explanatory Design is also known as the Explanatory Sequential Design. The first phase is the
collection and analysis of quantitative data. The second,
qualitative phase of the study is designed so that it follows from (or connects to) the results of
the first quantitative phase. Because this design begins quantitatively, investigators typically
place greater emphasis on the quantitative methods than the qualitative methods. This two-
phase structure makes it straightforward to implement, because the researcher conducts the
two methods in separate phases and collects only one type of data at a time. This means that
single researchers can conduct this design; a research team is not required to carry out the
design.

The Exploratory Design

As with the Explanatory Design, the intent of the two-phase Exploratory Design is that
the results of the first method (qualitative) can help develop or inform the
second method (quantitative) (Greene et al., 1989). This design is based on the premise that
an exploration is needed for one of several reasons: Measures or instruments are not available,
the variables are unknown, or there is no guiding framework or theory. Because this design
begins qualitatively, it is best suited for exploring a phenomenon (Creswell, Plano Clark, et al.,
2003). This design is particularly useful when a researcher needs to develop and test an
instrument because an existing one is not available (Creswell, 1999; Creswell et al., 2004) or to identify important
variables to study quantitatively when the variables are unknown. It is also appropriate when a
researcher wants to generalize results to different groups (Morse, 1991), to test aspects of an
emergent theory or classification (Morgan, 1998), or to explore a phenomenon in depth and
then measure its prevalence. Like the Explanatory Design, the Exploratory Design is a
two-phase approach, and writers also refer to it as the Exploratory Sequential Design (Creswell,
Plano Clark, et al., 2003). This design starts with qualitative data, to explore a phenomenon,
and then builds to a second, quantitative phase. Researchers using this design build on the
results of the qualitative phase by developing an instrument, identifying variables, or stating
propositions for testing based on an emergent theory or framework. These developments connect
the initial qualitative phase to the subsequent quantitative component of the study. Because the
design begins qualitatively, a greater emphasis is often placed on the qualitative data. This
design has two common variants: the instrument development model and the taxonomy
development model. Each of these models begins with an initial qualitative phase and ends
with a quantitative phase. The separate phases make this design straightforward to describe,
implement, and report. Although this design typically emphasizes the qualitative aspect, the
inclusion of a quantitative component can make the qualitative approach more acceptable to
quantitative-biased audiences.

8.4 EIGHT KEY CONSIDERATIONS

Mixed methods research is much more than just collecting qualitative and quantitative
data within a single study. To ensure the rigour of the design, a number of issues must be
considered when applying the mixed methods approach. Eight key considerations in planning
and undertaking mixed methods research are presented here for the novice researcher, namely:

1) examine the rationale for using mixed methods;

2) explore the philosophical approach;

3) understand the various mixed method designs;

4) assess the skills required;

5) review project management considerations;

6) plan and justify the integration of qualitative and quantitative aspects;

7) ensure that rigour is demonstrated;

8) disseminate mixed methods research proudly.

There are two concepts that are important while conducting research especially mixed
method research. They are credibility and trustworthiness.

8.5 CREDIBILITY

Credibility can be defined as the methodological procedures and sources used to establish
a high level of harmony between the participants’ expressions and the researcher’s interpretations
of them. The basic notion with credibility is that both the readers and participants must be able
to look at the research design and have it make sense to them. Questions for the researcher to
consider in relation to credibility include the following: Were the appropriate participants selected
for the topic? Was the appropriate data collection methodology used? Were participant responses
open, complete, and truthful? The credibility of the study could be enhanced by having a larger
focus group, introducing private interviews with the participants, and then providing opportunities
for follow-up interviews as necessary. The researcher can use the following methodological
procedures to increase credibility:

 Time: Establish enough contact with the participants and the context to get the
information one needs.

 Angles: Look at the data from different perspectives and viewpoints to get a holistic
picture of the environment.

 Colleagues: Use support networks knowledgeable in the area to review and critique
the research and data analysis findings.

 Triangulation: Seek out multiple sources of data and use multiple data-gathering
techniques.

 Member checks: Use the participants to make sure that the data analysis is accurate
and consistent with their beliefs and perceptions of the context being studied.

8.6 TRUSTWORTHINESS

In qualitative research, trustworthiness has become an important concept because it allows
researchers to describe the virtues of qualitative research in terms that lie outside the parameters
typically applied in quantitative research. Hence, the concepts of generalizability, internal validity,
reliability, and objectivity are reconsidered in qualitative terms. These alternative terms include
transferability, credibility, dependability, and confirmability. In essence, trustworthiness can be
thought of as the ways in which qualitative researchers ensure that the trustworthiness
components are evident in the research. Moving away from the quantitatively oriented terms
allows qualitative researchers the freedom to describe their research in ways that highlight the
overall rigor of qualitative research without trying to force it into the quantitative model.

To understand the differences between these quantitative and qualitative terms, it is helpful
to compare the parallel concepts. To start, transferability and generalizability can be compared.
Although generalizability refers to situations where research findings can be applied across the
widest possible contexts, transferability reflects the need to be aware of and to describe the
scope of one’s qualitative study so that its applicability to different contexts (broad or narrow)
can be readily discerned. In this way, a study is not deemed unworthy if it cannot be applied to
broader contexts; instead, a study’s worthiness is determined by how well others can determine
(i.e., through a paper trail) to which alternative contexts the findings might be applied. Credibility
and internal validity are also considered to be parallel concepts. A study possesses internal
validity if the researchers have successfully measured what they sought to measure. In contrast,
a credible study is one where the researchers have accurately and richly described the
phenomenon in question. Here, instead of ensuring that one has measured what one set out to
measure, one is making sure that they have accurately represented the data.

The next pair to be considered is objectivity and confirmability. In an objective study, the
data must be considered unbiased. Confirmability, on the other hand, reflects the need to ensure
that the interpretations and findings match the data. That is, no claims are made that cannot be
supported by the data. Finally, reliability-reproducibility and dependability can also be compared.
Findings are considered to be reproducible if they can be replicated exactly when using the
same context and procedure. Achieving reproducibility or reliability in this way can be challenging
for the qualitative researcher who studies the constantly changing social world. As a result,
dependability becomes a more realistic notion in the qualitative context. Here, the researcher
lays out his or her procedure and research instruments in such a way that others can attempt to
collect data in similar conditions. The idea here is that if these similar conditions are applied, a
similar explanation for the phenomenon should be found. In sum, trustworthiness provides
qualitative researchers with a set of tools by which they can illustrate the worth of their project
outside the confines of the often ill-fitting quantitative parameters.

8.7 SUMMARY

Mixed methods research helps researchers to gain a deeper understanding of a complex
topic than the use of either quantitative or qualitative data on its own. Researchers who use
mixed methods, however, should carefully plan their research from a qualitative, a quantitative
and a mixed methods perspective. Mixed methods research can be carried out at both the
primary level and the synthesis level. There are certain key considerations to be kept in mind
while doing mixed methods research; when these are taken care of, mixed methods research
proves to be an effective approach.

8.8 KEYWORDS

Mixed methods research: Mixed methods research is broadly accepted to refer to research
that integrates both qualitative and quantitative data within a single study.

Multi-method approach: Multi-method research involves data collection using two methods
from the same paradigm.

Triangulation: The Triangulation Design is a one-phase design in which researchers implement
the quantitative and qualitative methods during the same timeframe and with equal weight.

Embedded design: The Embedded Design is a mixed methods design in which one data set
provides a supportive, secondary role in a study based primarily on the other data type.

Explanatory design: The purpose of this design is that qualitative data helps explain or build
upon initial quantitative results.

Exploratory design: This design is based on the premise that an exploration is needed when
measures or instruments are not available, the variables are unknown, or there is no guiding
framework or theory.

Credibility: Credibility can be defined as the methodological procedures and sources used to
establish a high level of harmony between the participants' expressions and the researcher's
interpretations of them.

Trustworthiness: Trustworthiness can be thought of as the ways in which qualitative researchers
ensure that transferability, credibility, dependability, and confirmability are evident in their research.

8.9 CHECK YOUR PROGRESS


1. What is the difference between mixed methods research and the multi-method approach?

2. ____________ refers to situations where research findings can be applied across the
widest possible contexts.

3. State the methodological procedures that can be adopted to increase credibility.

4. Name the different types of mixed method design.

8.10 ANSWERS TO CHECK YOUR PROGRESS


1. Mixed methods research is broadly accepted to refer to research that integrates
both qualitative and quantitative data within a single study. On the other hand, multi-
method research involves data collection using two methods from the same
paradigm.

2. Generalizability

3. Time, angles, colleagues, triangulation, member checks

4. Triangulation, Embedded, Explanatory and Exploratory

8.11 MODEL QUESTIONS


1. Write in detail the meaning of mixed method research and its origin

2. What are the key considerations to be considered for conducting mixed methods
research?

3. Write in detail about the four types of mixed methods research designs with their
significance and strengths.

4. Discuss the importance of credibility and trustworthiness in conducting mixed methods
research.

REFERENCES

Creswell, J. W. (1994). Research designs: Qualitative and quantitative approaches.
Thousand Oaks, CA: Sage.

Creswell, J. W., Plano Clark, V. L., Gutmann, M., & Hanson, W. (2003). Advanced mixed
methods research designs. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods
in social and behavioral research (pp. 209–240). Thousand Oaks, CA: Sage.

Denzin, N., & Lincoln, Y. (Eds.). (1994). Handbook of qualitative research. Newbury
Park, CA: Sage Publications.

Given, L. M. (Ed.). (2008). The SAGE Encyclopedia of Qualitative Research Methods.
New Delhi: Sage Publications India Pvt. Ltd.

Halcomb, E. & Hickman, L. (2015). Mixed methods research. Nursing Standard: promoting
excellence in nursing care, 29 (32), 41-47.

Heyvaert, M., Maes, B., & Onghena, P. (2011). Applying mixed methods research at the
synthesis level: An overview. Research in the Schools, 18(1), 12-24.

Heyvaert, M., Maes, B., & Onghena, P. (2013). Mixed methods research synthesis:
Definition, framework, and potential. Quality & Quantity, 47, 659-676. doi:10.1007/s11135-011-9538-6

Terrell, S.R. (2012). Mixed-Methods Research Methodologies. The Qualitative Report,
17(1), 254-280. Retrieved from: http://www.nova.edu/ssss/QR/QR17-1/terrell.pdf

LESSON - 9
SAMPLING
INTRODUCTION

In earlier lessons we discussed qualitative and quantitative research in the social sciences.
Quantitative research is generally done with a larger number of participants, and selecting the
required sample from the universe under study is a difficult task. There are different ways to
select a sample from the population taken for the study, and this lesson brings out the different
sampling methods used in quantitative research. Researchers usually cannot make direct
observations of every individual in the population under study. Instead, they collect data from a
subset of individuals - a sample - and use those observations to make inferences about the
entire population; that is, researchers draw conclusions from the sample and apply them to the
entire population.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

 Understand the importance of a sample

 Differentiate between a sample and a population

 Explain the different sampling methods and their advantages and disadvantages

PLAN OF THE STUDY


9.1 Sample and Population

9.2 Definition of Sample and Population

9.3 Sampling Methods

9.3.1 Non-probability sampling methods

9.3.1.1 Convenience sampling

9.3.1.2 Quota sampling

9.3.1.3 Purposive sampling

9.3.1.4 Snowball sampling

9.3.2 Probability sampling methods

9.3.2.1 Simple random sampling

9.3.2.2 Systematic sampling

9.3.2.3 Stratified random sampling

9.3.2.4 Cluster sampling

9.3.2.5 Proportionate Stratified Random Sampling

9.4 Importance of Sampling

9.5 Sample Size

9.6 Summary

9.7 Key Words

9.8 Check your Progress

9.9 Answers to Check your Progress

9.10 Model Questions

9.1 SAMPLE AND POPULATION

Sampling is the process of selecting units (e.g. people, organisations) from a population
of interest so that, by studying the sample, the results may fairly be generalized back to the
population from which the units were chosen. The goal of any research study is to generalize
the findings obtained from a sample to the population. This generalization depends on the
representativeness of the sample; the degree of representativeness refers to how closely the
sample resembles the population. Thus, a researcher needs to identify a proper method of
obtaining a reasonable representation of the population. To generalize the results of a study to
a population, the researcher must select a representative sample. Before selecting the sample,
however, we need to look into the accessibility of the population or target group, and even when
the population is accessible, the major threat to selecting a representative sample is bias.

The representativeness of a sample refers to the extent to which the characteristics of the
sample accurately reflect the characteristics of the population. A biased sample is a sample
with different characteristics from those of the population. Selection bias or sampling bias occurs
when participants or subjects are selected in a manner that increases the probability of obtaining
a biased sample. Therefore, one can choose a large sample to reduce the bias in selection;
one helpful guide is to review published reports of similar research studies to see how many
participants they used, keeping in mind that a larger sample tends to be more representative.

9.2 DEFINITION OF SAMPLE AND POPULATION

According to Young (1992), "A statistical sample is a miniature picture or cross-section of
the entire group or aggregate from which the sample is taken". According to Goode and Hatt
(1981), "A sample, as the name implies, is a smaller representative of a large whole". According
to Blalock (1960), "It is a small piece of the population obtained by a probability process that
mirrors, with known precision, the various patterns and sub-classes of the population".

Population refers to the whole that includes all observations or measurements of a given
characteristic; it is also called the universe. A population may be finite or infinite. A finite
population is one whose members can all be counted, whereas an infinite population is one
whose size is unlimited and cannot easily be counted. The population of college teachers is an
example of a finite population, while the production of wheat or the fishes in a river are examples
of infinite populations. A measure based upon the entire population is called a parameter. A
sample, on the other hand, is any number of persons selected to represent the population
according to some rule or plan, and a measure based upon a sample is known as a statistic.
Sample size is the number of selected individuals (for example, the number of students or
families) from whom one obtains the required information, and is usually denoted by the letter n.

The objective of sampling is to obtain the desired information about the population at the
minimum cost or with the maximum reliability. Bias in the selection of a sample can take place
if: (a) the researcher selects the sample by a non-random method influenced by human choice;
(b) the researcher does not cover the sampling population accurately and completely; or (c) a
section of the sampling population is impossible to find or refuses to cooperate.

9.3 SAMPLING METHODS

Blalock (1960) indicated that most sampling methods could be classified into two
categories: i) Non probability sampling methods ii) Probability sampling methods

9.3.1 Non Probability Sampling Methods

Non-probability sampling is sampling in which the probability of an element, or group of
elements, of the population being included in the sample is not known. In other words, non-probability
sampling methods are those that provide no basis for estimating how closely the characteristics
of the sample approximate the parameters of the population from which the sample was obtained.
This is because non-probability samples do not use the techniques of random sampling. In
non-probability sampling, the odds of selecting a particular individual are not known because the
researcher does not know the population size and cannot list the members of the population.
Important techniques of non-probability sampling are:

9.3.1.1 Convenience Sampling

The most commonly used sampling method in behavioral science research is convenience
sampling. People are selected on the basis of their availability and willingness to respond.
Convenience sampling is considered a weak form of sampling because the researcher makes
no attempt to know the population or to use a random process in selection. The researcher
exercises very little control over the representativeness of the sample and, therefore, there is a
strong possibility that the obtained sample is biased. The sample selected using this method
are probably not representative of the general population. Despite this major drawback,
convenience sampling is probably used more often than any other kind of sampling. It is an
easier, less expensive, more timely technique than the probability sampling techniques, which
involve identifying every individual in the population and using a laborious random process to
select participants. Finally, although convenience sampling offers no guarantees of a
representative and unbiased sample, this type of sampling is not a flaw. Most researchers use
two strategies to help correct most of the serious problems associated with convenience sampling.
First, researchers try to ensure that their samples are reasonably representative and not strongly
biased. For example, a researcher may select a sample that consists entirely of students from
an Introductory Psychology class at a small college in Chennai. However, if the researcher is
careful to select a broad cross-section of students (males and females, different ages, different
levels of academic performance, and so on), it is sensible to expect this sample to be reasonably
similar to any other sample of college students that might be obtained from other academic
departments or other colleges around the state. The second strategy that helps minimize potential
problems with convenience sampling is simply to provide a clear description of how the sample
was obtained and who the participants are. For example, a research report may state that in a
sample of 100 students, 67 females and 33 males, all between the ages of 18 and 22, was
obtained from the Introductory Psychology class at a large Midwestern State University.

9.3.1.2 Quota Sampling

One method for controlling the composition of a convenience sample is to use some of
the same techniques that are used for probability sampling. In the same way that we used
stratified sampling to ensure that different subgroups are represented equally, quota sampling
can ensure that subgroups are equally represented in a convenience sample. For example, a
researcher can guarantee equal groups of boys and girls in a sample of 30 preschool children
by establishing quotas for the number of individuals to be selected from each subgroup. Rather
than simply taking the first 30 children, regardless of gender, who agree to participate, a quota
is imposed which consists of 15 girls and 15 boys. After the quota of 15 boys is met, no other
boys have a chance to participate in the study. In this example, quota sampling ensures that
specific subgroups are adequately represented in the sample. Specifically, a researcher can
adjust the quotas to ensure that the sample proportions match a predetermined set of population
proportions. For example, a researcher could ensure that a sample contained 30% males and
70% females to match the same proportions that exist in a specific population. We should note
that quota sampling is not the same as stratified and proportionate stratified sampling because
it does not randomly select individuals from the population. Instead, individuals are selected on
the basis of convenience within the boundaries set by the quotas. It also is possible for a
convenience sample to use techniques borrowed from systematic sampling or cluster sampling.
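
To make the quota procedure concrete, a minimal Python sketch is given below: children are taken in the order they become available until a quota of 15 girls and 15 boys is met. The pool of volunteers, the group labels and the helper name quota_sample are illustrative assumptions, not part of the original example.

import random

def quota_sample(volunteers, quotas):
    """Take volunteers in the order they become available until each subgroup quota is filled."""
    counts = {group: 0 for group in quotas}
    sample = []
    for name, group in volunteers:
        if counts.get(group, 0) < quotas.get(group, 0):
            sample.append((name, group))
            counts[group] += 1
        if counts == quotas:  # stop once every quota has been met
            break
    return sample

# Hypothetical pool of 40 preschool children who agree to participate, in arrival order
pool = [(f"child_{i}", random.choice(["girl", "boy"])) for i in range(1, 41)]
print(quota_sample(pool, {"girl": 15, "boy": 15}))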

9.3.1.3 Purposive (Judgmental) Sampling

At times, a researcher may not feel the need to have a random sample. If the investigator
is interested in a particular type of person, somebody with special expertise, the investigator
may try to find as many such people as possible and study them. The result is descriptive
research that may say a lot about this group of experts. For instance, one use of purposive
sampling would be to study a group of the best and the worst workers in a company. The
sample would not be random, but it would give an interesting look at differences in behaviors of
employees in the two extreme categories. The problem with such a sample is the same as with
any other nonprobability sample. This approach is sometimes called purposive (judgmental)
sampling because it relies on the judgment of the researcher and a specific purpose for identifying
participants.

9.3.1.4 Snowball sampling

Snowball sampling is also known as network, chain referral or reputation sampling method.
Snowball sampling which is a non probability sampling method is basically sociometric. It begins
by the collection of data on one or more contacts usually known to the person collecting the
data. At the end of the data collection process (e.g., questionnaire, survey, or interview), the
researcher asks the respondent to provide contact information for other potential respondents.
These potential respondents are contacted and provide more contacts. Snowball sampling is
most useful when there are very few ways to secure a list of the population or when the
population is unknowable. Snowball sampling has some advantages: 1) as a primarily sociometric
sampling technique, it has proved very important and helpful in studying small informal social
groups and their impact upon the formal organisational structure; 2) it reveals communication
patterns in community organisations, and concepts like community power and decision-making
can also be studied with the help of this sampling technique. Snowball sampling also has some
limitations: 1) it becomes cumbersome and difficult when the sample is large, say when it
exceeds 100; 2) this method of sampling does not allow the researcher to use probability
statistical methods. In fact, the elements included in the sample are not randomly drawn; they
depend on the subjective choices of the originally selected respondents, which introduces some
bias into the sampling.

Self learning exercise

State one example of choosing a sample using each non probability sampling method.

9.3.2 Probability Sampling

Probability sampling methods are those that clearly specify the probability or likelihood of
inclusion of each element or individual in the sample. Probability sampling is free of bias in
selecting sample units. It allows the estimation of sampling errors and the evaluation of sample
results in terms of their precision, accuracy and efficiency; hence, the conclusions reached from
such samples can be generalised to the population to which they belong. Probability sampling
has three important conditions: 1. The exact size of the population must be known and it must
be possible to list all of the individuals. 2. Each individual in the population must have a specified
probability of selection. 3. When a group of individuals are all assigned the same probability,
the selection process must be unbiased so that all group members have an equal chance of
being selected. Selection must be a random process, which simply means that every possible
outcome is equally likely. The major probability sampling methods are:

9.3.2.1 Simple random sampling

The starting point for most probability sampling techniques is simple random sampling.
The basic requirement for random sampling is that each individual in the population has an
equal chance of being selected. A simple random sample requires (a) a complete listing of all
the elements, (b) an equal chance for each element to be selected, and (c) a selection process
whereby the selection of one element has no effect on the chance of selecting another element.
For example, if we are to select a sample of 10 students from the seventh grade consisting of
40 students, we can write the names (or roll number) of each of the 40 students on separate
slips of paper – all equal in size and colour – and fold them in a similar way. Subsequently, they
may be placed in a box and reshuffled thoroughly. A blindfolded person, then, may be asked to
pick up one slip. Here, the probability of each slip being selected is 1/40. Suppose that after
selecting the slip and noting the name written on the slip, he again returns it to the box. In this
case, the probability of the second slip being selected is again 1/40. But if he does not return
the first slip to the box, the probability of the second slip becomes 1/39. When an element of the
population is returned to the population after being selected, it is called sampling with replacement
and when it is not returned, it is called sampling without replacement. Thus random sampling
may be defined as one in which all possible combinations of samples of fixed size have an
equal probability of being selected. Advantages of simple random sampling are: 1) each person
has as equal a chance as any other of being selected in the sample; 2) simple random sampling
serves as a foundation against which other methods are sometimes evaluated; 3) it is most
suitable where the population is relatively small and where the sampling frame is complete and
up-to-date; 4) as the sample size increases, the sample becomes more representative of the
universe; 5) this method is least costly and its accuracy is easily assessed. Despite these
advantages, some of the disadvantages are: 1) a complete and up-to-date catalogue of the
universe is necessary; 2) a large sample size is required to establish reliability; 3) when the
geographical dispersion is wide, studying the sampled items involves greater cost and time; 4)
unskilled and untrained investigators may produce wrong results.
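
A minimal Python sketch of the slips-of-paper example above is given below, with random.sample standing in for the blindfolded draw; the roll numbers and sample size follow the example, and the variable names are illustrative.

import random

# The seventh-grade class of 40 students (roll numbers 1 to 40) from the example above
population = list(range(1, 41))

# Sampling without replacement: each roll number can be drawn at most once
without_replacement = random.sample(population, k=10)

# Sampling with replacement: a drawn slip is returned to the box before the next draw
with_replacement = [random.choice(population) for _ in range(10)]

print("Without replacement:", sorted(without_replacement))
print("With replacement:   ", sorted(with_replacement))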

Self learning exercise

How would you select a sample of 100 college students from the city colleges in Chennai
using simple random sampling method?

9.3.2.2 Systematic Sampling

Systematic sampling is a type of probability sampling that is very similar to simple random
sampling. Systematic sampling begins by listing all the individuals in the population, then randomly
picking a starting point on the list. The sample is then obtained by moving down the list, selecting
every nth name. Note that systematic sampling is identical to simple random sampling (i.e.,
follow the three steps) for selection of the first participant; however, after the first individual is
selected, the researcher does not continue to use a random process to select the remaining
individuals for the sample. Instead, the researcher systematically selects every nth name on
the list following the first selection. The size of n is calculated by dividing the population size by
the desired sample size. For example, suppose a researcher has a population of 100 third-
grade students and would like to select a sample of 25 children. Each child’s name is put on a
list and assigned a number from 1 to 100. Then, the researcher uses a random process such as
a table of random numbers to select the first participant; for example, participant number 11.
The size of n in this example is 4 (100/25). Therefore, every fourth individual after participant 11
(15, 19, 23, and so on) is selected. This technique is technically less random than simple random
sampling because the principle of independence is violated. Specifically, if we select participant
number 11, we are biased against choosing participants number 12, 13, and 14, and we are
biased in favor of choosing participant number 15. However, as a probability sampling method,
this method ensures a high degree of representativeness.
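
The following Python sketch illustrates the systematic procedure just described for a population of 100 third-graders and a sample of 25; here the random start is chosen within the first interval, which is one common variant of the method, and the function name is illustrative.

import random

def systematic_sample(population, sample_size):
    """List the population, pick a random start within the first interval, then take every nth member."""
    interval = len(population) // sample_size      # n = 100 / 25 = 4 in the example above
    start = random.randint(0, interval - 1)        # random starting point
    return population[start::interval][:sample_size]

third_graders = [f"student_{i}" for i in range(1, 101)]
print(systematic_sample(third_graders, 25))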

9.3.2.3 Stratified Random Sampling

In stratified random sampling the population is divided into two or more strata, which may
be based upon a single criterion such as sex, yielding two strata-male and female, or upon a
combination of two or more criteria such as sex and graduation, yielding four strata, namely,
male undergraduates, male graduates, female undergraduates and female graduates. These
divided populations are called subpopulations, which are non-overlapping and together constitute
the whole population. Having divided the population into two or more strata, which are considered
to be homogeneous internally, a simple random sample for the desired number is taken from
each population stratum. Thus, in stratified random sampling the stratification of population is
the first requirement. There can be many reasons for stratification in a population. Two of them
are: 1) Stratification tends to increase the precision in estimating the attributes of the whole
population. 2) Stratification gives some convenience in sampling. When the population is divided
into several units, a person or group of person may be deputed to supervise the sampling
survey in each unit. For example, suppose that we plan to select 50 individuals from a large
introductory psychology class and want to ensure that men and women are equally represented.
First, we select a random sample of 25 men from the males in the class and then a random
sample of 25 women from the females. Combining these two subgroup samples produces the
desired stratified random sample. Stratified random sampling is particularly useful when a
researcher wants to describe each individual segment of the population or wants to compare
segments. To do this, each subgroup in the sample must contain enough individuals to adequately
represent its segment of the population.
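
A short Python sketch of the stratified procedure above is given below: a hypothetical class list is split into male and female strata and a simple random sample of 25 is drawn from each; the list sizes and names are illustrative assumptions.

import random

def stratified_sample(strata, per_stratum):
    """Draw a simple random sample of the requested size from every stratum and combine them."""
    sample = []
    for group, members in strata.items():
        sample.extend(random.sample(members, per_stratum))
    return sample

# Hypothetical introductory psychology class list split into two strata
intro_psych = {
    "male":   [f"male_{i}" for i in range(1, 101)],
    "female": [f"female_{i}" for i in range(1, 151)],
}
# 25 men and 25 women, as in the example above
print(stratified_sample(intro_psych, per_stratum=25))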

Advantages of stratified random sampling are: 1) stratified sampling is more representative of
the population, because the formation of strata and the random selection of items from each
stratum make it difficult for any stratum of the universe to be excluded, which increases the
sample's representation of the population or universe; 2) it is more precise and avoids bias to a
great extent; 3) it saves time and the cost of data collection, since the sample size can be
smaller with this method. Despite these advantages, some of the disadvantages of stratified
sampling are: 1) improper stratification may cause wrong results; 2) greater geographical
concentration may result in heavy cost and more time; 3) trained investigators are required for
stratification.

9.3.2.4 Cluster sampling

All of the sampling techniques we have considered so far are based on selecting individual
participants, one at a time, from the population. Occasionally, however, the individuals in the
population are already clustered in preexisting groups, and a researcher can randomly select
groups instead of selecting individuals. For example, a researcher may want to obtain a large
sample of third-grade students from the city school system. Instead of selecting 300 students
one at a time, the researcher can randomly select 10 classrooms (each with about 30 students)
and still end up with 300 individuals in the sample. This procedure is called cluster sampling
and can be used whenever well-defined clusters exist within the population of interest. This
sampling technique has two clear advantages. First, it is a relatively quick and easy way to
obtain a large sample. Second, the measurement of individuals can often be done in groups,
which can greatly facilitate the entire research project. Instead of selecting an individual and
measuring a single score, the researcher can often test and measure the entire cluster at one
time, and walk away with 30 scores from a single experimental session. The disadvantage of
cluster sampling is that it can raise concerns about the independence of the individual scores.
A sample of 300 individuals is assumed to contain 300 separate, individual, and independent
measurements. However, if one individual in the sample directly influences the score of another
individual, then the two scores are, in fact, related and should not be counted as two separate
individuals. If the individuals within a cluster share common characteristics that might influence
the variables being measured, then a researcher must question whether the individual
measurements from the cluster actually represent separate and independent individuals. In
addition, a cluster sample tends to be less accurate than a single-stage sample containing the
same number of units.
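
The sketch below illustrates cluster sampling under the assumptions of the example above: the city school system is represented as 40 hypothetical classrooms of about 30 pupils each, and 10 whole classrooms are selected at random.

import random

# Hypothetical sampling frame: 40 third-grade classrooms, each with about 30 pupils
classrooms = {
    f"class_{c}": [f"class_{c}_pupil_{p}" for p in range(1, 31)]
    for c in range(1, 41)
}

# Randomly select 10 whole classrooms (the clusters), then pool all of their pupils
chosen_clusters = random.sample(list(classrooms), k=10)
cluster_sample = [pupil for c in chosen_clusters for pupil in classrooms[c]]

print(len(cluster_sample), "pupils drawn from clusters:", chosen_clusters)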

9.3.2.5 Proportionate Stratified Random Sampling

Occasionally, researchers try to improve the correspondence between a sample and a


population by deliberately ensuring that the composition of the sample matches the composition
of the population. As with a stratified sample, we begin by identifying a set of subgroups or
segments in the population. Next, we determine what proportion of the population corresponds
to each subgroup. Finally, a sample is obtained such that the proportions in the sample exactly
match the proportions in the overall population. This kind of sampling is called proportionate
stratified random sampling, or simply proportionate random sampling. For example, suppose
that we want our sample to accurately represent gender in the population. If the overall population
contains 75% females and 25% males, then the sample is selected so that it, too, contains 75%
females and 25% males. First, determine the desired size of the sample, then randomly select
from the females in the population until you have a number corresponding to 75% of the sample
size. Finally, randomly select from the males in the population to obtain the other 25% of the
sample. Proportionate random sampling is used commonly for political polls and other major
public opinion surveys in which researchers want to ensure that a relatively small sample provides
an accurate, representative cross-section of a large and diverse population. The sample can be
constructed so that several variables such as age, economic status, and political affiliation are
represented in the sample in the same proportions in which they exist in the population. This
process requires a lot of preliminary measurement before the study actually begins, and it
discards many of the sampled individuals. In addition, a proportionate stratified sample can
make it impossible for a researcher to describe or compare some subgroups or strata that exist
within the population.
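
As a rough Python sketch of proportionate stratified random sampling, the helper below allocates a total sample of 100 across gender strata in the 75 per cent / 25 per cent proportions used in the example; the population lists, proportions and function name are illustrative.

import random

def proportionate_stratified_sample(strata, proportions, total_n):
    """Sample from each stratum so the sample proportions match the population proportions."""
    sample = []
    for group, members in strata.items():
        n_group = round(total_n * proportions[group])   # e.g. 75 females and 25 males when total_n = 100
        sample.extend(random.sample(members, n_group))
    return sample

# Hypothetical population that is 75% female and 25% male
population = {
    "female": [f"female_{i}" for i in range(1, 751)],
    "male":   [f"male_{i}" for i in range(1, 251)],
}
sample = proportionate_stratified_sample(population, {"female": 0.75, "male": 0.25}, total_n=100)
print(sum(s.startswith("female") for s in sample), "females and",
      sum(s.startswith("male") for s in sample), "males")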

Self learning exercise

Write an example for each method of probability sampling technique.

9.4 IMPORTANCE OF SAMPLING

Recent developments in sampling technique have made this method more reliable and
valid, and the results of sampling have attained a sufficiently high standard of accuracy. The
three main advantages of sampling are that the cost is lower, data collection is faster, and,
since the data set is smaller, it is possible to ensure homogeneity and to improve the accuracy
and quality of the data (Ader, Mellenbergh & Hand, 2008).

Planning a sampling strategy:

There are several steps in planning the sampling strategy:

1. Decide whether you need a sample, or whether it is possible to have the whole
population.

2. Identify the population, its important features (the sampling frame) and its size.

3. Identify the kind of sampling strategy you require (e.g. which variant of probability
and non-probability sample you require).

4. Ensure that access to the sample is guaranteed. If not, be prepared to modify the
sampling strategy (step 2).

5. For probability sampling, identify the confidence level and confidence intervals that
you require. For non-probability sampling, identify the people whom you require in
the sample.

6. Calculate the numbers required in the sample, allowing for non-response, incomplete
or spoiled responses, attrition and sample mortality, i.e. build in redundancy.

7. Decide how to gain and manage access and contact (e.g. advertisement, letter,
telephone, email, personal visit, personal contacts/friends).

8. Be prepared to weight (adjust) the data, once collected.

9.5 SAMPLE SIZE

The question of sample size must always be addressed in developing any survey. How
many respondents must we sample to arrive at a reasonable estimate of population values?

Before we can answer this question, however, we must answer a series of other questions,
and once these issues are decided on, the sample size more or less defines itself. As in all
other areas of survey sampling, the decision regarding the size of the sample involves trade-
offs, most of which concern the complementary issues of cost and precision. When the underlying
population is large, precise sample results can be obtained even when the sampling fraction is
very small. What matters most is the absolute size of the sample, rather than the size of the
sample relative to the size of the population. The first decision that must be made in determining
the size of the survey sample concerns the amount of tolerable error. The less precision required
of the results, the smaller the sample needed. Thus, if we wish to obtain extremely precise
findings (i.e., results that will estimate underlying population values with a high degree of
accuracy) it must be understood that we will need to sample more respondents. Cost and
precision go hand in hand. Suppose we wish to estimate the proportion of the population that
possesses a specific trait or characteristic, or holds a given belief or value. Having made the
decision regarding the degree of precision necessary for our purposes, we must make a rough
estimate of the proportion of the population that possess the trait or belief.

The following formula is used for calculating the precise number of people needed for a
finite rather than an infinite population, where n is the size of the sample calculated for an
infinite population and N is the size of the finite population (Berenson, Levine and Krehbiel, 2009):

adjusted n = (n x N) / (n + (N - 1))

We can see that it is the absolute size of the sample that matters if we substitute increasingly
large finite populations into this formula while the sample size remains at, for example, 346.
When carrying out a study we also need to take account of the response rate, that is, the
proportion of the people contacted who will actually take part in the study. It is unlikely that we
will be able to contact everyone, or that everyone we contact will agree to participate. If the
response rate is, say, 70 per cent, then 30 per cent will not take part in the study. Thus, we have
to increase our sample size to 495 people (346 / 0.70 = 494.29), because a 70 per cent response
rate for a sample of 495 yields 346 respondents (0.70 x 495 = 346.50). Often response rates are
much lower than this.
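
A small Python helper, sketched below, applies the finite-population adjustment and the response-rate correction described above; the figures used (n = 346, N = 2000, a 70 per cent response rate) are purely illustrative.

import math

def adjusted_sample_size(n, population_size):
    """Finite-population adjustment: adjusted n = (n x N) / (n + (N - 1))."""
    return (n * population_size) / (n + (population_size - 1))

def inflate_for_response_rate(n, response_rate):
    """Increase the number of people approached so the expected number of responders is still n."""
    return math.ceil(n / response_rate)

n_infinite = 346   # sample size calculated as if the population were infinite
print(round(adjusted_sample_size(n_infinite, population_size=2000)))   # a smaller n suffices when N = 2000
print(inflate_for_response_rate(346, response_rate=0.70))              # about 495 people must be approached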

Power

A test is said to have high power if it results in a high probability that a difference that
exists in reality will be found in a particular study. Power is affected by the alpha level (e.g.,
α = .05), by the size of the treatment effect (effect size), and, especially, by the size of the sample
used in the experiment. This latter attribute is directly under the experimenter’s control, and
researchers sometimes perform a ‘‘power analysis’’ at the outset of a study to help them make
decisions about the best sample size for their study (consult a statistics text for more details on
completing a power analysis). Students are often frustrated when their study ‘‘doesn’t come
out’’ (i.e., no significant differences occur), an outcome that often results from a small sample
size.
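
As an illustration of the power analysis mentioned above, the sketch below uses the statsmodels library (assuming it is installed) to estimate the per-group sample size needed for an independent-samples t test with a medium effect size, an alpha of .05 and power of .80; the particular values are illustrative, not prescribed by the text.

# pip install statsmodels   (assumed to be available)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group for 80% power to detect a medium effect (d = 0.5) at alpha = .05
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))   # roughly 64 participants per group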

9.6 SUMMARY

The goal of the research study is to measure a sample and then generalize the results to
the population. Therefore, the researcher should be careful to select a sample that is
representative of the population. This chapter examines two basic categories of sampling
techniques which are probability and nonprobability sampling. In probability sampling, the odds
of selecting a particular individual are known and can be calculated. Types of probability sampling
are simple random sampling, systematic sampling, stratified sampling, proportionate stratified
sampling, and cluster sampling. In nonprobability sampling, the probability of selecting a particular
individual is not known because the researcher does not know the population size or the members
of the population. Types of nonprobability sampling are convenience, quota, purposive and
snowball sampling. Each sampling method has advantages and limitations, and differs in terms
of the representativeness of the sample obtained.

9.7 KEYWORDS

Sample: A statistical sample is a miniature picture or cross-section of the entire group or
aggregate from which the sample is taken.

Population: Population refers to the whole that includes all observations or measurements
of a given characteristic.

Non-probability sampling: Sampling in which the probability of an element, or group of
elements, of the population being included in the sample is not known.

Convenience sampling: People are selected on the basis of their availability and
willingness to respond.

Quota sampling: Quota sampling ensures that subgroups are represented in a convenience
sample according to fixed quotas.

Purposive sampling: If the investigator is interested in a particular type of person,
somebody with special expertise, the investigator may try to find as many such people as
possible and study them.

Snowball sampling: It begins with the collection of data on one or more contacts usually
known to the person collecting the data, who then refer further contacts.

Probability sampling: Probability sampling methods are those that clearly specify the
probability or likelihood of inclusion of each element or individual in the sample.

Simple random sampling: Sampling in which all possible combinations of samples of a
fixed size have an equal probability of being selected.

Systematic sampling: Systematic sampling begins by listing all the individuals in the
population, then randomly picking a starting point on the list and selecting every nth individual
thereafter.

Stratified random sampling: The population is divided into strata and a simple random
sample of the desired number is taken from each population stratum.

Cluster sampling: When the population is already clustered into preexisting groups, the
researcher randomly selects groups instead of selecting individuals.

9.8 CHECK YOUR PROGRESS


1. _______ sampling methods give an equal chance for all individuals in the universe to
participate in the study.

2. Convenience sampling is the most effective method of choosing a sample. (True/False)

3. Systematic and __________ sampling methods are similar to an extent in choosing
the sample.

4. Sample size is already fixed for any research study. (True/False)

9.9 ANSWERS TO CHECK YOUR PROGRESS

1. Probability sampling

2. False

3. Simple random

4. False

9.10 MODEL QUESTIONS


1. Differentiate between probability and non probability sampling techniques.

2. State the importance of sampling techniques.

3. How do you determine the sample size?

4. State the advantages and disadvantages of various sampling techniques.



REFERENCES

Beins, B. C., & McCarthy, M. A. (2012). Research Methods and Statistics. New Delhi,
India: Pearson Education Inc.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London,
England: Routledge.

Cozby, P. C., & Bates, S. C. (2015). Methods in Behavioural Research (12th ed). New
York, NY: McGraw Hill Education.

Crano, W. D., & Brewer, M. B. (2002). Principles and Methods of Social Research. Mahwah,
NJ: Lawrence Erlbaum Associates Publishers.

Evans, A. N. & Rooney, B. J. (2011). Methods in Psychological Research (2nd ed). New
Delhi, India: Sage Publication.

Goodwin, J. C. (2010). Research in Psychology: Methods and Design. (6th ed). Hoboken,
NJ: John Wiley & Sons.

Gravetter, F. J., & Forzano, L-A. B. (2012). Research Methods for the Behavioural Sciences
(4th ed). Belmont, CA: Wadsworth Cengage Learning.

Howitt, D., & Cramer, D. (2011). Introduction to Research Methods in Psychology. Harlow,
Essex: Pearson Education Ltd.

Leary, M. R. (2001). Introduction to Behavioural Research (3rd ed). Needham Heights,


MA: Allyn and Bacon.

Lovely Professional University. (2012). Research Methodology. Retrieved from:
http://ebooks.lpude.in/management/mba/term_2/DCOM408_DMGT404_RESEARCH_METHODOLOGY.pdf

Mcburney, D. H., & White, T. L. (2006). Research Methods (7th ed). Belmont, CA:
Wadsworth Cengage Learning.

Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches.


New York, NY: Pearson Education Ltd.

Pandey, P. & Pandey, M.M. (2015). Research Methodology: Tools and Techniques. Buzau,
Al. Marghiloman.

Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2012). Research Methods


in Psychology (9th ed). New York, NY: McGraw Hill Education.

Walliman, N. (2011). Research Methods: The Basics. Oxon: Routledge



LESSON - 10
QUALITATIVE METHODS OF DATA COLLECTION
INTRODUCTION

After learning about qualitative and quantitative research in the earlier chapters, we need
to collect data in order to describe, understand and predict. Both qualitative and quantitative
research involve the step of collecting data, and based on the research question as well as the
feasibility of the research one can proceed with data collection. In an earlier lesson we discussed
the different methods of collecting data in quantitative research. This lesson will give an overview
of various methods of collecting qualitative data. Let us look into a few of the qualitative methods
of data collection.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

 Understand what qualitative methods of data collection are

 Explain the use of different methods of data collection in qualitative research

 Understand where to use these methods of collecting data

PLAN OF THE STUDY


10.1 Qualitative Data Collection

10.2 Life Histories

10.3 Documents

10.4 Diaries

10.5 Photographs

10.6 Film and Videos

10.7 Conversation Analysis

10.8 Text

10.9 Summary

10.10 Key Words

10.11 Check your Progress



10.12 Answers to Check your Progress

10.13 Model Questions

10.1 QUALITATIVE DATA COLLECTION

Qualitative data collection methods are often used to improve the quality of survey-based
quantitative methods of data collection. They help to generate hypotheses and to substantiate
the findings of quantitative information. Qualitative methods have a few characteristics:

1. They are mostly open ended and impose less structure on the responses.

2. The data are collected using interactive interviews or focus group discussions.

3. Generally, the findings are not generalizable to any specific population.

4. Data collection takes more time.

5. The data are recorded in a systematic way, through audio, video, notes or any other
suitable means.

Qualitative data collection methods include participant observation, focus group discussion,
interviews, life history and oral history, documents, diaries, photographs, films and videos,
conversation, texts and case studies. In Research Methodology I, we looked into the details of
participant observation, focus groups, interviews and case studies. There are certain other
important methods of data collection, such as life history and oral history, documents, diaries,
photographs, films and videos, conversation and texts, which we shall see in this lesson.

10.2 LIFE HISTORIES

Life histories seek to “examine and analyse the subjective experience of individuals and
their constructions of the social world” (Jones, 1983, p. 147). They assume a complex interaction
between the individual’s understanding of his or her world and that world itself. Thus, one
understands a culture through the history of one person’s development or life within it, a history
told in ways that capture the person’s feelings, views, and perspectives. The life history is often
an account of how an individual enters a group and becomes socialized into it. That history
includes learning to meet the normative expectations of that society by gender, social class,
or age peers. Life histories emphasize the experience of the individual—how the person copes
with society rather than how society copes with the stream of individuals (Mandelbaum, 1973).

Life histories can focus on critical or fateful moments. Indecision, confusion, contradiction, and
irony are captured as nuanced processes in a life (Sparks, 1994). These histories are particularly
helpful in defining socialization and in studying aspects of acculturation and socialization in
institutions and professions. Their value goes beyond providing specific information about events
and customs of the past— as a historical account might—by showing how the individual creates
meaning within the culture. Life histories are valuable in studying cultural changes that have
occurred over time, in learning about cultural norms and transgressions of those norms, and in
gaining an inside view of a culture. They also help capture how cultural patterns evolve and how
they are linked to the life of an individual.

The term life history is sometimes used when, in fact, in-depth interviews are more focused
on respondents’ evolution or development over time. This method has a few strengths.
The first strength of life history methodology is that, because it pictures a substantial portion of
a person’s life, the reader can enter into those experiences. The second is that it provides a
fertile source of testable hypotheses, useful for focusing subsequent studies. The third strength
is that it depicts actions and perspectives across a social group that may be analyzed for
comparative study. Life history as a methodology emphasizes the value of a person’s story and
provides pieces for a mosaic depicting an era or social group. This kind of research requires
sensitivity, caring, and empathy by the researcher for the researched (Cole & Knowles, 2001).

Jones (1983) offers five criteria for life histories. First, the individual should be viewed as
a member of a culture; the life history “describe[s] and interpret[s] the actor’s account of his or
her development in the common-sense world.” Second, the method should capture the significant
role that others play in “transmitting socially defined stocks of knowledge.” Third, the assumptions
of the cultural world under study should be described and analyzed as they are revealed in
rules and codes for conduct as well as in myths and rituals. Fourth, life histories should focus on
the experience of an individual over time so that the “processual development of the person”
can be captured (pp. 153–154). And fifth, the cultural world under study should be continuously
related to the individual’s unfolding life story.

The major criticisms of the life history are that it makes generalizing difficult, offers only
limited principles for selecting participants, and is guided by few accepted concepts of analysis.
Once the researcher acknowledges the possible weaknesses in the method, however, he can
circumvent them. Official records may provide corroborating information or may illuminate aspects
of a culture absent from an individual’s account. The researcher can substantiate meanings

presented in a history by interviewing others in a participant’s life. A life history account can add
depth and evocative illustration to any qualitative study.

10.3 DOCUMENTS

A document is a text-based file that may include primary data (collected by the researcher)
or secondary data (collected and archived or published by others) as well as photographs,
charts, and other visual materials. The documents may be internal to a program or organization
(such as records of what components of an asthma management program were implemented
in schools) or may be external (such as records of emergency room visits by students served
by an asthma management program). Documents may be hard copy or electronic and may
include reports, program logs, performance ratings, funding proposals, meeting minutes,
newsletters, and marketing materials.

Documents constitute the basis for most qualitative research. Primary data documents
(PDDs) include transcriptions of interviews; participant observation field notes; photographs of
field situations taken by the researcher as records of specific activities, rituals, and personas
(with associated locational and descriptive data); and maps and diagrams drawn by the researcher
or by field assistants or participants in a study (with accompanying explanations). These
documents are filed systematically so that they can be readily recovered for classification,
coding, and analysis. Secondary data documents (SDDs) are materials that are important in
describing the historical background and current situation in a community or country where the
research is being conducted. They include maps, demographic data, measures of disparity in
health or educational status (records of differences in types of surgery, disease distribution,
graduation rates, etc.), and de-identified quantitative databases that include variables of interest
to the researcher. Some forms of research, such as studies based on spatial data, rely primarily
on SDDs or secondary databases, which must then be integrated and overlaid in geographic
information system (GIS) software to display hypothesized differences in the distribution of
variables in space.

Historical research also depends heavily or entirely on SDDs. Other types of qualitative
studies, however, do not depend solely on SDDs. Certain types of SDDs can be very helpful at
the start of a study. For example, obtaining well-drawn or digitized maps of a study community
early in a study can assist in the development of study samples and can provide the basis for
orienting the researcher in space. Censuses, other national surveys, and/or local educational
or health databases can be used to explore hypothetical linkages among study domains prior to

the collection of additional qualitative data and can provide guidance in formulating in-depth
interview questions. Archived photographs can be important in illustrating changes in built
environment or life conditions. Historical documents and photographs may be properly archived
in libraries or museums or may be stored in basements or other “unofficial” places, and both
personal and professional relationships may be required to access them. Unlike other types of
SDDs, these materials may be considered as important cultural capital, and care must be taken
in negotiating how they are represented to the public.

10.4 DIARIES

In academic research, diary writing is beneficial in eliciting personal yet structured responses. Diaries have been used in the academic realm to study a large spectrum of human
activities, including but not limited to sexual and dating practices, sleep habits, exercise routines,
television viewing, social activities, food consumption, educational pursuits, eating behaviors,
work interactions, internet habits, leisure activities, cell phone use, travel routines, menstrual
and fertility cycles, and a wide range of physical and mental health events. Diaries are particularly
appropriate in recording routine or everyday processes that are otherwise unnoticed if not
documented. Many qualitative studies use diary analysis to observe, improve, or enhance
people’s practices by tracking their patterns and cycles. Checklists are often used with formats
resembling survey and questionnaire techniques. Such diaries can assist health care
professionals in diagnosing patients’ symptoms, adjusting medication type or dosage, and
ensuring compliance with prescribed medical protocols. Regardless of the discipline in which
they are used, diaries can provide researchers with enlarged and detailed “snapshots” of what
people have experienced.

Although diary formats vary, usually they do not offer open-ended questions but rather
supply participants with a specific set of fixed responses. These optional answers can be in a
dichotomous (yes/no), scaled, or multiple-choice format. Likewise, diaries can be constituted in
the form of logs, ledgers, or calendars. When analyzing diaries, researchers have a variety of
options. Diaries lend themselves to mixed methods while also offering rich subjective data. If
researchers include open-ended questions in diaries, participant responses are usually coded
thematically with an eye toward emerging themes and subthemes.

10.5 PHOTOGRAPHS

Photographs, along with other visual representations such as drawings, cartoons, videos,
and even color swatches, play a variety of roles in qualitative research because they offer a
visual medium in addition to the more common verbal medium. They complement the spoken
word and often enable a richer, more holistic understanding of research participants’ worlds as
well as often act as stimuli, for example, in the development of advertising, packaging, brand
development, and corporate imagery. Broadly, photographs can have a role in two aspects of
research. They can be a form of data gathered from research participants and initiated either by
the researcher or by the research participants. Alternatively, they can be used as a stimulus that
is provided by the researcher to act as a prompt or as a focus of discussion.

Photographs have an important role in broadening the researcher’s understanding of research participants’ lives outside the research context. Many research situations, for example,
focus groups and in-depth interviews, while invaluable forums for gathering research data are
necessarily artificial because researchers are taking people out of their normal context. It is
therefore, useful to develop means of capturing data in a real-life situation and to supplement
data gathered from structured research. Asking participants to carry out a specified task before
the research session, for instance, serves two purposes. It sensitizes them to the topic to be
discussed and it enables them to capture some aspects of their worlds that they can bring into
the research situation to be examined and that the researcher can retain and incorporate into
the analysis and research findings. There is a danger with photographs, as with all visual data,
that they can be seen as self-explanatory, especially as they are often visually very powerful.
However, photographs are a primary source of data that offers the potential to gain insights that
are not accessible through interview methods, and they need analysis and interpretation as
researchers would do with verbal data. This is not to say that photographs cannot have a
secondary role of “bringing the consumer to life” in a subsequent client presentation, but treating
this function as their primary role undersells the potential of these data. The researcher can
also act as photographer, using the photographs as a complementary form of data to the
interviewing itself. This task may involve taking photographs of participants (with their permission
and having explained exactly how the photos will be used), photographing their home or work
environments, their possessions, their family, and so on, depending on the nature of the project.
These photos may be shown to the research participant in a research situation and the meaning
of the content explored, and/or they may be used by the researcher to examine differences
between participants so that generalized themes may be drawn out. In addition, they can be

very useful in helping clients understand the lives and priorities of their target audiences.
Increasingly, videos are used in these situations instead of still photographs, although physically
holding a photograph, which represents a frozen moment in time, can be very effective in
allowing participants or the researcher to reflect on the meaning of the action or setting without
the pressure to move on to the next scene.

The familiar phrase, “A picture speaks a thousand words” is very apt when applied to
certain areas that need to be explored qualitatively, such as imagery, emotional meaning, and
brand identity that, to a large extent, depend on visual understanding. Photographs and other
pictorial stimuli operate at a visual level—a level that is very important for emotional content—
and the form in which, consciously or unconsciously, such content is often stored. Photographs
work because they are in some ways closer than words to the language of emotions. Just as
photographs acting as data can be produced by either the researcher or the research participants,
so photographs acting as stimuli can be pre-prepared and introduced by the researcher into the
research situation; in addition, photographs can be used by participants to express emotions
and to develop concepts or directions related to the specific project.

10.6 FILM AND VIDEOS

Film and video are used in qualitative research as data collection tools, as sources of
information and dialogue between researchers and participants, and as mechanisms for
disseminating research results. The 20th century was the century of film; the 21st is the century
of digital video. The 20th saw major innovations in recording and filmmaking, many applicable
to ethnography. But owing to characteristics of the technology itself, visual approaches never
became a prominent feature of qualitative research. A methodology may be viewed as the
application of a technology to some feature of the world, producing the traces that serve as a
basis for analysis. Current video technology offers a spectacular methodological promise, making
it the first choice for ethnographers of the future. Video is a more robust and transparent data
collection technology. As a reflexive prompt, it can help individuals or groups provide richer
data; and in the hands of subjects, it expands the scope of inquiries, while substantially minimizing
the interviewer effect. Moreover, as a presentation medium, it can be edited to reach both
specialist and lay audiences. This entry begins with a brief history of visual ethnography, followed
by a discussion of the role of technology. It then reviews differences between video for data
collection and presentation, provides a critique of past ethnographic techniques, and ends with
a distinction between documentary and academic films.

10.7 CONVERSATION ANALYSIS

Conversation analysis (CA) has become the established label for a quite specific approach
to the analysis of interaction that emerged during the 1960s. Its basic interest was sociological—
understanding social order. CA research is essentially a data-driven endeavour, so it starts with
the collection of data. Researchers in CA work on audio- or video recordings of interactions that
are “naturally occurring,” meaning that they are not arranged or provoked by the researcher as
in experiments or interviews. For CA as such—”pure CA”—there are in principle, and often in
practice, no further requirements or limitations, although for specialized forms of “applied CA” it
makes sense to collect recordings of specific types of situations. These recordings are then
carefully transcribed using a set of conventions developed by Gail Jefferson. Apart from the
words-as-spoken, these conventions allow the researcher to highlight a range of “production
detail” concerning timing, intonation, and the like that have been proven to be important for the
organization of the interaction. Listening to the recording and reading the transcript, the analyst
tries to understand what the interactants are doing “organizationally” when they speak as they
do. They may, for instance, be requesting information, offering to tell a story, or changing the
topic. Such understandings will be based, first, on the researcher’s membership knowledge as,
one might say, a “cultural colleague” of the speakers. Second, however, the analyst will check
the sequential context and especially the uptake of the utterances in question in subsequent
talk immediately following (e.g., by granting a request) or later in the conversation. However,
understanding the actions is not the purpose of the research but rather a necessary requirement
for the next step, which is to formulate the procedures used to accomplish the actions-as-
understood.

CA’s interest is organizational and procedural. It is often recommended that the researcher
approach data with an open mind, that is, without pre-formulated interests, questions, or
hypotheses (except the general organizational and procedural orientation). The idea is that
inspecting the data in this way will raise an interest in the researcher’s mind that can be used as
a starting point for a more systematic exploration of an emerging analytic theme. The researcher
searches the available data for instances that seem to be similar to the “candidate phenomenon”
that inspired the first formulation of the theme as well as data that seem to point in a different
direction—the so-called deviant case analysis. It may also be useful to collect new data to
expand the analysis. In short, the researcher builds a collection of relevant cases in search of
patterns that help to elucidate some procedural issues.

10.8 TEXT

Text, which in its broadest sense is anything in written form, constitutes the basic medium
through which most qualitative analysis is carried out. Texts for research purposes are generated
in many different ways; some are naturally occurring (e.g., newspaper reports, minutes of
meetings, or policy documents); some are created following the use of research methods such
as semi-structured interviews or focus groups (through audio recording and transcription) or
produced by the researcher (such as field notes within participant observation); and others are
the consequence of a process of “translation” whereby a social phenomenon that is the object
of study is turned into text.

The epistemological status of a text is contingent on the set of assumptions and tenets
underpinning the research endeavour for which it has been generated. In considering a
transcription of a research interview, for example, researchers working within a critical realist
paradigm (e.g., perhaps using grounded theory principles) might accept the transcript as a
reflection of the research participant’s perspectives or views. Those adopting a narrative
methodology will view the product of the interview as a story and will be interested in sequencing
and form. Researchers of postmodern persuasion (e.g., discourse analysts) will treat the text
as constitutive in its own right and reject it as a neutral representation of research participants’
cognitive processes (such as their beliefs or attitudes). The processes whereby the social
phenomena that are the objects of study are transformed into text are also largely determined
by the assumptions underpinning what the text represents. If one views text as the means of
transmitting the spoken word, which itself is a transparent representation of views, beliefs, or
experiences, then the process of transcribing audio recorded material is largely a technical
issue in terms of ensuring that an interview is audible and thus captured on a recording and
then accurately word-processed.

A text prepared for a conversation analysis appears very detailed and may be virtually
incomprehensible to a novice researcher in comparison to a transcription of an interview
generated for a grounded theory study. Traditionally, the process and findings of research have
been represented as scholarly texts such as academic papers, books, or theses. However, the
critical turn within social research means that more researchers are exploring different ways of
presenting their research products, such as alternative textual forms (poems, stories, web-
based material), visual forms (such as photographs and pictures), or performative media (such
as drama).

10.9 SUMMARY

Qualitative data collection methods are used to improve the quality of survey based
quantitative methods of data collection. It helps to generate hypothesis and to substantiate the
findings of quantitative information. There are certain other important methods of data collection
such as Life history and oral history, Documents, Diaries, Photographs, Films and videos,
Conversation and Texts. Based on the need, availability, accessibility and feasibility, an appropriate data collection method can be used. One needs to be well versed in qualitative research in order to collect, process and analyse the data into meaningful inferences.

10.10 KEY WORDS

Life Histories: Life histories help one understand a culture through the history of one
person’s development or life within it, a history told in ways that capture the person’s feelings,
views, and perspectives.

Document: A document is a text-based file that may include primary data (collected by
the researcher) or secondary data (collected and archived or published by others) as well as
photographs, charts, and other visual materials.

Diaries: Diaries have been used in the academic realm to study a large spectrum of
human activities.

Photographs: Photographs offer a visual medium in addition to the more common verbal
medium.

Conversation analysis: CA works on audio or video recordings of interactions that are “naturally occurring,” meaning that they are not arranged or provoked by the researcher as in
experiments or interviews.

Text: Text, which in its broadest sense is anything in written form, constitutes the basic
medium through which most qualitative analysis is carried out.

10.11 CHECK YOUR PROGRESS

1. ___________ method helps in understanding the acculturation and socialization of institutions and professions.

2. ________ method of data helps to develop means of capturing data in a real-life situation
and to supplement data gathered from structured research.

3. _________ documents are materials that are important in describing the historical
background and current situation in a community

10.12 ANSWERS TO CHECK YOUR PROGRESS

1. Life histories

2. Photographs

3. Secondary Data documents

10.13 MODEL QUESTIONS

1. Write a detailed note on the various methods of data collection in qualitative research.

2. Explain how each method is unique in collecting qualitative information.

REFERENCES

Atkinson, R. (1998). The life story interview. Thousand Oaks, CA: Sage.

Chessman, C. (1954). Cell 2455 death row. Englewood Cliffs, NJ: Prentice Hall.

Gluck, S. B., & Patai, D. (Eds.). (1991). Women’s words: The feminist practice of oral
history. New York: Routledge.

Mandelbaum, D. G. (1973). The study of life history: Gandhi. Current Anthropology, 14, 177–207.

Martin, R. R. (1995). Oral history in social work: Research, assessment, and intervention. Thousand Oaks, CA: Sage.

Miller, R. L. (1999). Researching life stories and family histories. Thousand Oaks, CA:
Sage.

Slim, H., & Thompson, P. (1995). Listening for a change: Oral testimony and community
development. Philadelphia: New Society Publishers.

Thompson, P. R. (2000). The voice of the past: Oral history (3rd ed.). Oxford, UK: Oxford University Press.

Yow, V. R. (1994). Recording oral history: A practical guide for social scientists. Thousand Oaks, CA: Sage.

Given, L. M. (Ed.). (2008). The SAGE Encyclopedia of Qualitative Research Methods. New Delhi: Sage Publications India Pvt. Ltd.

Centers for Disease Control and Prevention, Department of Health and Human Services. (2009). Data Collection Methods for Evaluation: Document Review. Retrieved from https://www.cdc.gov/healthyyouth/evaluation/pdf/brief18.pdf

LESSON - 11
PARAMETRIC STATISTICS
INTRODUCTION

After collecting the data using an appropriate sampling technique, the researcher is left with the data and needs a way to comprehend it. This large body of data has to be handled carefully to avoid misinterpretation. Quantitative data require rigorous processing before the output is communicated, since the numbers speak for the sample chosen. The data can be analysed using many statistical tools. Before looking into the statistical analyses, we need to understand the difference between parametric and non-parametric statistics.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

•  Understand the difference between parametric and non-parametric statistics

•  Explain the utility of different types of parametric tests and their functions.

PLAN OF THE STUDY


11.1 Statistics

11.2 Parametric Statistics

11.2.1 One Way Anova

11.2.2 Two Way Anova

11.2.3 ‘t’ test

11.2.3.1 Paired t test (t-test for dependent groups, correlated t test)

11.2.3.2 ‘t’ test for Independent Samples (with two options)

11.3 Pearson Product Moment Correlation

11.4 Regression

11.5 Summary

11.6 Key Words

11.7 Check your Progress



11.8 Answers to Check your Progress

11.9 Model Questions

11.1 STATISTICS

The two major types of statistical methods are parametric and non-parametric statistics. Before understanding what a parameter is, we shall look into the difference between a population and a sample. A population is a group of phenomena that possesses some common features. A sample is a smaller group of members of a population selected to represent the population. A parameter is a characteristic of a population, while a statistic is a characteristic of a sample. Inferential statistics helps us to make predictions about population parameters using statistics applied to the sample drawn from the population. Most populations display a large number of more or less ‘average’ cases, with extreme cases tailing off at each end; calculations of parametric statistics are based on this feature. Not all data are parametric, i.e. populations sometimes do not behave in the form of a normal probability curve. Data measured by nominal and ordinal methods will not be organized in a curve form. Nominal data tend to fall into groups, while ordinal data can be displayed in the form of a set of steps (e.g. the first, second and third positions in a competition). For those cases where this property is absent, non-parametric statistics may be applicable. Non-parametric statistical tests have been devised to recognize the particular characteristics of non-curve data and of categorical variables. Parametric tests are more power efficient; however, the data are expected to be homogeneous in nature and also normally distributed.

11.2 PARAMETRIC STATISTICS

Parametric tests are designed to represent the wide population, e.g. of a country or age
group. They make assumptions about the wider population and the characteristics of that wider
population. There are two classifications of parametric statistical tests: descriptive and inferential. Descriptive tests reveal the description and details of the data and the normality of its distribution. Inferential tests help infer the results from a sample in relation to a population. There are three types of analysis, namely univariate, bivariate and multivariate analysis. Univariate analyses are descriptive, bivariate analyses examine the relationship between two variables, and multivariate analyses establish relationships among more than two variables. Now let us look into a few parametric statistical analyses in this lesson and a few non-parametric statistics in the next lesson.

11.2.1 ONE WAY ANOVA

The one-way analysis of variance (ANOVA) is used to find out whether there are any statistically significant differences between the means of two or more independent (unrelated) groups (although it tends to be used only when there are a minimum of three, rather than two, groups). For example, one-way ANOVA can be used to compare religious commitment among three or four religious groups. However, this analysis will only indicate whether or not a difference exists among the groups; further analyses are required to find out which group differs from the others. The assumptions of one-way ANOVA are listed below, followed by a brief illustrative sketch.

o Assumption 1: The dependent variable needs to be a continuous variable measured on an interval or ratio scale. Examples of such variables include height, weight, intelligence, personality, anxiety, etc.

o Assumption 2: The independent variable needs to be categorical and should consist of two or more groups, for example three or more socio-economic groups, religions, professions, etc.

o Assumption 3: A participant in one group should not be a participant in another group; therefore, the observations need to be independent of each other. If the observations are not independent, a different analysis is used.

o Assumption 4: Data need to be normally distributed without being skewed or kurtic.

o Assumption 5:  There should be homogeneity of variance in the data.
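As an illustration of how such an analysis might be run in practice, the following is a minimal sketch using Python's scipy library (assuming it is available); the group scores are invented purely for demonstration:

# Minimal sketch: one-way ANOVA comparing scores across three
# independent groups (invented illustrative data).
from scipy import stats

group_a = [12, 15, 14, 10, 13]
group_b = [18, 20, 17, 19, 16]
group_c = [11, 9, 12, 10, 13]

f_value, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_value:.2f}, p = {p_value:.3f}")
# A p value below .05 suggests that at least one group mean differs;
# a post hoc test is then needed to locate which groups differ.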

11.2.2 TWO WAY ANOVA

The two-way ANOVA is used to identify mean differences between groups when there are two independent variables (called factors). The main purpose of this test is to find out whether there is any interaction between the two independent variables on the dependent variable. For example, we could use a two-way ANOVA to find out whether there is an interaction between gender (male and female) and family type (nuclear and joint) on adjustment among school children. Interaction here means that gender combined with a particular family type has more impact on adjustment than either independent variable on its own. There are certain assumptions for using a two-way ANOVA, listed below with an illustrative sketch following.

o Assumption 1: The dependent variable should be a continuous variable.

o Assumption 2: The two independent variables need to be categorical, or we need to categorise them using a median or mean cut-off.

o Assumption 3: The observations should be independent in each group.

o Assumption 4: Data should be normally distributed without any significant outliers.

o Assumption 5: There should be homogeneity of variances for each combination of the groups of the two independent variables.
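A minimal sketch of such a two-way design using the Python statsmodels and pandas libraries (assuming they are available) is given below; the adjustment scores are invented for illustration only:

# Minimal sketch: two-way ANOVA for gender x family type on adjustment
# (invented illustrative data held in a pandas DataFrame).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "adjustment": [24, 27, 22, 30, 26, 21, 25, 28],
    "gender": ["M", "M", "M", "M", "F", "F", "F", "F"],
    "family": ["Nuclear", "Joint", "Nuclear", "Joint",
               "Nuclear", "Joint", "Nuclear", "Joint"],
})

model = ols("adjustment ~ C(gender) * C(family)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and the interaction term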

11.2.3 ‘t’ test:

The t test is an important type of statistical analysis in inferential statistics. It is used to determine whether there is a significant difference between the means of two groups. When the difference between two population averages is being investigated, a t test is used. The t test is used when the dependent variable is continuous and the independent variable has two groups. We would use a t test, for example, if we wished to compare the IQ level of boys and girls. This test is also used as a post hoc test after ANOVA: ANOVA only indicates whether there is any difference among three or more groups, and in order to find which group differs from the others, a ‘t’ test can be used.

· A t-test is a type of inferential statistic used to determine if there is significant


difference between the means of two groups, which may be related in certain features.

· The t-test is one of many tests used for the purpose of hypothesis testing in statistics.

· Calculating a t-test requires three key data values. They include the difference
between the mean values from each data set (called the mean difference), the
standard deviation of each group, and the number of data values of each group.

With a t test, the researcher wants to state with some degree of confidence that the obtained difference between the means of the sample groups is too great to be a chance event and that some difference also exists in the population from which the sample was drawn. In other words, the difference that we might find between the boys’ and girls’ reading achievement in our sample might have occurred by chance, or it might exist in the population. If our t test produces a t-value that results in a probability of .01, we say that the likelihood of getting the difference we found by chance would be 1 in 100. We could say that it is unlikely that our results occurred by chance and that the difference we found in the sample probably exists in the populations from which it was drawn. The following are the assumptions of the ‘t’ test.

o Assumption 1: The dependent variable should be a continuous variable.


o Assumption 2: There should be homogeneity of variances for the two groups.
o Assumption 3: The scores in the population are normally distributed.

There are many types of ‘t’ test. Let us look into two frequently used ‘t’ tests.

11.2.3.1 Paired t test (t-test for dependent groups, correlated t test)

This is concerned with the difference between the average scores of a single sample of
individuals who are assessed at two different times (such as pre and post intervention). It can
also compare average scores of samples of individuals who are paired in some way (such as
relation, characteristics, etc). The correlated t-test is performed when the samples typically
consist of matched pairs of similar units, or when there are cases of repeated measures. For
example, there may be instances of the same patients being tested repeatedly - before and
after receiving a particular treatment. In such cases, each patient is being used as a control
sample against themselves. This method also applies to cases where the samples are related
in some manner or have matching characteristics, like a comparative analysis involving children,
parents or siblings. Correlated or paired t-tests are of a dependent type, as these involve cases
where the two sets of samples are related.

The paired t value is the critical ratio for identifying the difference between the means of two sets of correlated data:

t = D̄ / (s_D / √n)

where D̄ is the mean of the difference scores between the two sets of correlated data, s_D is the standard deviation of the difference scores, s_D / √n is the standard error of the mean difference, and n is the number of pairs. After calculating the t value, it is compared with the table value in order to find out whether the difference between the two correlated samples is significant.
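A minimal sketch of a paired t test using Python's scipy library (assuming it is available; the pre- and post-intervention scores are invented for illustration):

# Minimal sketch: paired (correlated) t test on pre- and post-intervention
# scores from the same participants (invented data).
from scipy import stats

pre = [14, 18, 20, 15, 17, 19, 16, 18]
post = [18, 21, 23, 16, 20, 22, 19, 21]

t_value, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_value:.2f}, p = {p_value:.3f}")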

11.2.3.2 ‘t’ test for Independent Samples (with two options)

This is concerned with the difference between the averages of two populations. Basically,
the procedure compares the averages of two samples that were selected independently of
each other, and asks whether those sample averages differ enough to believe that the populations
from which they were selected also have different averages. An example would be comparing
social skills of an experimental group with a control group.

Higher values of the t-value, also called t-score, indicate that a large difference exists
between the two sample sets. The smaller the t-value, the more similarity exists between the
two sample sets.

t = (X̄₁ − X̄₂) / √(s₁²/n₁ + s₂²/n₂)

where t is the critical value, X̄₁ and X̄₂ are the means of the first and second sets of data, s₁² and s₂² are their variances, and n₁ and n₂ are the numbers of participants in the two groups. After calculating the t value, it should be compared with the table value; only if it is more than the table value is the difference between the two groups significant.
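A minimal sketch of an independent-samples t test using Python's scipy library (assuming it is available; the scores are invented for illustration):

# Minimal sketch: independent-samples t test comparing social skills scores
# of an experimental group and a control group (invented data).
from scipy import stats

experimental = [32, 35, 30, 38, 34, 36]
control = [28, 27, 31, 26, 30, 29]

t_value, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_value:.2f}, p = {p_value:.3f}")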

Self learning exercise:

Give one example each for Paired ‘t’ test and Independent sample ‘t’ test.

11.3 PEARSON PRODUCT MOMENT CORRELATION

The Pearson product-moment correlation coefficient is a measure of the strength of a linear relationship between two variables and is denoted by r. The Pearson correlation
coefficient, r, can take a range of values from +1 to -1. A value of 0 indicates that there is no
relationship between the two variables. A value greater than 0 indicates a positive relation; that
is, as the value of one variable increases, the value of the other variable also increases. A value
less than 0 indicates a negative relationship; that is, as the value of one variable increases, the value of the other variable decreases.

The closer the value is to -1 or +1, the stronger the relationship between the two variables; whether the relationship is positive or negative is indicated by the + or - sign, which shows the direction of the relationship. An important point to clarify is that Pearson’s correlation will not indicate which variable is dependent and which is independent, i.e. this analysis will not reveal cause and effect among the variables under study. For example, suppose r = -.67 between test anxiety and academic performance. That is, as test anxiety increases, academic performance tends to be lower. However, this does not reveal whether poor academic performance resulted in test anxiety or vice versa.

The formula for Pearson’s r is:

r = [NΣXY − (ΣX)(ΣY)] / √{[NΣX² − (ΣX)²][NΣY² − (ΣY)²]}

where X and Y represent the two sets of values for the two continuous variables to be correlated and N is the total number of pairs of scores. After calculating the r value, it should be compared with the table value; only if it is more than the table value is the relationship between the two variables significant.
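A minimal sketch of computing Pearson's r using Python's scipy library (assuming it is available; the anxiety and performance scores are invented for illustration):

# Minimal sketch: Pearson product-moment correlation between test anxiety
# and academic performance (invented data; a negative r is expected here).
from scipy import stats

anxiety = [20, 25, 30, 35, 40, 45, 50]
performance = [88, 82, 75, 70, 66, 60, 55]

r_value, p_value = stats.pearsonr(anxiety, performance)
print(f"r = {r_value:.2f}, p = {p_value:.3f}")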

Self learning exercise

Take two sets of values and find whether it is significantly related to each other using
Pearson’s correlation method.

11.4 REGRESSION

Regression is a statistical technique used to determine the strength of the relationship between one dependent variable (usually denoted by Y) and a series of other changing variables (known as independent variables). It is one of the more advanced analyses used to identify cause and effect among the variables under study and the contribution of each independent variable towards the dependent variable.

The two basic types of regression are linear regression and multiple linear regression.
Linear regression uses one independent variable to explain or predict the outcome of the
dependent variable Y, while multiple regression uses two or more independent variables to
predict the outcome.

The general form of each type of regression is:

· Linear regression: Y = a + bX + u

· Multiple regression: Y = a + b1X1 + b2X2 + b3X3 + ... + btXt + u

Where:

· Y = the variable that you are trying to predict (dependent variable).

· X = the variable that you are using to predict Y (independent variable).

· a = the intercept.

· b = the slope.

· u = the regression residual.
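A minimal sketch of a simple linear regression using Python's scipy library (assuming it is available; the study-hours and exam-score values are invented for illustration and correspond to X and Y in the formula above):

# Minimal sketch: simple linear regression predicting exam score (Y)
# from study hours (X); slope b and intercept a as in the formula above.
from scipy import stats

study_hours = [2, 4, 5, 6, 8, 9, 11]
exam_score = [52, 58, 60, 65, 71, 74, 80]

result = stats.linregress(study_hours, exam_score)
print(f"Y = {result.intercept:.2f} + {result.slope:.2f} * X, "
      f"R-squared = {result.rvalue ** 2:.2f}")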

11.5 SUMMARY

This chapter helped us to understand why, when and where to use the parametric statistics.
Parametric tests are designed to represent the wide population, e.g. of a country or age group.
The different types of statistics that we saw in this chapter were one-way ANOVA, two-way ANOVA, the t test, correlation and regression. The first three are used to find significant differences between groups: the first two analyses are used for comparing more than two groups, and the third is for comparing two groups on a dependent variable. The other two analyses, namely correlation and regression, are used to find the relationship among variables and to identify the predictors of the dependent variable.

11.6 KEYWORDS

Parameter: A parameter is a characteristic of a population.

Statistics: A statistic is a characteristic of a sample.

Population: A population is a group of phenomena that possesses some common features.

Sample: A sample is a smaller group of members of a population selected to represent the population.

Parametric statistics: They make assumptions about the wider population and the
characteristics of that wider population. It helps to take the inference from the sample to the
population.

One way ANOVA: The one-way analysis of variance (ANOVA) is used to find out whether there are any statistically significant differences between the means of two or more independent groups.

Two way ANOVA: The two-way ANOVA is used to identify mean differences between groups when there are two independent variables (called factors).

‘t’ test: ‘t’ test is used to determine whether there is a significant difference between the
means of two groups.

Pearson’ Correlation: This is used to identify the relationship between two variables.

Regression: This analysis helps to predict the causative factors of the dependent variable.

11.7 CHECK YOUR PROGRESS

1. _________test is used to find the difference between two groups.

2. _______ ANOVA is used to find the difference between groups and if there are two
groups in independent variables.

3. Pearson correlation is used to find the relationship between two _______ variables.

4. Regression is used to find the _________ of dependent variables.

11.8 ANSWERS TO CHECK YOUR PROGRESS


1. ‘t’

2. Two way

3. Continuous

4. Predictors

11.9 MODEL QUESTIONS


1. State the advantages of parametric statistics over non parametric statistics

2. Explain the various parametric statistics used to find the difference between means
of groups.

3. What is the difference between correlation and regression? State the meaning and
utility of both.

REFERENCES

Aron, A., Coups, E.J., & Aron, E.N. (2013). Statistics for Psychology. New Jersey: Pearson
Education Inc.

Dowdy, S., Wearden, S., & Chilko, D. (2004) . Statistics for research. New Jersey: A John
Wiley & Sons, Inc. Publication.

Howell, D.C. (2010). Statistics methods for psychology. Belmont, CA: Wadsworth, Cengage
Learning.

Jackson, S. L. (2009). Research Methods and Statistics: A Critical Thinking Approach. Belmont, CA: Wadsworth Cengage Learning.

LESSON - 12
NON-PARAMETRIC STATISTICS
INTRODUCTION

In the previous lesson we learnt about a few parametric statistics and their uses. When the data are not normally distributed, or if the sample is not equally distributed between groups, we may not be able to apply parametric statistics. In order to obtain the best out of the data collected, a few statistical tools are available to analyse these types of data as well. Let us see a few of these statistical tools, which broadly come under the umbrella of non-parametric statistics.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

•  Understand the importance of non-parametric statistics

•  Know when and how to use non-parametric statistics

•  Explain the criteria for using each statistical tool.

PLAN OF THE STUDY


12.1 Non-Parametric Statistics

12.1.1 Wilcoxon Test

12.1.2 Mann Whitney U Test

12.1.3 Kruskal-Wallis H Test

12.1.4 Friedman Test

12.1.5 Chi-Square

12.1.6 Rank Order

12.2 Summary

12.3 Key Words

12.4 Check Your Progress

12.5 Answers to Check your Progress

12.6 Model Questions



12.1 NON-PARAMETRIC STATISTICS

Non-parametric data cannot be statistically tested in the ways described in the previous lesson. Non-parametric statistical tests are used when:

1. the sample size is very small;

2. few assumptions can be made about the data;

3. data are rank ordered or nominal;

4. samples are taken from several different populations.

The levels of measurement of the variables, the number of samples, whether they are
related or independent are all factors which determine which tests are appropriate. Non-
parametric statistics are appreciated for their utility in small samples because they do not make
any assumptions about how normal, even and regular the distributions of scores will be.
Furthermore, computation of statistics for non-parametric tests is less complicated than that for
parametric tests. On the other hand, parametric tests are more powerful than nonparametric
tests because they not only derive from standardized scores but also enable the researcher to
compare sub-populations with a whole population.

12.1.1 WILCOXON TEST

The Wilcoxon test, which refers to either the Rank Sum test or the Signed Rank test, is a nonparametric statistical test that is used to compare two paired groups. The test essentially calculates the difference between each set of pairs and bases its analysis on these differences. The basic assumptions necessary to employ this method of testing are that the data are from the same population and are paired, the data can be measured on at least an interval scale, and the data were chosen randomly and independently.

Non-parametric distributions do not have parameters and cannot be defined by an equation as parametric distributions can. The model assumes that the data come from two matched, or dependent, populations. The data are also assumed to be continuous. Because it is a non-parametric test, it does not require a particular probability distribution of the dependent variable in the analysis. The Signed Rank test can be used as an alternative to the paired t-test when the population data do not follow a normal distribution.

The steps for arriving at a Wilcoxon Signed-Ranks Test Statistic, W, are as follows:

1. For each item in a sample of n items, obtain a difference score Di between two


measurements (i.e., subtract one from the other).

2. Neglect the positive or negative signs and obtain a set of n absolute differences |Di|.

3. Omit difference scores of zero, giving you a set of n′ non-zero absolute difference scores, where n′ ≤ n. Thus, n′ becomes the actual sample size.

4. Then, assign ranks Ri from 1 to n to each of the |Di| such that the smallest absolute


difference score gets rank 1 and the largest gets rank n. If two or more |Di| are
equal, they are each assigned the average rank of the ranks they would have been
assigned individually had ties in the data not occurred.

5. Now reassign the symbol “+” or “–” to each of the n ranks Ri, depending on whether
Di was originally positive or negative.

6. The Wilcoxon test statistic W is subsequently obtained as the sum of the positive ranks.
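A minimal sketch of the Wilcoxon signed-rank test using Python's scipy library (assuming it is available; the before and after scores are invented for illustration):

# Minimal sketch: Wilcoxon signed-rank test on two paired sets of scores,
# as a non-parametric alternative to the paired t test (invented data).
from scipy import stats

before = [10, 12, 9, 14, 11, 13, 8, 12]
after = [13, 15, 10, 14, 13, 16, 9, 15]

w_value, p_value = stats.wilcoxon(before, after)
print(f"W = {w_value:.2f}, p = {p_value:.3f}")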

12.1.2 MANN WHITNEY U TEST

The Mann-Whitney U test is a non-parametric test that can be used in place of an unpaired
t-test. It is used to test the null hypothesis that two samples come from the same population (i.e.
have the same median) or, alternatively, whether observations in one sample tend to be larger
than observations in the other. Although it is a non-parametric test it does assume that the two
distributions are similar.

The Mann-Whitney U test is used to compare differences between two independent groups
when the dependent variable is either ordinal or continuous, but not normally distributed. For
example, Mann-Whitney U test is used to understand whether attitudes towards pay
discrimination, where attitudes are measured on an ordinal scale, differ based on gender.
Alternately, Mann-Whitney U test can be used to understand whether salaries, measured on a
continuous scale, differed based on educational level (i.e., your dependent variable would be
“salary” and your independent variable would be “educational level”, which has two groups:
“high school” and “university”). The Mann-Whitney U test is often considered the nonparametric
alternative to the independent t-test although this is not always the case.
The U statistic is computed for each sample as:

U = R − n(n + 1)/2

where R is the sum of ranks in the sample and n is the number of items in that sample; the smaller of the two U values is compared with the table value.

There are a few assumptions for the Mann-Whitney U test too. They are:

o Assumption 1: The dependent variable should be measured on a continuous or ordinal scale.

o Assumption 2: The independent variable should be categorical with two groups.

o Assumption 3: The scores in the population are not required to be normally distributed.

o Assumption 4: The observations in the two groups ought to be independent.
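A minimal sketch of a Mann-Whitney U test using Python's scipy library (assuming it is available; the salary figures for the two education groups are invented for illustration):

# Minimal sketch: Mann-Whitney U test comparing salaries of two
# independent education groups (invented data).
from scipy import stats

high_school = [22, 25, 20, 27, 24, 23]
university = [30, 35, 28, 33, 31, 36]

u_value, p_value = stats.mannwhitneyu(high_school, university, alternative="two-sided")
print(f"U = {u_value:.2f}, p = {p_value:.3f}")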

12.1.3 KRUSKAL-WALLIS H TEST

The Kruskal-Wallis H test (sometimes also called the “one-way ANOVA on ranks”) is a
rank-based nonparametric test that can be used to determine if there are statistically significant
differences between two or more groups of an independent variable on a continuous or ordinal
dependent variable. It is considered the nonparametric alternative to the one way ANOVA and
an extension of the Mann Whitney U test to allow the comparison of more than two independent
groups.

For example, we could use a Kruskal-Wallis H test to understand whether depression,


measured on a continuous scale, differed based on socio economic status (i.e., your dependent
variable would be “depression” and your independent variable would be “socio economic status”,
which has three independent groups namely “low”, “medium” and “high” socio economic status).

It is important to realize that similar to the ANOVA, we cannot specify which group is
different from the other in Kruskal-Wallis H test also; it only tells you that at least two groups
were different. Since you may have three, four, five or more groups in the research, determining
which of these groups differ from each other is important. We can do this using a post hoc test.

The Kruskal-Wallis test is a nonparametric (distribution free) test, and is used when the
assumptions of one-way ANOVA are not met. Both the Kruskal-Wallis test and one-way ANOVA
assess for significant differences on a continuous dependent variable by a categorical

independent variable (with two or more groups). In the ANOVA, we assume that the dependent
variable is normally distributed and there is approximately equal variance on the scores across
groups.  However, when using the Kruskal-Wallis Test, we do not have to make any of these
assumptions. Therefore, the Kruskal-Wallis test can be used for both continuous and ordinal-
level dependent variables. However, like most non-parametric tests, the Kruskal-Wallis Test is
not as powerful as the ANOVA. The formula for calculating the Kruskal-Wallis test statistic is:

H = [12 / (n(n + 1))] Σ_{j=1}^{k} (T_j² / n_j) − 3(n + 1)

The first step is to rank all the observations, assigning 1 to the smallest observation and n to the largest, where n = n₁ + n₂ + ... + n_k is the total number of observations across the k groups. In case of ties, the ranks are averaged. The rank sums of each group (labelled T₁, T₂, ..., T_k) are then calculated, and n_j is the number of observations in group j.
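A minimal sketch of the Kruskal-Wallis H test using Python's scipy library (assuming it is available; the depression scores for three socio-economic groups are invented for illustration):

# Minimal sketch: Kruskal-Wallis H test comparing depression scores
# across three socio-economic groups (invented data).
from scipy import stats

low = [18, 22, 25, 20, 24]
medium = [15, 17, 14, 19, 16]
high = [10, 12, 9, 13, 11]

h_value, p_value = stats.kruskal(low, medium, high)
print(f"H = {h_value:.2f}, p = {p_value:.3f}")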

12.1.4 FRIEDMAN TEST

The Friedman test is the non-parametric alternative to the one way ANOVA repeated
measures design. It is used to test for differences between groups when the dependent variable
being measured is ordinal. It can also be used for continuous data that has violated the
assumptions necessary to run the one-way ANOVA with repeated measures (e.g., data that
has marked deviations from normality).

When you choose to analyse your data using a Friedman test, part of the process involves
checking to make sure that the data you want to analyse can actually be analysed using a
Friedman test.

In cases of this sort, a useful non-parametric alternative can be found in a rank-based procedure known as the Friedman test.

There are two kinds of correlated-samples situations where the advisability of the non-parametric alternative would be fairly obvious. The first would be the case where the measures for each subject start out as mere rank-orderings. The second would be the case where these measures start out as mere ratings. In both of these situations the assumption of an equal-interval scale of measurement is clearly not met. There is a good chance that the assumption of a normal distribution of the source population(s) would also not be met. Other cases where the equal-interval assumption will be
thoroughly violated include those in which the scale of measurement is intrinsically non-linear:
for example, the decibel scale of sound intensity, the Richter scale of earthquake intensity, or
any logarithmic scale.

The Friedman test statistic is:

F_r = [12 / (b k (k + 1))] Σ_{j=1}^{k} T_j² − 3b(k + 1)

To calculate the test statistic, we first rank each observation within each block, where 1 = smallest observation and k = largest observation, averaging the ranks of ties. Then we compute the rank sums, which we label T₁, T₂, ..., T_k. Here b is the total number of blocks (subjects) and k is the number of conditions (samples).
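A minimal sketch of the Friedman test using Python's scipy library (assuming it is available; the ratings of the same participants under three conditions are invented for illustration):

# Minimal sketch: Friedman test on ratings of the same participants
# measured under three repeated conditions (invented data).
from scipy import stats

condition_1 = [7, 6, 8, 5, 7, 6]
condition_2 = [8, 7, 9, 6, 8, 7]
condition_3 = [5, 5, 6, 4, 6, 5]

chi_sq, p_value = stats.friedmanchisquare(condition_1, condition_2, condition_3)
print(f"Friedman chi-square = {chi_sq:.2f}, p = {p_value:.3f}")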

Self learning exercise

Identify the differences in using the different statistical tests, namely the Wilcoxon signed rank test, the Kruskal-Wallis test, Friedman’s test and the Mann-Whitney test.

12.1.5 CHI-SQUARE

The Chi-Square statistic is commonly used for testing relationships between categorical variables. The null hypothesis of the Chi-Square test is that no relationship exists between the categorical variables in the population, i.e. they are independent.

The Chi-Square statistic is most commonly used to evaluate the association between two
categorical variables. Crosstabulation presents the distributions of two categorical variables
simultaneously, with the intersections of the categories of the variables appearing in the cells of
the table. The Test of Independence will find out whether the two variables are associated by
comparing the observed pattern of responses in the cells to the pattern that would be expected
if the variables were truly independent of each other. Calculating the Chi-Square statistic and
comparing it against a critical value from the Chi-Square distribution allows the researcher to
assess whether the observed frequencies are significantly higher than the expected frequencies.
The calculation of the Chi-Square statistic is quite straightforward and intuitive:

χ² = Σ (O − E)² / E

where O = the observed frequency (the observed counts in the cells) and E = the expected frequency.

The chi-square is also called Pearson’s chi-square test or the chi-square test of association
as it is used to discover if there is a relationship between two categorical variables. If we decide
to use chi-square as a test for independence, we need to make sure that the data holds two
assumptions. These two assumptions are:

o Assumption 1: The two variables should be categorical and measured using ordinal
or nominal scale.

o Assumption 2: The two variables should have 2 or more groups which are independent of each other, for example gender (2 groups: males and females), profession (e.g., 5 groups: surgeon, doctor, nurse, dentist, therapist), and so forth.

The chi-square is also used as a goodness-of-fit test. It determines whether sample data match a population, using the same formula.
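A minimal sketch of a chi-square test of independence using Python's scipy library (assuming it is available; the 2 x 2 contingency counts are invented for illustration):

# Minimal sketch: chi-square test of independence for two categorical
# variables arranged as a 2 x 2 contingency table (invented counts).
from scipy import stats

observed = [[30, 45],   # e.g. group 1: category A, category B
            [50, 25]]   # e.g. group 2: category A, category B

chi_sq, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi_sq:.2f}, df = {dof}, p = {p_value:.3f}")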

Self learning exercise

Find out whether gender and stress are associated, using chi-square for the following data at the .05 or .01 level of significance:
Stress
High Low

Males 35 40

Females 43 31

12.1.6 RANK ORDER

The Spearman rank-order correlation coefficient (Spearman’s correlation, for short) is a nonparametric measure of the strength and direction of association that exists between two variables measured on at least an ordinal scale. It is denoted by the symbol r_s (or the Greek letter ρ, pronounced rho). The test is used for either ordinal variables or for continuous data that have failed the assumptions necessary for conducting the Pearson product-moment correlation.
For example, you could use a Spearman’s correlation to understand whether there is an
association between exam performance and time spent revising; whether there is an association
between depression and loneliness; and so forth.

When analysing data using Spearman’s correlation, part of the process involves checking to make sure that the data can actually be analysed with it; it is only appropriate to use a Spearman’s correlation if the data “pass” the two assumptions required for Spearman’s correlation to give a valid result. The two assumptions are:

o Assumption 1: The two variables should be measured using either ordinal, interval
or ratio scale.

o Assumption 2: There is a monotonic relationship between the two variables. A
monotonic relationship exists when either the variables increase in value together,
or as one variable value increases, the other variable value decreases.

The formula is:

r_s = 1 − [6Σd²] / [n(n² − 1)]

where r_s is the symbol for Spearman’s correlation coefficient, d is the difference between the ranks of an individual’s scores on the two variables, and n is the total number of pairs of scores.
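A minimal sketch of Spearman's rank-order correlation using Python's scipy library (assuming it is available; the revision-hours and exam-score values are invented for illustration):

# Minimal sketch: Spearman rank-order correlation between time spent
# revising and exam performance (invented data).
from scipy import stats

revision_hours = [5, 10, 3, 8, 12, 7, 4]
exam_score = [60, 75, 50, 70, 85, 68, 55]

rho, p_value = stats.spearmanr(revision_hours, exam_score)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")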

For ordinal-level data, the Spearman rank order correlation is one of the most common
methods to measure the direction and strength of the association between two variables. First
put forth by British psychologist Charles E. Spearman in a 1904 paper, the nonparametric (i.e.,
not based on a standard distribution) statistic is computed from the sequential arrangement of
the data rather than the actual data values themselves. The Spearman rank order correlation is
a specialized case of the Pearson product-moment correlation that is adjusted for data in ranked

form (i.e., ordinal level) rather than interval or ratio scale. It is most suitable for data that do not
meet the criteria for the Pearson product-moment correlation coefficient (or Pearson’s r), such
as variables with a non-normal distribution.

Self learning exercise

Find out whether the individuals’ marks in English are correlated with their marks in Mathematics, using Spearman’s formula:

English        80   60   40   78   90   66   89   70   50   55

Mathematics   100   90   50   90   60   70   90   88   80   45

12.2 SUMMARY

This lesson emphasized the various non-parametric statistical methods and the assumptions under which each analysis is carried out. Each statistic has also been compared with the corresponding parametric statistic. The Mann-Whitney U, Wilcoxon, Kruskal-Wallis and Friedman tests are used to find differences between groups under various conditions. The rank order or Spearman’s correlation method is used to find the correlation between two variables. Chi-square is used to test the association between variables as well as the goodness of fit of the variables under study.

12.3 KEYWORDS

Wilcoxon test: The Wilcoxon test, which refers to either the Rank Sum test or the Signed
Rank test, is a nonparametric statistical test that is used to compare two paired groups.

Mann Whitney U test: The Mann-Whitney U test is a non-parametric test that can be
used in place of an unpaired t-test.

Kruskal-Wallis test: The Kruskal-Wallis H test is used to determine if there are statistically
significant differences between two or more groups of an independent variable on a continuous
or ordinal dependent variable.

Friedman test: The Friedman test is the non-parametric alternative to the one-way repeated measures ANOVA.

Chi square: The Chi Square statistic is commonly used for testing relationships between
categorical variables.

Rank order: The Spearman rank-order correlation coefficient (Spearman’s correlation,


for short) is a nonparametric measure of the strength and direction of association that exists
between two variables measured on at least an ordinal scale.

12.4 CHECK YOUR PROGRESS


1. If we need to find the difference between paired groups, then we use __________
test.

2. What analysis is used in the place of Repeated measures test?

3. Which test is used to test the association between variables?

12.5 ANSWERS TO CHECK YOUR PROGRESS


1. Wilcoxon

2. Friedman

3. Chi square

12.6 MODEL QUESTIONS


1. Explain in detail the different non-parametric statistics used in social science research.

2. Discuss how Pearson’s and Spearman’s correlations differ in methodology and utility.

REFERENCES
Aron, A., Coups, E.J., & Aron, E.N. (2013). Statistics for Psychology. New Jersey: Pearson
Education Inc.

Dowdy, S., Wearden, S., & Chilko, D. (2004). Statistics for research. New Jersey: John Wiley & Sons, Inc.

Howell, D.C. (2010). Statistical methods for psychology. Belmont, CA: Wadsworth, Cengage Learning.

Jackson, S.L. (2009). Research Methods and Statistics: A critical thinking approach. Belmont, CA: Wadsworth Cengage Learning.

LESSON - 13
CONTENT ANALYSIS AND THEMATIC ANALYSIS
INTRODUCTION

Qualitative research is conducted separately or sometimes along with quantitative research, which helps one to substantiate or further understand the results obtained in quantitative analysis. Just as there are statistical procedures to analyze quantitative data, there are certain methods of analysis to interpret qualitative data. A few of these methods include content analysis, thematic analysis, narrative analysis, discourse analysis, the phenomenological approach, etc.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

 To understand what is content analysis and how to execute it.

 To explain the utility of thematic analysis and the difference between content and
thematic analysis.

PLAN OF THE STUDY


13.1 Content Analysis

13.1.1 Key Terms in Content Analysis

13.1.1.1 Coding Unit

13.1.1.2 Sampling

13.1.1.3 Category Systems

13.1.2 The General Research Paradigm

13.1.3 How are the Facts Presented?

13.1.4 What Effect?

13.2 Thematic Analysis

13.2.1 A Basic Approach to Thematic Analysis

13.2.2 A More Sophisticated Version of Thematic Analysis




13.3 Summary

13.4 Key Words

13.5 Check your Progress

13.6 Answers to Check your Progress

13.7 Model Questions

13.1 CONTENT ANALYSIS

The term content analysis in general describes a wide range of techniques to describe


and explicate a communication or series of communications in a systematic, objective, and
quantitative manner. In many ways, the data of most common types of content analyses resemble
those obtained in open-ended, exploratory interviews. These interviews impose no restraints
on the questions of the interviewer or the allowable responses of the participant. The researcher
has no control over the responses of the respondents. Similarly, in most content analyses, the
investigator is concerned with a communication that (a) was not elicited by some systematic set
of questions chosen by the analyst, (b) probably does not contain all the information he or she
would like it to contain, and (c) is almost invariably stated in a way not easily codified and
analyzed. In both research contexts, the interview or content analysis, the investigator must
transform these qualitative unstructured messages into useful data for scientific, quantitative
analysis. Given that communication is a major component of social research, it should not prove surprising that many social scientists specialize in research focused on the communication process per se. Content analysis should take into consideration “who says what, to whom, how, and with what effect” (Lasswell, Lerner, & Pool, 1952, p. 12). Usually the social researcher focuses on only one or two of the components of this question. The content analyst is particularly interested in the what and the how of the process, that is, in the particular content of a message and the particular manner in which this content is delivered or expressed.

A few definitions of content analysis are,

“Content analysis is a research technique for the objective, systematic, and quantitative
description of the manifest content of communication” (Berelson, 1952, p. 18)

“Content analysis is a research technique to the objective, systematic, and quantitative


description of any symbolic behavior” (Cartwright, 1953, p. 424)

“Content analysis is a method of observation and not just a method of analysis. Instead of
observing people’s behavior directly, or asking them to respond to scales, or interviewing them,
the investigator takes the communications that people have produced and asks questions of
the communications” (Kerlinger, 1964, p. 544)

“Content analysis is a technique used to extract desired information from a body of material
(usually verbal) by systematically and objectively identifying specified characteristics of the
material” (Smith, 2000, p. 314)

“Content analysis is a research technique for making replicable and valid inferences from
data to their context” (Krippendorff, 1980, p. 21).

In the realm of content analysis, some researchers insist that the technique be applied
only to the manifest content of the materials under study, and others allow some degree of
inference making, based on the content and the context in which it occurs.

Such categorization can be based on the literature, on categories derived from your research question via a clear chain of reasoning, or on categories derived from the data themselves.

13.1.1 KEY TERMS IN CONTENT ANALYSIS

13.1.1.1 Coding Unit

In any observational method, the unit of content that will be employed in the investigation
must be determined in advance. Once a coding scheme has been decided, the investigator is
faced with a series of decisions, and these decisions parallel those of the general observational
methodologist. First, the researcher to decide when or where one unit of behavior ends and
another begins. In general observational methodology, often is employed in the definition of the
unit of behavior to be categorized. In other cases, the attention or focus that a particular child
under observation directs toward another object (e.g., a toy, the teacher, another child) defines
the unit. In content analysis, unit issues of a similar type exist, even though we are dealing with
text, rather than human actors. However, a distinction is made between the specific unit to be
classified (the coding unit) and the context within which its meaning is to be inferred (the context
unit). Sometimes these units are identical. More often, the unit to be coded is analyzed within a
prespecified block of material that constitutes the context unit.

Coding units most commonly employed are the word, the theme or assertion (usually a
simple sentence derived from a more complex context), the item (e.g., a news story or editorial)

and the character (a specific individual or personality type). Although a simple sentence can be easily broken down into component parts (themes or assertions), not all sentences encountered in a research situation are so amenable to analysis. In the case of more complex stimuli, judges often disagree over the identification of themes, and then about the meaning of themes that are identified. The more complex the stimuli investigated, the more likely it is that such disagreements are encountered, compromising reliability and hence validity. If coding problems can be
resolved, thematic analyses, or those making use of both themes and words as coding units,
generally provide more information than analyses based on words alone. The coding and context
units employed in an investigation are seldom the same. The context unit, of course, can never
be smaller than the coding unit; in the case of highly restricted coding units (e.g., the theme or
assertion), context units usually entail more extensive amounts of text. Limits are placed on the
size of the context unit for two purposes. The most important is to protect reliability. If coders
were free to check as much or as little of the content as they desired in classifying an assertion,
differences between coders in amount of context surveyed might cause differences in evaluations.
The second reason is economy. Coders are expensive, and some limits must be imposed on
the amount of time they are permitted to spend in the classification of a given theme.

13.1.1.2 Sampling

Decisions concerning the way the sample of messages to be analyzed is chosen are
closely related to the content analyst’s choice of coding and context units. Such sampling usually
involves a multistage operation. In the first stage, the particular universe of content and of
sources from which all data are to be drawn is identified. Depending on the research problem,
the extensiveness of this universe can vary greatly.

13.1.1.3 Category Systems

To achieve scientific respectability, research operations must be systematic. This maxim


is true across the methodological spectrum and holds as well in the arena of content analysis.
One of the major ways of introducing systematization to this area is through the use of prespecified
classification systems in the coding of content. Certainly, intuitive analyses of text are valuable
and often quite entertaining. Rather than rely on the intuitive classification of a message or
series of messages, therefore, content analysts make use of coding schemes through which
relevant dimensions of content are systematically identified and compared in some way. In the
arena of content analysis, parallel aspects of the content of a communication are of central
relevance. Generating a coding system is similar in both cases, but the content coding system

is more linguistically oriented because it typically is directed toward the categorization and
analysis of a source’s verbal or written outputs. Many different coding systems have been
employed by content analysts, studying almost every conceivable aspect of written or spoken
messages. One of the major criticisms of this field, in fact, concerns its failure to generate a
mutually agreed-on system of coding, through which diverse content can be investigated and
compared. It appears that researchers are more intent on individually tailoring a coding scheme
to fit their particular research problem than on developing more generally employable techniques.
Because the number of systems in the content analysis literature is so extensive, it is likely that
an investigator willing to search through the appropriate journals will be able to obtain a “pre-
used” coding system suitable for almost any research need.

13.1.2 THE GENERAL RESEARCH PARADIGM

Before attempting to provide a picture of the scope of this technique and the range of
issues to which it has been addressed, a brief mention of the research paradigm usually employed
in this area is appropriate. Not surprisingly, the series of operations to be described coincide
closely with those discussed in an earlier chapter in conjunction with observational research
techniques. The scientific analysis of message content will usually involve the use of a prespecified
coding system. The choice of coding system is best made on the basis of information that is
relevant to the data to be categorized. In the case of a content analysis, this means that the
researcher, in advance of the choice or construction of a coding scheme, must become thoroughly
familiar with the general body of content under consideration. Only then is the investigator in a
position to make an informed and reasoned decision regarding the most appropriate classificatory
system. Closely bound to the choice of coding scheme are decisions regarding the appropriate
units of analysis, and the particular manner in which these units will be sampled from the larger
universe of potential sources. All these decisions must be made in harmony with the others; the
goodness of such choices is a function of the extensiveness of the preliminary investigation of
message content.

13.1.3 HOW ARE THE FACTS PRESENTED?

Investigations of this question have focused generally on the form or style of the
communication. Although the same information might be presented in two communications, one
message might prove to be considerably more influential than the other because of the way the
facts were presented. Analysis of the way in which messages are structured constitutes the
primary goal of investigations.

Verbatim analysis: Verbatim analysis involves counting the instances of a particular


word or phrase verbatim – that is, the identical words, as opposed to words which in your
opinion are equivalent. Verbatim analysis typically involves large numbers of categories, each
of which only has a few instances in it. It has the advantage of working well with software
analysis – you can just do a search for instances of each term.

Gist analysis: Gist analysis is the next layer up. This is where one has to decide which
verbatim categories are actually synonymous alternative phrasings of the same thing – for
instance, ‘Windows-compatible’ and ‘compatible with Windows’. You can then create a gist
category which includes all of the variant phrasings, add together the number of instances of
each variant phrasing, and put the total in the box for that gist category. At this point you can do
some soul searching, if you feel so inclined, about whether you should have different names for
the gist-level categories, or whether you can reuse a verbatim-level category name at the gist
level.
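As an illustration of the kind of software-assisted counting mentioned above, the short Python sketch below (offered only as an example; the responses, search phrases and gist grouping are hypothetical) counts verbatim instances and then merges variant phrasings into a gist-level category.

from collections import Counter

# Hypothetical open-ended responses and the phrases being searched for.
responses = [
    "The software is Windows-compatible and easy to install.",
    "It must be compatible with Windows before we buy it.",
    "Installation was easy.",
]
phrases = ["windows-compatible", "compatible with windows", "easy"]

# Verbatim level: count exact occurrences of each phrase.
verbatim_counts = Counter()
for text in responses:
    lowered = text.lower()
    for phrase in phrases:
        verbatim_counts[phrase] += lowered.count(phrase)

# Gist level: merge variant phrasings judged to mean the same thing.
gist_map = {"windows compatibility": ["windows-compatible", "compatible with windows"]}
gist_counts = {gist: sum(verbatim_counts[v] for v in variants)
               for gist, variants in gist_map.items()}

print(verbatim_counts)
print(gist_counts)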

13.1.4 WHAT EFFECT?

In other than the most highly restricted situations, potential problems of faulty generalization
are acute in such studies. In many content analytic situations, there are no appropriate solutions
to such inference problems. Nevertheless, some interesting attempts have been made to link
the content of media presentations with specific social effects. Most investigations dealing with
the effect of a communication have been undertaken within more constricted boundaries. Studies
of the ease with which a communication can be comprehended by a reader (or group of readers)
provide a good example of this type of highly restricted research.

As was stated earlier, content analysis is a good technique for generating and enriching
research hypotheses. Using the method to test hypotheses is less common in psychology; it is
probably for this reason that content analysis is relatively underrepresented in social psychological
research, where theory testing is often valued over hypothesis development. However, there is
ample evidence in related disciplines (e.g., communication research and political science) that
with appropriate controls and understanding of the boundaries, the technique can be used to
test theory. We envision a larger role for this methodology in the future, in all fields of social
science research. The ever-increasing availability of data sources, combined with developments
in machine-based coding and analysis, bodes well for the field.

13.2 THEMATIC ANALYSIS

Almost certainly, thematic analysis is the approach to qualitative analysis most likely to be
adopted by newcomers to qualitative analysis. No particular theoretical orientation is associated
with thematic analysis and it is flexible in terms of how and why it is carried out. In a sense, it is
at entry level a somewhat undemanding approach to the analysis of qualitative data – interviews
in particular. Thematic analysis does not demand the intensely and closely detailed analysis
which typifies conversation analysis, for example. There is no accepted or standardised approach
to carrying out a thematic analysis. While this is typical of qualitative methods in general, it
clearly is an obstacle to carrying out thematic analysis. So it is impossible to provide a universally
acceptable set of guidelines which, effortlessly, will lead to a good thematic analysis. Nevertheless,
the key aspects of thematic analysis can be identified. Sometimes very basic and unsystematic
approaches form the basis of thematic analysis. The researcher simply reads through their
data in transcribed form and tries to identify, say, half a dozen themes which appear fairly
commonly in the transcripts. Then the researcher writes a report of their data analysis in which
they lace together the themes that they have identified with illustrative excerpts from the
transcripts. The problem with such an approach is that the researcher is not actually doing a great deal of analytic work. Furthermore, it is unclear how the researcher processed their data to come up with the themes, and it is unclear to what extent the themes encompass the data – do the themes exhaust the data or merely cover a small amount of the transcribed material?

The phrase thematic analysis first appeared in the psychological journals in 1943 but is
much more common now. Typically, instead of describing in detail how the analysis was done,
thematic analysts simply write something like ‘a thematic analysis was carried out on the data’.
However, carried out properly, thematic analysis is quite an exacting process requiring a
considerable investment of time and effort by the researchers. Just as the label says, thematic
analysis is the analysis of textual material (newspapers, interviews and so forth) in order to
indicate the major themes to be found in it. In thematic analysis the researcher does not identify
the overall topic of text. Instead the researcher would dig deeper into the text of the lecture to
identify a variety of themes which describe significant aspects of the text. In the case of many
texts such as in-depth interviews or transcripts of focus groups, people talking in these
circumstances simply do not produce highly systematic and organised speech. More generally
researchers use material from a wider range of individuals or focus groups, for example. There
are other methods of qualitative research which seems to compete with thematic analysis in the
sense that they take text and, often, identify themes. The lack of a clear theoretical basis to

thematic analysis does not mean that theory is not appropriate to your research – it merely
means that the researcher needs to identify the theoretical allegiance of his or her research. At
the very basic level, thematic analysis can be described as merely empirical as the researcher
creates the themes simply from what is in the text before him or her; this may be described as
an inductive approach. The researcher may be informed by theory on the other hand, in terms
of the aspects of the text to examine and in terms of the sorts of themes that should be identified
and how they should be described and labelled. If there is a theoretical position which informs
the analysis, then this should be discussed by the researcher in the report of their analysis; in
this sense, the analysis may be theory driven.

13.2.1 A BASIC APPROACH TO THEMATIC ANALYSIS

The basic essential components of a thematic analysis are transcription, analytic effort
and theme identification. It is important to note that the three stages are only conceptually
distinct: in practice they overlap considerably. Briefly, the components can be described as
follows:

Transcribing textual material: This can be based on any qualitative data collection method
including in-depth interviews and focus groups. The level of transcription may vary from a straightforward literal transcript, much as a secretary would produce, to a text which contains a great deal more information than the literal transcription. No qualitative researcher should regard
transcription as an unfortunate but necessary chore since the work of transcribing increases
the familiarity of the researcher with his or her material. In other words, the transcription process
is itself a part of the analysis process. In the best case circumstances, the researcher would
have conducted the interviews or focus groups themselves and then transcribed the data
themselves. Thus the process of becoming familiar with the text starts early and probably
continues throughout the analysis.

Analytic effort: Analytic effort refers to the amount of work or processing that the
researcher applies to the text in order to generate the final themes which are the end point of
thematic analysis. There are several components to analytic effort: (i) the process of becoming
familiar with the text so that understanding can be achieved and is not based on partial knowledge
of the data (ii) the details with which the researcher studies his or her data which may range
from a line-by-line analysis to a much wider approach which seeks to summarise the overall
themes, (iii) the extent to which the researcher is prepared to process and reprocess the data in order to achieve as close a fit of the analysis to the data as possible, (iv) the extent to which difficulties presented during the course of the analysis are confronted and resolved, and (v) the willingness of the researcher to check and recheck the fit of his or her analysis to the original data.

Identifying themes and sub-themes: While this appears to be the end point of a thematic
analysis, researchers will differ considerably in terms of how carefully or fully they choose to
refine the themes which they suggest on the basis of their analysis. The researcher may be
rapidly satisfied with the set of themes since they seem to do a ‘good enough’ job of describing
what they see as key features of the data. Another researcher may be dissatisfied at this stage
with the same themes because they realise that the themes, for example, describe only a part
of the data and there is a lot of material which could not be coded under these themes. Hence,
the latter researcher may seek to refine the list of themes in some way, for example, by adding
themes and removing those which seem to do a particularly poor job of describing the data. Of
course, by being demanding in terms of the analysis, the researcher may find that they need to
refine all of the themes and may find that for some of the themes substantial sub-themes
emerge. Also, again as a consequence of being demanding, the researcher may find it harder
to name and describe the new or refined themes accurately. All of this continues the analytic
work through to the end of the total thematic analysis.

13.2.2 A MORE SOPHISTICATED VERSION OF THEMATIC ANALYSIS

Braun and Clarke (2006) provide what is probably the most systematic introduction to
doing thematic analysis to date. This is a fully fledged account of thematic analysis which seeks
to impose high standards on the analyst such that more exacting and sophisticated thematic
analyses are developed. They write of the ‘process’ of doing a thematic analysis which they
divide into six separate aspects that very roughly describe the sequence of the analysis, though
there may be a lot of backtracking to the earlier aspects of the process in order to achieve the
best possible analysis. The simple approach as described previously includes some elements
similar to the Braun–Clarke approach but they are aiming for a somewhat more comprehensive
and demanding kind of thematic analysis which, to date, has only been rarely approached.
Their six aspects or steps are:

1. Familiarisation with the data

2. Initial coding generation

3. Searching for themes based on the initial coding



4. Review of the themes

5. Theme definition and labelling

6. Report writing.

Thematic analysis involves three crucial elements – (i) the data, (ii) the coding of data and
(iii) the identification of themes. The procedure essentially stresses the way in which the
researcher constantly loops back to the earlier stages in the process to check and to refine the
analysis. In other words, the researcher constantly juxtaposes the data and the analysis of the
data to establish the adequacy of the analysis and to help refine the analysis. A good analysis
requires a considerable investment of time and effort.
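Although thematic analysis is not a computational procedure, a simple data structure can help keep track of codes and candidate themes during the searching and reviewing steps. The Python sketch below is purely illustrative and is not drawn from Braun and Clarke; the excerpts, codes and theme names are hypothetical.

# Hypothetical coded excerpts produced during initial coding (step 2).
coded_excerpts = [
    {"excerpt": "I never have time to revise properly", "code": "time pressure"},
    {"excerpt": "My friends always seem better prepared", "code": "social comparison"},
    {"excerpt": "I put off starting until the last minute", "code": "procrastination"},
    {"excerpt": "I worry I will let my family down", "code": "fear of failure"},
]

# Searching for themes (step 3): group related codes under candidate themes.
candidate_themes = {
    "exam-related anxiety": ["social comparison", "fear of failure"],
    "poor time management": ["time pressure", "procrastination"],
}

# Reviewing the themes (step 4): check how much of the coded data each theme covers,
# so that poorly supported themes can be refined, merged or dropped.
for theme, codes in candidate_themes.items():
    excerpts = [e["excerpt"] for e in coded_excerpts if e["code"] in codes]
    print(theme, "->", len(excerpts), "excerpt(s)")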

13.3 SUMMARY

Content and thematic analyses are two important methods used for analysing data obtained in qualitative research. They help one to understand the topic under study with greater clarity. The researcher can summarize the inferences from the text, look for patterns,
regularities and relationships between segments of the text, and test hypotheses. The
summarizing of categories and data is an explicit aim of statistical techniques, for these permit
trends, frequencies, priorities and relationships to be calculated. At the stage of data analysis
there are several approaches and methods that can be used. But this is no different from the
challenge facing most researchers irrespective of the method they employ.

13.4 KEY WORDS

Content analysis: Content analysis is used to describe and explicate a communication


or series of communications in a systematic, objective, and quantitative manner.

Thematic analysis: Thematic analysis is the analysis of textual material (newspapers,


interviews and so forth) in order to indicate the major themes to be found in it.

Verbatim analysis: Verbatim analysis involves counting the instances of a particular


word or phrase verbatim – that is, the identical words, as opposed to words which in your
opinion are equivalent.

Gist analysis: Gist analysis is where one has to decide which verbatim categories are actually synonymous alternative phrasings of the same thing.

13.5 CHECK YOUR PROGRESS

1. ______________refers to the amount of work or processing that the researcher applies


to the text in order to generate the final themes

2. ______________most commonly employed are the word, the theme or assertion, the
item and the character.

3. _____________analysis involves counting the instances of a particular word.

4. What are the three basic elements of thematic analysis?

13.6 ANSWERS TO CHECK YOUR PROGRESS


1. Analytic effort

2. Coding units

3. Verbatim

4. Data, Coding of data and identification of themes

13.7 MODEL QUESTIONS


1. Write an essay on content analysis.

2. Explain how thematic analysis is done.

3. State how facts are presented in content analysis.

REFERENCES

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London,
England: Routledge.

Cozby, P. C., & Bates, S. C. (2015). Methods in Behavioural Research (12th ed). New
York, NY: McGraw Hill Education.

Crano, W. D., & Brewer, M. B. (2002). Principles and Methods of Social Research. Mahwah,
NJ: Lawrence Erlbaum Associates Publishers.

Gravetter, F. J., & Forzano, L-A. B. (2012). Research Methods for the Behavioural Sciences
(4th ed). Belmont, CA: Wadsworth Cengage Learning.

Howitt, D., & Cramer, D. (2011). Introduction to Research Methods in Psychology. Harlow,
Essex: Pearson Education Inc.

Leary, M. R. (2001). Introduction to Behavioural Research (3rd ed). Needham Heights,


MA: Allyn and Bacon.

Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches.


New York, NY: Pearson Education Ltd.

Rugg, G., & Petre, M. (2007). A Gentle Guide to Research Methods. NY: McGraw Hill
Education.

Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2012). Research Methods


in Psychology (9th ed). New York, NY: McGraw Hill Education.

Walliman, N. (2011). Research Methods: The Basics. Oxon: Routledge



LESSON - 14
NARRATIVE ANALYSIS AND DISCOURSE ANALYSIS
INTRODUCTION

Having seen two methods of qualitative analysis namely content and thematic analysis,
we are going to look into two more important methods of qualitative analysis namely narrative
analysis and discourse analysis. These two methods are unique and interesting in their own way. However, each has its own advantages and disadvantages. Let us look into the intricacies of
conducting narrative and discourse analysis in detail.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

 To understand what is narrative analysis and how to do it.

 To explain the utility of discourse analysis.

PLAN OF THE STUDY


14.1 Narrative Analysis

14.1.1 Tools of narrative analysis

14.1.2 The emergence of narrative analysis

14.2 Discourse Analysis

14.2.1 The agenda of discourse analysis

14.2.2 Doing discourse analysis

14.3 Summary

14.4 Key Words

14.5 Check your progress

14.6 Answers to Check your Progress

14.7 Model Questions



14.1 NARRATIVE ANALYSIS

This form of analysis is aimed at extracting themes, structures, interactions and


performances from stories or accounts that people use to explain their past, present or their
interpretations of events. The data, which is primarily aural, is collected by semi- or unstructured
interviews, participant observation or other indirect methods. The narrative is analyzed from
different aspects, such as what is said rather than how, or conversely, the nature of the
performance during the telling, and perhaps how the storyteller interacted with the listener(s).
Alternatively, the structure of the story is inspected. All this is done in order to reveal the
undercurrents that may lie in the simple narrative of the story.

“Narrative analysis”, a type of narrative inquiry (Polkinghorne, 1995), moves from the
particular data gathered to the construction of stories. A case study is an example of a “narrative
analysis”. In a narrative analysis, one constructs a narrative using the data gathered from each
story. The story written, “must fit the data while at the same time bringing an order and
meaningfulness that is not apparent in the data themselves” (Polkinghorne, 1995, p.16). Attending
to the characteristics of a narrative—plot, setting, characters—discussed earlier in this chapter,
a story is built by integrating the data rather than separating, as it would be done in an analysis
of narrative. A storied analysis is a “method that returns a story to the teller that is both hers and
not hers; that contains herself in good company” (Grumet, 1988, p. 70). By “re-storying” the
narratives received, one returns the stories to the participants. When done well, the researcher,
sets the stage, frames the time, and relates or sequences the events, happenings, and
experience, conveying a sense of meaning and significance. Therefore, reconfiguring individual
and shared themes, emplotting them into stories.

Narrative, as well as the related idea of analyzing a sequence of events, has multiple
meanings and is used in anthropology, archaeology, history, linguistics, literary criticism, political
science, psychology, and sociology. In addition, narrative refers to a type of qualitative data, a
form of inquiry and data gathering, a way to discuss and present data, a set of qualitative data
analysis techniques, and a kind of theoretical explanation. As Griffin (1992a:419) observed,
“Narrative is both a rhetorical form and a generic, logical form of explanation that merges
theorized description of an event with its explanation.” Narratives as a way to examine the world
have several features: a connected relationship among parts, a causal sequence of episodes
to form a “plot,” a selection that emphasizes important versus less important parts, and a specific
mix of time and place.

Narrative analysis, a method for analyzing data and providing an explanation, takes several
forms. It is called analytic narrative, narrative explanation, narrative structural analysis, or
sequence analysis. Besides recognizing the core elements of a narrative, you may use narrative
analysis techniques to map the narrative and give it a formalized grammar/structure. As you
examine and analyze qualitative data for its narrative form and elements—whether it is an
individual’s life history, a particular historical event, the evolution of an organization over the
years, or a macro-level historical process—you focus on events (rather than variables, individuals,
or cases) and connections among them. You find that temporal features (e.g., order, pace,
duration, frequency) are essential organizing concepts. You soon start to treat the sequence of
events itself as an object of inquiry. Franzosi (1998) argued that once we recognize narrative
within data, we try to extract and preserve it without destroying its meaning-making ability or
structure. As we map the structure of a narrative’s sequence, the process operates as both a
mode of data analysis and a type of explanation. Some researchers believe that narrative
explanations are not causal, but others believe narrative analysis is a causal explanation although
perhaps involving a different type of causality, from that common in a traditional positivist science
approach.

Narrative analysis refers to a family of analytic methods for interpreting texts that have in
common a storied form. As in all families, there is conflict and disagreement among those
holding different perspectives. Analysis of data is only one component of the broader field of
narrative inquiry. Methods are case centered, and the cases that form the basis for analysis can
be individuals, identity groups, communities, organizations, or even nations. Methods can be
used to interpret different kinds of texts—oral, written, and visual. The term narrative is elusive,
carrying many meanings and used in a variety of ways by different scholars, often used
synonymously with story. In the familiar everyday form, a speaker connects events to a sequence
that is consequential for later action and for the meanings listeners are supposed to take away
from the story. Events are perceived as important, selected, organized, connected, and evaluated
as meaningful for a particular listener. The definition emphasizes the contextual nature of oral
stories; they are told (indeed performed) with the active participation of an audience and are
designed to accomplish particular aims. Oral stories are strategic, functional, and purposeful.
Other forms of oral communication include chronicles, reports, arguments, and question and
answer exchanges. Among scholars working in the human sciences with personal (first-person)
accounts for research purposes, the narrative unit can differ, and its form is often linked to a
discipline. In anthropology and social history, narrative can refer to a life story that the researcher
weaves from threads of interviews, observations, and documents. At the other end of the

continuum lies the very restrictive definition of social linguistics. Here, narrative refers to a
discrete unit of discourse, an extended answer by a research participant to a single question,
topically centered and temporally organized.

14.1.1 TOOLS OF NARRATIVE ANALYSIS

We next examine three analytic tools: path dependency, periodization, and historical
contingency.

1. Path dependency. The way that a unique beginning can trigger a sequence of events
and create a deterministic path is called path dependency. The path is apparent in a chain of
subsequent events, constraining or limiting the direction of the ongoing events that follow. The
outcome explained using path dependency is sensitive to events that occurred very early in the
process. Path dependency explanations emphasize how the choices of one period can limit
future options, shape later choices, and even accelerate events toward future crises in which
options may be restricted. Explanations that use path dependency assume that the processes
that generated initial events (a social relationship) or institution may differ from the processes
that keep it going. There may be one explanation for the “starting event” and another for the
path of subsequent events. Researchers often explain the starting event as the result of a
contingent process (i.e., a specific and unique combination of factors in a particular time and
place that may never repeat). Path dependency comes in two forms: self-reinforcing and reactive
sequence. If you use a self-reinforcing path dependency explanation, you examine how, once
set into motion, events continue to operate on their own or propel later events in a direction that
resists external factors. An initial “trigger event” constrains, or places limits on, the direction of
a process. Once a process begins, “inertia” comes into play to continue the process along the
same path or track. The reactive sequence path dependency emphasizes a different process.
It focuses on how each event responds to an immediately preceding one. Thus, instead of
tracing a process back to its origins, it studies each step in the process to see how one influences
the immediate next step. The interest is in whether the moving sequence of events transforms
or reverses the flow of direction from the initial event. The path does not have to be unidirectional
or linear; it can “bend” or even reverse course to negate its previous direction.

2. Periodization. In historical-comparative research, we know that historical reality flows


as discontinuous stages. To recognize this, researchers may use periodization to divide the
flow of time in social reality into segments or periods. For example, we may divide 100 years of
history into several periods. We break continuous time into several discrete periods that we

define theoretically through periodization. Theory helps us to identify what is significant and
what is common within periods or between different periods. As Carr (1961:76) remarked, “The
division of history into periods is not a fact, but a necessary hypothesis.”

3. Historical contingency. Historical contingency refers to a unique combination of


particular factors or specific circumstances that may not be repeated. The combination is
idiosyncratic and unexpected from the flow of prior conditions. As Mahoney (2000) explained,
“Contingency refers to the inability of theory to predict or explain, either deterministically or
probabilistically, the occurrence of a specific outcome. A contingent event is therefore an
occurrence that was not expected to take place.” A contingent situation may be unexpected, but
once it occurs, it can profoundly influence subsequent events. Because many possible
idiosyncratic combinations of events occur, we use theory to identify important contingent events
for an explanation. We can combine historical contingency and path dependency. The path
dependency may be self-reinforcing to continue with inertia along one direction, or particular
events might set off a reaction that alters its direction. Along the flowing sequence of events
across time, periodic critical junctures may occur. The process or conditions that were initially
set into motion may resist change, or the contingent conditions may be powerful enough to
trigger a major change in direction and initiate a new path of events.

14.1.2 THE EMERGENCE OF NARRATIVE ANALYSIS

While the argument for narrative as method was instrumental for a great number of inquiries
into the personal sense making of experience (in different disciplines and on different experiential
topics), narrative as method is to be kept separate from what has traditionally been held under
the scope of narrative methods. Narratives whether acquired through particular elicitation
techniques, such as interviewing, or “found” in natural (private, public, or institutionalized)
interactional settings, typically are the result of a research stance or orientation. Critical vis-à-
vis traditional survey practices, the narrative interview was designed to overcome
the common tendency to radically decontextualize and disconnect the respondents’ meaning
making efforts from the concrete setting for which they originally were designed and from the
larger socio-cultural grounds of meaning production (Mishler, 1986, p. 26). In recent years, a
number of qualitative, in-depth interviewing techniques have been designed to elicit explicitly
narrative accounts—some open-ended and unstructured, others semi-structured and guided;
the free association narrative interview method (Hollway & Jefferson, 2008), the biographic-
narrative interpretive method—an interview technique that leads into personal experience, lived

situations and life-histories (Wengraf, 2006), or narrative oriented inquiry (Hiles & Cermák,
2008), to name a few.

14.2 DISCOURSE ANALYSIS

The boundary between content analysis and discourse analysis is unclear. A useful rule
of thumb is that, if it involves actions then it’s discourse analysis, and if it doesn’t then it’s content analysis in the static (rather than the generic) sense. The actions may be various things, including
plots, narratives and conversations. As with content analysis, discourse analysis can be useful
for putting some numbers on to things where one suspects that there are regularities going on
(or significant absences lurking around). As with content analysis, there’s a fair chance of finding
what is already expected, and that it will be viewed specifically by barbarians as just another
batch of numbers being used to support your pre-existing arguments, rather than as something
which contributes anything new and useful to the debate.

There are various ways to locate a discourse analysis in relation to the wider academic
world. For instance, you can link it to a representation and/or a body of theory about discourse
(as opposed to theory about the topic which discourse analysis is used to investigate). An
example of the first of these involves using cognitive causal maps, which involve diagrams
representing the network of causal assertions that the subject is making. There are various
ways of setting about discourse analysis, ranging from a fairly impressionistic listing of instances
of the phenomenon (such as the heroes whose mothers died when the hero was young), to
very formal approaches such as story grammars. A common criticism of content analysis is that
it is often a laborious way of gathering evidence to support what is expected to be found in the
first place. In such cases, you can end up preaching to the converted.

Discourse analysis studies the way that people communicate with each other through
language within a social setting. Language is not a neutral medium for transmitting information;
it is embedded in our social situation and helps to create and recreate it. Language shapes our
perception of the world, our attitudes and identities. Two central themes can be identified: the
interpretive context in which the discourse is set, and the rhetorical organization of the discourse.
The former concentrates on analyzing the social context, for example the power relations between
the speakers (perhaps due to age or seniority) or the type of occasion where the discourse
takes place (a private meeting or at a party). The latter investigates the style and scheme of the
argument in the discourse, for example a sermon will aim to convince the listener in a very
different way from a lawyer’s presentation in court. Even discourse analysts with backgrounds

in psychology do not offer a united front on its nature. Different viewpoints exist partly because
they draw on different intellectual roots. Consequently, one needs to be aware that there is no
consensus position which can be identified as the core of discourse analysis. This is not a criticism; if anything, it helps heighten and facilitate the theoretical debate.

14.2.1 THE AGENDA OF DISCOURSE ANALYSIS

Discourse analysis is not a simple-to-learn, readily applied technique. Discourse


analysis is a body of theory and knowledge accumulated over a period of 50 years or more. It is
not even a single, integrated theory. Instead, it provides an analytical focus for a range of
theories contributed by a variety of disciplines such as philosophy, linguistics, sociology and
anthropology. Psychology as somewhat a latecomer to the field is in the process of setting its
own distinctive stamp on discourse analysis. The agenda of discourse analysis is to re-stress
the point that there are no short cuts to successful discourse analysis. Intellectual and theoretical
roots are more apparent in the writings of discourse analysts than in any other field of psychology.
In other words, the most important practical step in using discourse analysis is to immerse
oneself in its theoretical and research literature. One cannot just get on with doing discourse
analysis. Without understanding in some depth the constituent parts of the discourse analytic tradition, the objectives of discourse analysis cannot be appreciated fully. This would be equally true of designing an experiment – we need to understand what it does, how it does it, why it does it, and when it is inappropriate.

The agenda of psychological discourse analysis according to Potter and Wetherell (1995)
includes the following:

Discourse analysis is not simply an approach to the social use of language. It focuses on
discourse practices – the things that people do in talk and writings. But it also focuses on the
resources that people employ when achieving these ends.

Construction and description: During conversation and other forms of text, people create and construct ‘versions’ of the world. Discourse analysis attempts to understand and describe this constructive process.

Content: Talk and other forms of discourse are regarded as the important site of psychological phenomena. No attempt is made to suggest an ‘underlying’ psychological mechanism to explain that talk. So racist talk is regarded as the means by which discrimination is put into practice. There is no interest in ‘psychological mechanisms’ such as authoritarian personalities or racist attitudes.

Rhetoric: Discourse analysis is concerned with how talk can be organised so as to be argumentatively successful or persuasive.

Stake and accountability: People regard others as having a vested interest (stake) in what they do. Hence they impute motives to the actions of others, which may justify dismissing what they say.

Discourse analysis actively rejects the use of cognitive concepts such as traits, motives,
attitudes and memory stores. Instead, it focuses on the text by emphasising, for example, how
memory is socially constructed by people, such as when reminiscing over old photographs.
This agenda might be considered a broad framework for psychological discourse analysis. It
points to aspects of text which the analyst may take into account. At the same time, the hostility
of discourse analysis to traditional forms of psychology is apparent.

14.2.2 DOING DISCOURSE ANALYSIS

It cannot be stressed too much that the objectives of discourse analysis are limited in a
number of ways – especially the focus on the socially interactive use of language. In a nutshell,
there is little point in doing a discourse analysis to achieve ends not shared by discourse analysis.
Only when the researcher is interested in language as social action is discourse analysis
appropriate. Some discourse analysts have contributed to the confusion by offering it as a
radical and new way of understanding psychological phenomena. This, taken superficially, may
imply that discourse analysis supersedes other forms of psychology. It is more accurate to
suggest that discourse analysis performs a different task. A discourse analysis would be useless
for biological research on genetic engineering but an excellent choice to look at the ways in
which the moral and ethical issues associated with genetic engineering are dealt with in speech
and conversation. There is no single recipe for doing discourse analysis. Different kinds of
studies involve different procedures, sometimes working intensively with a single transcript,
other times drawing on a large corpus. Analysis is a craft that can be developed with different
degrees of skill. It can be thought of as the development of sensitivity to the occasioned and
action-oriented, situated, and constructed nature of discourse. Nevertheless, there are a number
of ingredients which, when combined together, are likely to produce something satisfying.

14.3 SUMMARY

One of the important points to understand is that narrative analysis helps one to extract themes, structures, interactions and performances from stories or accounts
that people use to explain their past, present or their interpretations of events. Similarly, discourse
analysis is one that helps to bring to psychology a theoretically fairly coherent set of procedures
for the analysis of the very significant amounts of textual material which forms the basis of
much psychological data. It takes care of the fact that language is action and is designed to do
something and not represent something. Discourse analysis is primarily of use to researchers
who wish to study language as an active thing.

14.4 KEY WORDS

Narrative analysis: Narrative analysis extracts themes, structures, interactions and


performances from stories or accounts that people use to explain their past, present or their
interpretations of events.

Discourse analysis: Discourse analysis studies the way that people communicate with
each other through language within a social setting.

Rhetoric Discourse analysis: Rhetoric Discourse analysis is concerned with how talk
can be organised so as to be argumentatively successful or persuasive.

14.5 CHECK YOUR PROGRESS

1. __________ extracts themes, structures, interactions and performances from stories

2. The agenda of discourse analysis is to ________ the point that there are no short cuts
to successful discourse analysis.

3. Discourse analysis actively rejects the use of ___________concepts such as traits,


motives, attitudes and memory stores.

4. What are the tools of narrative analysis?



14.6 ANSWERS TO CHECK YOUR PROGRESS

1. Narrative analysis

2. Re-stress

3. Cognitive

4. Path dependency, periodization and historical contingency

14.7 MODEL QUESTIONS

1. Discuss the emergence and the method of doing narrative analysis.

2. Describe the significance of discourse analysis and the way of doing it.

REFERENCES

Given, L.M. (Ed.). (2008). The SAGE Encyclopedia of Qualitative Research Methods
(Vol 1 & 2). New Delhi: Sage Publications India Pvt. Ltd.

Kramp, M.K. (2004). Exploring Life and Experience Through Narrative Inquiry. In K.
deMarris & S.D. Lapan (Eds.), Foundations for Research Methods of Inquiry in Education and
the Social Sciences (103-122). NJ: Lawrence Erlbaum Associates, Inc., Publisher

Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches.


New York, NY: Pearson Education Ltd.

Rugg, G., & Petre, M. (2007). A Gentle Guide to Research Methods. NY: McGraw Hill
Education.

LESSON - 15
REPORT WRITING FOR A JOURNAL AND THESIS/
DISSERTATION
INTRODUCTION

Research is not just about data collection and analysis. The major purpose is to advance
understanding of the subject matter, that is, to develop theory, concepts and information about
psychological processes. The research report describes the role that a particular study plays in
this process. Although there is a common structure which facilitates the comprehension of
research reports and the absorption of the detail contained therein, this structure should be
regarded as flexible enough to cope with a wide variety of contingencies. In this lesson let us
look into the format of a report written either for a journal article or a thesis/ dissertation.

OBJECTIVES OF THE LESSON

After studying this lesson you will be able to:

 To understand the importance of report

 To understand the subheadings to be written in any report

 To identify the content to be written under each subheading

 To know the format of references to be written at the end of the research work

PLAN OF THE STUDY


15.1 Importance of Report Writing

15.2 Overall Structure

15.2.1 Title Page

15.2.2 Running Head and Page Number

15.2.3 Title

15.2.4 Author Name(s) (Byline) and Affiliation

15.2.5 Abstract

15.2.6 Introduction

15.2.7 Method

15.2.8 Results

15.2.9 Discussion

15.2.10 References

15.3 APA Citation Format

15.4 Overall Writing Style

15.5 Summary

15.6 Key Words

15.7 Check your Progress

15.8 Answers to Check your Progress

15.9 Model Questions

15.1 IMPORTANCE OF REPORT WRITING

Psychology is a diverse field of study so it should come as no surprise to find conflicting


ideas about what a research report should be. There are two main reasons why research
reports can be difficult to write:

The research report is complex with a number of different elements, each of which requires
different skills. The skills required when reviewing the previous theoretical and empirical studies
in a field are not the same as those involved in drawing conclusions from statistical data. The
skills of organising research and carrying it out are very different from the skills required to
communicate the findings of the research effectively.

When students first start writing research (laboratory) reports their opportunities to read
other research reports – such as journal articles – are likely to have been very limited. There is
a bit of a chicken-and-egg problem here. Until students have understood some of the basics of
psychological research and statistics they will find journal articles very difficult to follow. At the
same time, they are being asked essentially to write a report using much the same structure as
a journal article. Hopefully, some of the best students will be the next generation of professional
researchers writing the journal articles.

The American Psychological Association (APA) publishes a very substantial manual. There
is no universal manual for student research reports so, not surprisingly, different lecturers and
instructors have varying views of the detail of the structure and style of undergraduate student
research reports. Different universities have different requirements for doctoral theses, for
example. The problem goes beyond this into professional research reports. It is essential for a
student to understand the local rules for the research report just as it is for the professional
researcher to know what is acceptable to the journal to which they submit work for possible
publication. Probably students will receive advice from their lecturers and instructors giving
specific requirements. In this lesson, we have opted to use the style guidelines of the American
Psychological Association wherever practicable. This helps prepare students for a possible
future as users and producers of psychological research. It should be remembered that research
is increasingly regarded as an important skill for practitioners of all sorts and not just academic
psychologists.

15.2 OVERALL STRUCTURE

A psychological research report normally consists of the following sections:

Title page: This is the first page and contains the title, the author and author details such
as their address, e-mail address, telephone and fax number.

Abstract: This is the second page of the report and you may use the subheading ‘Abstract’
for clarity. The abstract is a brief summary of the contents of the report.

Title: This is another new page – the title is repeated from the first page but no details as
to authorship are provided. This is to make it easier for editors to send out the manuscript for
anonymous review by other researchers.

Introduction: This continues on the same page but normally the subheading ‘Introduction’
is omitted.

Method: This consists of the following sections at a minimum: participants, materials,
measures or apparatus, design, and procedure.

Results: This includes statistical analyses, tables and diagrams.

Discussion: This goes into a detailed explanation of the findings presented under results.
It can be quite conjectural.

Conclusion: Usually contained within the discussion section and not a separate
subheading. Nevertheless, sometimes conclusions are provided in a separate section.

References: One usually starts a new page for these. It is an alphabetical (then
chronological if necessary) list of the sources that one has cited in the body of the text.

Appendices: This is an optional section and is relatively rare in professional publications.


Usually it contains material which is helpful but would be confusing to incorporate in the main
body of the text.

This is the basic, standard structure which underlies the majority of research reports.
However, sometimes other sections are included where appropriate. Similarly, sometimes
sections of the report are merged if this improves clarity.

Let us look at each subheading in detail.

15.2.1 Title Page

The title page is the first page of the manuscript and contains, in order from top to bottom
of the page, the running head and page number, the title of the paper, the author names and
affiliations, and author note.

15.2.2 Running Head and Page Number

The first line of the title page is the running head and the page number 1. The running
head is a complete, but abbreviated, title that contains a maximum of 50 characters, including
spaces and punctuation. On the title page, the running head begins at the left margin with the
phrase, Running head: followed by the abbreviated title, all in capital letters. The page number
appears at the right margin. An example of a running head and page number on a title page
would appear as follows: Running head: PEER PRESSURE AND SMOKING BEHAVIOUR.
The running head (without the phrase Running head typed out) and page number run
consecutively on every page of the manuscript. An example of a running head and page number
on all subsequent pages after the title page would appear as follows: PEER PRESSURE AND
SMOKING BEHAVIOUR. The pages are numbered consecutively, starting with the title page,
so that the manuscript can be reassembled if the pages become mixed, and to allow editors
and reviewers to refer to specific items by their page number. To have the running head and
page number appear on each page of the manuscript, generate them using headers in a word-
processing program. Do not manually type this information in on each page. In a published
article, the running head appears at the top of the pages to identify the article for the readers.

15.2.3 Title

The title, typed in upper and lower case letters, is positioned in the upper half of the page
centered between the left and right margins. It is recommended that a title be no more than 12
words in length. The title should be a concise statement that describes your study as accurately
and completely as possible. It should identify the main variables or theories, and the relationships
being investigated. Keep in mind that the words used in the title are often the basis for indexing
and referencing your paper. Also remember that the title gives the first impression of your paper
and often determines whether an individual reads the rest of the article.

Following are some general guidelines for writing a title:

1. Avoid unnecessary words. It is tempting to begin your title with “A study of” or “The
relationship between.” However, these phrases usually do not add any useful information and
can be deleted with no negative consequences.

2. If possible, the first word in the title should be of special relevance or importance to the
content of the paper. If your main topic concerns gender stereotypes, try to begin your title with
“Gender stereotypes.” Again, your title gives the first impression of the article and the first few
words provide the first impression of the title.

3. Avoid cute or catchy titles.
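
To make these length limits concrete, the following minimal Python sketch (our own
illustration, not part of any APA tool; the function names are invented) checks a running head
against the 50-character limit and a title against the 12-word recommendation.

def check_running_head(running_head):
    # The running head must be 50 characters or fewer, counting spaces and punctuation.
    return len(running_head) <= 50

def check_title(title):
    # A title is recommended to be no more than 12 words long.
    return len(title.split()) <= 12

print(check_running_head("PEER PRESSURE AND SMOKING BEHAVIOUR"))   # True (35 characters)
print(check_title("Peer Pressure and Smoking Behaviour Among Urban Adolescents"))  # True (8 words)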

15.2.4 Author Name(s) (Byline) and Affiliation

Immediately following the title, centered on the next double-spaced lines, are the author’s
name(s), followed by the institution(s) where each researcher was when the research was
conducted (without the words by or from). If there are multiple authors, the order of the names
is usually significant; the first author listed is typically the individual who made the primary
contribution to the research, and the remaining authors are listed in descending order of their
contributions. The author note is placed on the title page, several lines below the title, byline,
and affiliation. The words Author Note are centered on one line with the paragraphs comprising
the author note beginning on the next double spaced line. Typically the author note contains
four paragraphs, each paragraph starting with an indent, that provide details about the authors,
including:

• Departmental affiliation.

• Changes in affiliation (if any) since the time that the research was conducted.

• Acknowledgements of sources of financial support for the research (if any), and
recognition of others who contributed or assisted with the study (if any). Disclosure
of special circumstances (if any).

• Identification of a contact person if a reader wants further information. This allows
editors to create a completely anonymous manuscript by simply removing the title
page. The anonymous manuscript can then be forwarded to reviewers who will not
be influenced by the author’s reputation but can give an unbiased review based
solely on the quality of the research study.

15.2.5 Abstract

The abstract is a concise summary of the paper that focuses on what was done and what
was found. The abstract appears alone on page 2 of the manuscript. The word Abstract is
centered at the top of page 2, and the one-paragraph summary starts on the next double-
spaced line with no paragraph indentation. Although the abstract appears on page 2 of your
manuscript, the abstract typically is written last, after the rest of the paper is done. With the
possible exception of the title, the abstract is the section that most people read and use to
decide whether to seek out and read the entire article. For most journals the word limit for an
abstract ranges from 150 to 250 words. It should be a self-contained summary that does not
add to or evaluate the body of the paper. The abstract of an empirical study should include the
following elements, not necessarily in this order.

1. A one-sentence statement of the problem or research question

2. A brief description of the subjects or participants (identifying how many and any relevant
characteristics)

3. A brief description of the research method and procedures

4. A report of the results

5. A statement about the conclusions or implications

15.2.6 Introduction

The first major section of the body or text of a research report is the introduction. The
introduction provides the background and orientation that introduces the reader to your research
study. The introduction should identify the question or problem that your study addresses, and
explain why the problem is important; it should explain how you arrived at the question from the
previous research in the area; it should identify the hypotheses and how they relate to the
research design; and it should explain the implications of the study. A good introduction should
address these issues in a few pages. The introduction begins on page 3 of your manuscript. It
is identified by centering the title of the article (exactly as it appears on the title page) at the top
of the page. The first paragraph of the introduction begins with a paragraph indentation on the
next double-spaced line.

1. Typically, this section begins with a general introduction to the topic of the paper. In a
few sentences or paragraphs, describe the issue investigated and why this problem is important
and deserves new research.

2. Next is a review of the relevant literature. You do not need to review and discuss
everything that has been published in the area, only the articles that are directly relevant to
your research question. Discuss only relevant sections of previous work. Identify and cite the
important points along the way, but do not provide detailed descriptions. The literature review
should not be an article-by-article description of one study after another; instead, the articles
should be presented in an integrated manner. Taken together, your literature review should
provide a rationale for your study.

3. Ultimately, the introduction reaches the specific problem, hypothesis, or question that
the research study addresses. State the problem or purpose of your study, and clearly define
the relevant variables. The review of the literature should lead directly to the purpose of or the
rationale for your study.

4. Describe the research strategy that was used to evaluate your hypothesis or to obtain
an answer to your research question. Briefly outline the methodology used for the study (the
details of which are provided in the next section of the report, the method section). At this point,
simply provide a snapshot of how the study was conducted so the reader is prepared for the
upcoming details. Also explain how the research strategy provides the information necessary
to address your hypothesis or research question.

If the introduction is well written, your readers will finish the final paragraphs with a clear
understanding of the problem you intend to address, the rationale that led to the problem, and
a basic understanding of how you answered the problem.

15.2.7 Method

The second major section of the body or the text of a research report is the method
section. The method section provides a relatively detailed description of exactly how the variables
were defined and measured and how the research study was conducted. Other researchers
should be able to read your method section and obtain enough information to determine whether
your research strategy adequately addresses the question you hope to answer. It also allows
other researchers to duplicate all of the essential elements of your research study. The method
section immediately follows the introduction. Do not start a new page. Instead, after the last line
of the introduction, on the next doublespaced line, type the word Method, centered and in
boldface. Usually, a method section is divided into two subsections: Subjects or Participants,
and Procedure. Each subsection heading is presented at the left margin in boldface with
uppercase and lowercase letters. The first major subsection of the method section is either the
subjects subsection (for nonhumans) or the participants subsection (for humans). This subsection
describes the sample that participated in the study. The second major subsection of the method
section is the procedure subsection. The procedure subsection provides a description of the
step-by-step process used to complete the study. If portions of your study are complex or
require detailed description, additional subsections can be added. One example is entitled
either Apparatus or Materials. This subsection describes any apparatus (equipment) or materials
(questionnaires and the like) used in the study. Occasionally, both subsections are included in
a research report. The materials subsection includes identification of the variables and how
they were operationalized; that is, how they were defined and measured. Each questionnaire
used in the study requires a description, a citation, and an explanation of its function in the
study (what it was used to measure). Also include information on the instrument’s psychometric
properties (evidence of reliability and validity). For a new questionnaire that you developed for
the purposes of your study, it is also necessary to provide a copy of the measure in an appendix.

15.2.8 Results

The third major section of the body or text of the research report is the results section.
The results section presents a summary of the data and the statistical analyses. The results
section immediately follows the method section. Do not start a new page. Instead, after the last
line of the method section, on the next double-spaced line, type Results, centered and in boldface.
The first paragraph in the results section is indented and begins on the next double-spaced line.
The results section simply provides a complete and unbiased reporting of the findings, just the
facts, with no discussion of the findings. Usually, a results section begins with a statement of
the primary outcome of the study, followed by the basic descriptive statistics (usually means
and standard deviations), then the inferential statistics (usually the results of hypothesis tests),
and finally the measures of effect size. If the study was relatively complex, it may be best to
summarize the data in a table or a figure. However, with only a few means and inferential tests,
it usually is more practical to report the results as text. Figures and tables are numbered (for
example, Table 1 or Figure 1), and are referred to by number in the text. Reports of statistical
significance should be made in a statement that identifies (1) the type of test used, (2) the
degrees of freedom, (3) the outcome of the test, (4) the level of significance, and (5) the size
and direction of the effect. When reporting the level of significance, you are encouraged to use
the exact probability value (as provided by most computer programs), or you may use a traditional
alpha level (.05, .01, .001) as a point of reference. The results section of a research report
presents a summary of the data and the statistical analysis.
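
As an illustration of the elements listed above, the following short Python sketch (assuming
NumPy and SciPy are available; the data are invented purely for illustration) runs an
independent-samples t test and assembles the test statistic, degrees of freedom, probability
value and effect size into an APA-style statement.

import numpy as np
from scipy import stats

group_a = np.array([12, 15, 14, 10, 13, 16, 11, 14])
group_b = np.array([9, 11, 10, 8, 12, 9, 10, 11])

t, p = stats.ttest_ind(group_a, group_b)      # independent-samples t test
df = len(group_a) + len(group_b) - 2          # degrees of freedom

# Cohen's d (effect size) using the pooled standard deviation
pooled_sd = np.sqrt(((len(group_a) - 1) * group_a.var(ddof=1) +
                     (len(group_b) - 1) * group_b.var(ddof=1)) / df)
d = (group_a.mean() - group_b.mean()) / pooled_sd

# Report the result to two decimal places, as recommended later in this lesson
print(f"t({df}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")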

15.2.9 Discussion

The fourth and final major section of the body or text of a research report is the discussion
section. In the discussion section, you offer interpretation, evaluation, and discussion of the
implications of your findings. The discussion section immediately follows the results section. Do
not start a new page; instead, after the last line of the results section, on the next double-
spaced line, type Discussion, centered and in boldface. The first paragraph of the discussion
section is indented and begins on the next double-spaced line. The discussion section should
begin with a restatement of the hypothesis. (Recall that your hypothesis is first presented at the
end of the introduction.) Next, briefly restate your major results, and indicate how they either
support or fail to support your primary hypothesis. Note that the results are described in a
sentence format without repeating all the numerical statistics that appear in the results section.
Next, relate your results to the work of others, explaining how your outcome fits into the existing
structure of knowledge of the area. It is also common to identify any limitations of the research,
especially factors that affect the generalization of the results. It can be helpful to think of the
discussion section as a mirror image of the introduction. Remember, the introduction moved
from general to specific, using items from the literature to focus on a specific hypothesis. Now,
in the discussion section, you begin with a specific hypothesis (your outcome) and relate it back
to the existing literature. Do not simply repeat statements from the introduction, but you may
find it useful to mention some of the same references you used earlier to make new points
relating your results to the other work. In the last paragraphs of the discussion section, you may
reach beyond the actual results and begin to consider their implications and/or applications.
Your results may support or challenge existing theories, suggest changes in practical, day-to-
day interactions, or indicate new interpretations of previous research results. Any of these is an
appropriate topic for a discussion section, and each can lead to new ideas for future research.

If your results support your original hypothesis, it is now possible to test the boundaries of your
findings by extending the research to new environments or different populations. If the research
results do not support your hypothesis, then more research is needed to find out why. It is
common, at the end of the discussion section, to pose problems that remain unsolved as the
result of the findings of the study. This never-ending process of asking questions, gathering
evidence, and asking new questions is part of the general scientific method. The answer to a
research question is always open to challenge. The discussion section of a research report
restates the hypothesis, summarizes the results, and then presents a discussion of the
interpretation, implications, and possible applications of the results.

Self-learning exercise

Download a research article and check whether all the subheadings mentioned above are
written as per the criteria described.

15.2.10 References

Beginning on a new page, with the centered title, References, the reference section
provides complete information about each item cited in the manuscript. Notice that there is a
precise one-to-one relationship between the items listed in the references and the items cited in
the paper. Each item cited must appear in the references, and each item in the references must
have been cited in the body of the report. The references are listed alphabetically by the last
name of the first author. One-author entries precede multiple-author entries beginning with the
same first author. References with the same author or authors in the same order are listed
chronologically from oldest to most recent publication date.

Unlike the method, results, and discussion sections, the references section starts on a
new page and consists of an alphabetized (by author) list of all of the sources cited in the lab
report. Each item uses a ‘‘hanging indent,’’ which makes it easier to find specific authors when
skimming through a reference page.

A journal article with one author

Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American
Psychologist, 64, 1–11.

Note: American Psychologist publishes twelve issues per year. A common mistake is to
include, along with the volume number (64, in this case), the issue number (1 in this case). It
would look like this: 64(1). However, the only time the issue number is used is when each issue
begins with page 1. Almost all journals number their pages consecutively throughout the volume
(issue 2 might start with page 154, for example), so adding the issue number adds unnecessary
information.

A journal article with more than one author

Hall, J. A., & Veccia, E. M. (1990). More ‘‘touching’’ observations: New insights on men,
women, and interpersonal touch. Journal of Personality and Social Psychology, 59, 1155–1162.

A magazine

Palmer, J. D. (1982, October). Biorhythm bunkum. Natural History, 90–97.

A book that is a first edition

Kimmel, A. J. (1996). Ethical issues in behavioral research: Basic and applied perspectives.
Malden, MA: Blackwell.

A book that is not a first edition

Kimmel, A. J. (2007). Ethical issues in behavioral research: Basic and applied perspectives
(2nd ed.). Malden, MA: Blackwell.

A chapter from an edited book

Weiss, J. M. (1977). Psychological and behavioral influences on gastrointestinal lesions
in animal models. In J. D. Maser & M. E. P. Seligman (Eds.), Psychopathology: Experimental
models (pp. 232–269). San Francisco: Freeman.
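
To see the pattern of a journal-article entry more concretely, here is a small, purely
illustrative Python helper (our own sketch, not an APA utility; the italics normally used for the
journal title and volume cannot be shown in a plain string) that assembles a reference of the
form used in the Burger (2009) example above.

def journal_reference(authors, year, title, journal, volume, pages):
    # Add a period after the title unless it already ends with terminal punctuation.
    if not title.endswith(('.', '?', '!')):
        title = title + '.'
    return f"{authors} ({year}). {title} {journal}, {volume}, {pages}."

print(journal_reference("Burger, J. M.", 2009,
                        "Replicating Milgram: Would people still obey today?",
                        "American Psychologist", 64, "1–11"))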

Electronic sources. Rules for citing references from websites, electronic databases, e-
mail, and so on, have been evolving in recent years and are frequently updated. For the most
recent set of guidelines, consult the page on APA’s website that is dedicated to this topic:
www.apastyle.org/elecref.html. One final point, and mistakes are often made here, is that before
turning in a lab report, you should check the citations in the body of your paper against the list
in the References section. Make sure that (a) every source mentioned in the text of your report
is given a listing in the References section of the paper, and (b) every reference listed in the
References section is cited somewhere in the text of your report.
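
The cross-check described above can also be done semi-automatically. The following rough
Python sketch (our own illustration; real citations with multiple authors or ‘et al.’ would need
more work) extracts simple ‘Author (Year)’ and ‘(Author, Year)’ citations from the text and
compares them with the first author and year of each reference entry.

import re

body = ("In a study by Smith (1990), helping behavior declined. "
        "Similar effects were reported earlier (Jones, 1985).")

references = [
    "Smith, A. B. (1990). Helping behavior in emergencies. Journal of Helping, 5, 1-10.",
    "Jones, C. D. (1985). Bystanders and emergencies. Social Psychology Review, 2, 11-20.",
]

# Citations written into the sentence, e.g. Smith (1990), and parenthetical ones, e.g. (Jones, 1985)
cited = set(re.findall(r"([A-Z][a-z]+)\s*\((\d{4})\)", body) +
            re.findall(r"\(([A-Z][a-z]+),\s*(\d{4})\)", body))

# First author surname and year from each entry in the reference list
listed = set()
for entry in references:
    match = re.match(r"([A-Z][a-z]+).*?\((\d{4})\)", entry)
    if match:
        listed.add((match.group(1), match.group(2)))

print("Cited but not listed:", cited - listed)   # should be empty
print("Listed but not cited:", listed - cited)   # should be empty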

Self-learning exercise

Collect data from a small sample, write a research article based on what you have studied
in Research Methodology I and Research Methodology II, and write the references properly
according to the format given above.

15.3 APA CITATION FORMAT

When reviewing past research related to the problem at hand, you will need to furnish
citations for the studies mentioned. Sometimes the author’s name will be part of the sentence
you are writing. If so, the date of publication follows the name and is placed in parentheses. For
example: In a study by Smith (1990), helping behavior declined ... If the author’s name is not
included in the sentence, the name, a comma, and the date are placed in parentheses. For
example: In an earlier study (Smith, 1990), helping behavior declined ... If a direct quote is
used, the page number is included. For example: Helping behavior declined when ‘‘participants
had difficulty determining whether the confederate had fallen onto the live wire’’ (Smith, 1990,
p. 23) or in the study by Smith (1990), helping behavior declined when ‘‘participants had difficulty
determining whether the confederate had fallen onto the live wire’’ (p. 23). Every work cited in
the introduction (or any other part of the manuscript) must be given a complete listing at the end
of the paper.
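
As a compact summary of these rules, here is a minimal illustrative Python helper (our own
sketch, not an APA utility) that produces the two in-text citation forms, with an optional page
number for a direct quotation.

def cite(author, year, narrative=False, page=None):
    # Narrative form: the author is part of the sentence, so only the year is bracketed.
    if narrative:
        return f"{author} ({year})"
    # Parenthetical form, with the page number added when a direct quote is used.
    page_part = f", p. {page}" if page is not None else ""
    return f"({author}, {year}{page_part})"

print(cite("Smith", 1990, narrative=True))   # Smith (1990)
print(cite("Smith", 1990))                   # (Smith, 1990)
print(cite("Smith", 1990, page=23))          # (Smith, 1990, p. 23)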

15.4 OVERALL WRITING STYLE

Clarity is essential since there is a great deal of information contained within a research
report. The material contained in the report should be geared to the major theme of the report.
This is particularly the case with the introduction, in which the research literature is reviewed. It
is a bad mistake to simply review research in the chosen field and fail to relate it to the particular
aspects addressed by your research. A number of stylistic points should be remembered:

•  Keep sentences short and as simple as possible. Sentences of eight to ten words
are probably the optimum.

•  A lack of paragraphing makes a report difficult to read. A paragraph should probably
be no more than about half a printed page. Equally, numerous one-sentence
paragraphs make the report incoherent and unreadable.

•  It is useful to use subheadings (as well as the conventional headings). Subheadings
indicate precisely what should appear under them, and you will benefit from a report
in which the material is in a meaningful order.

•  Make sure that your sentences are in a correct and logical order. It is easy to get
sentences slightly out of order. The same is true for your paragraphing.

•  It is normally inappropriate to use personal pronouns such as ‘I’ and ‘we’ in a research
report. However, care needs to be taken as this can lead to lengthy passive
sentences. In an effort to avoid ‘We gave the participants a questionnaire to complete.’
the result can be the following passive sentence: ‘Participants were given a
questionnaire to complete.’ It would be better to use a more active sentence structure
such as ‘Participants completed a questionnaire.’ This is shorter by far. In the active
sentence it is the subject that performs the action; for example, ‘We [subject] wrote
[verb] the report [object]’. In a passive sentence the subject suffers the action, as in
‘The report [subject] was written [verb]’.

•  The dominant tense in the research report is the past tense. This is because the
bulk of the report describes completed activities in the past (for example, ‘The
questionnaire measured two different components of loneliness.’).

•  Remember that the tables and diagrams included in the report need to communicate
as clearly and effectively as the text.

•  Avoid racist and sexist language, and other demeaning and otherwise offensive
language about minority groups.

•  Numbers are expressed as numerals (27, 3, 7, etc.) in most of the text except where
they occur as the first word of a sentence. In that case, we would write: ‘Twenty-seven
airline pilots and 35 cabin crew completed the alcoholism scale.’

•  It is a virtue to keep the report reasonably compact. Do not waffle or put in material
simply because you have it available. It is not desirable to exceed word limits so
sometimes material has to be omitted.

•  Do not include quotations from other authors except in those cases where it is
undesirable to omit them. This is particularly the case when one wishes to dispute
what a previous writer has written.

•  Generally, introductions are the longest section of a research report. Some authorities
suggest about a third of the available space should be devoted to the introduction.
Of course, adjustments have to be made according to circumstances.

•  A rule of thumb is to present the results of calculations to no more than two decimal
places. There is a danger of spuriously implying a greater degree of accuracy than
psychological data usually possess. Whatever you do, be consistent.

•  Psychological terms may not have a standard definition which is accepted by all
researchers. Consequently, you may find it necessary to define how you are using
terms in your report.

•  Layout: normally the recommendation is to double-space your work and word-process
it. However, check local requirements on this. Leave wide margins for comments.
Use underlining or bold for headings and subheadings.

15.5 SUMMARY

Even though the research report may seem simple, it needs much attention, as it helps
to maintain uniformity and clarity in communicating the results to the outside world. Despite
carrying out rigorous and meticulous research work, if the report is not written properly, its
significance is questionable. Moreover, uniformity brings integrity to research work. Therefore,
one needs to follow one particular format to report a research work. In psychology, we generally
follow the APA format.

15.6 KEYWORDS

Title: The title should be a concise statement that describes your study as accurately and
completely as possible.

Running head: The running head is a complete, but abbreviated, title that contains a
maximum of 50 characters, including spaces and punctuation.

Abstract: The abstract is a concise summary of the paper that focuses on what was done
and what was found.

Method: The method section provides a relatively detailed description of exactly how the
variables were defined and measured and how the research study was conducted.

15.7 CHECK YOUR PROGRESS

1. The first line of the title page is the _________

2. It is recommended that a title be no more than _________ in length.

3. The __________ listed is typically the individual who made the primary contribution to
the research.

4. The first major subsection of the method section is either the _________ subsection
(for nonhumans) or the _________ subsection (for humans).

5. Figures and tables are numbered, and are referred to by ________ in the text.

15.8 ANSWERS TO CHECK YOUR PROGRESS

1. Running head

2. 12 words

3. First author

4. Subjects and Participants

5. Number

15.9 MODEL QUESTIONS


1. Write the APA format of references for different sources.

2. Under each subheading of any research report, write the content to be written and
its importance.

REFERENCES

American Psychological Association. (2010). Publication manual of the American
Psychological Association. Washington, DC: American Psychological Association.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. London,
England: Routledge.

Cozby, P. C., & Bates, S. C. (2015). Methods in Behavioural Research (12th ed.). New
York, NY: McGraw Hill Education.

Goodwin, J. C. (2010). Research in Psychology: Methods and Design (6th ed.). Hoboken,
NJ: John Wiley & Sons.

Gravetter, F. J., & Forzano, L-A. B. (2012). Research Methods for the Behavioural Sciences
(4th ed.). Belmont, CA: Wadsworth Cengage Learning.

Howitt, D., & Cramer, D. (2011). Introduction to Research Methods in Psychology. Harlow,
Essex: Pearson Education.

Leary, M. R. (2001). Introduction to Behavioural Research (3rd ed.). Needham Heights,
MA: Allyn and Bacon.

Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2012). Research Methods
in Psychology (9th ed.). New York, NY: McGraw Hill Education.

MODEL QUESTION PAPER

MSC - PSYCHOLOGY

RESEARCH METHODOLOGY II

Time: 3 hours Max marks: 80


Part A (10 x 2 = 20)
Answer any TEN questions in 50 words each

1. What is quantitative data?

2. What is called operationalization?

3. What is meant by debriefing?

4. What is exploratory research?

5. Write a note on concept mapping.

6. What is theoretical sampling?

7. Write a note on time series design.

8. What is meant by trustworthiness?

9. State the difference between population and sample.

10. What is the use of regression analysis?

11. When do you use non-parametric statistics?

12. Write an example of a reference for a journal article with five authors.

Part B (5 x 5 = 25)

Answer any FIVE questions in 250 words each

13. Write an essay on when to use qualitative and quantitative research methods.

14. Delineate the history of qualitative and quantitative research in psychology.

15. How do you formulate hypotheses using exploratory research?

16. Write an essay on various methods or sources of data for qualitative research.

17. State the assumptions of one way and two way ANOVA.

18. Write a detailed essay on any two methods of qualitative data analysis.

19. Highlight the overall structure of a psychology report.

Part C (3 x 10 = 30)

Answer any THREE questions in 500 words each

20. Write the ten steps to carry out any qualitative research.

21. Write about the ethical guidelines and ethical codes according to APA.

22. Write in detail the four types of mixed methods research.

23. Write in detail about the single-subject design, with its advantages and disadvantages.

24. Compare and contrast the various probability sampling techniques.
