Introduction to Marketing Research

The function that links the consumer, the customer, and the
public to the marketer through INFORMATION

American Marketing Association

Information used to:

• Identify and define marketing opportunities and problems

• Generate, refine, and evaluate marketing actions

• Monitor marketing performance

• Improve understanding of marketing as a process
Marketing research is the systematic and objective
 identification,
 collection,
 analysis,
 dissemination,
 and use of information
for the purpose of improving decision making related to the
 identification and
 solution of problems and opportunities in marketing.
Classification of Marketing Research

Problem-Identification Research
• Research undertaken to help identify problems which are
not necessarily apparent on the surface and yet exist or
are likely to arise in the future. Examples: market
potential, market share, image etc.

Problem-Solving Research
• Research undertaken to help solve specific marketing
problems. Examples: segmentation, product, pricing,
promotion, and distribution research.
The Role of Marketing Research

Marketing research assesses information needs, provides the relevant
information, and supports marketing decision making. It links the
following elements:

Customer Groups
• Consumers
• Employees
• Shareholders
• Suppliers

Controllable Marketing Variables
• Product
• Pricing
• Promotion
• Distribution

Uncontrollable Environmental Factors
• Economy
• Technology
• Laws & Regulations
• Social & Cultural Factors
• Political Factors

Marketing Managers
• Market Segmentation
• Target Market Selection
• Marketing Programs
• Performance & Control
Marketing Research Process
Step 1 : Problem Definition

Step 2 : Development of an Approach to the Problem

Step 3 : Research Design Formulation

Step 4 : Fieldwork or Data Collection

Step 5 : Data Preparation and Analysis

Step 6 : Report Preparation and Presentation


Defining the Problem

•Discussions with Decision Makers

•Interviews with Industry Experts

•Secondary Data Analysis

•Qualitative Research
Environmental Context of the Problem

•Past Information and Forecasts

•Resources and Constraints

•Objectives

•Buyer Behaviour

•Legal Environment

•Economic Environment

•Marketing and Technological Skills


• Management Decision Problem and Marketing
Research Problem

• Approach to the Problem: Components


-Objective/ Theoretical Foundations
-Analytical Model
-Research Questions
-Hypotheses
-Specification of Information Needed
Research Questions and Hypotheses

• Research questions (RQs) are refined


statements of the specific components of
the problem.
• A hypothesis (H) is an unproven
statement or proposition about a factor
or phenomenon that is of interest to the
researcher. Often, a hypothesis is a
possible answer to the research question.
Research Design

A research design is a framework or blueprint


for conducting the marketing research project.
It details the procedures necessary for
obtaining the information needed to structure
or solve marketing research problems.
Components of a Research Design
• Define the information needed

• Design the exploratory, descriptive, and/or causal


phases of the research

• Specify the measurement and scaling procedures

• Construct and pretest a questionnaire or an


appropriate form for data collection

• Specify the sampling process and sample size

• Develop a plan of data analysis


Classification of Marketing Research Designs

Research Design
• Exploratory Research Design
• Conclusive Research Design
  - Descriptive Research
    - Cross-Sectional Design
      - Single Cross-Sectional Design
      - Multiple Cross-Sectional Design
    - Longitudinal Design
  - Causal Research
Data Sources
Primary Vs. Secondary Data

• Primary data are originated by a researcher for


the specific purpose of addressing the problem at
hand.
• Secondary data are data that have already been
collected for purposes other than the problem at
hand. These data can be located quickly and
inexpensively.
Criteria for Evaluating Secondary Data
i. Specifications: Methodology used to collect the
data
ii. Error: Accuracy of the data
iii. Currency: When the data were collected
iv. Objective(s): The purpose for which the data
were collected
v. Nature: The content of the data
vi. Dependability: Overall dependability of the data
Classification of Secondary Data

1. Internal Secondary Data

2. Published External Secondary Sources

3. Computerized Databases

4. Syndicate Sources
Uses of Secondary Data

• Identify the problem


• Better define the problem
• Develop an approach to the problem
• Formulate an appropriate research design
• Answer certain research questions and test some
hypotheses
• Interpret primary data more insightfully
Exploratory Research Design:
Qualitative Research
A Classification of Qualitative Research
Procedures
Qualitative Research Procedures
• Direct (Non-disguised)
  - Focus Groups
  - Depth Interviews
• Indirect (Disguised)
  - Projective Techniques
    - Association Techniques
    - Completion Techniques
    - Construction Techniques
    - Expressive Techniques
Focus Groups

• Goal of focus group research: learn and understand what people have
to say and why.
• Find out how participants feel about a product, concept, idea,
organization, etc.;
• How it fits into their lives;
• Their emotional involvement with it.

• May be conducted alone or as part of a broader project.

• May be used to define issues or to confirm findings from survey
research.
Characteristics of Focus Groups

Group Size: 8-12

Group Composition: Homogeneous; respondents prescreened

Physical Setting: Relaxed, informal atmosphere

Time Duration: 1-3 hours

Recording: Use of audiocassettes and videotapes

Moderator: Observational, interpersonal, and communication skills of
the moderator
Key Qualifications of Focus Group Moderators

1. Kindness with firmness


2. Permissiveness
3. Involvement
4. Encouragement
5. Flexibility
6. Sensitivity
Requirements for Focus Groups
• Good group of information-rich participants
• How many people?
• How many groups?
• Characteristics of participants

• Discussion guide and outline


• Ground rules
• Agenda
• Guiding questions

• Qualified Moderator
• Controls flow
• Stimulates discussion

• Analysis and Report


Variations in Focus Groups
• Two-way focus group: This allows one target group to listen
to and learn from a related group. For example, a focus
group of physicians viewing a focus group of arthritis
patients discussing the treatment they desire.

• Dual-moderator group: A focus group conducted by two


moderators: One moderator is responsible for the smooth
flow of the session, and the other ensures that specific
issues are discussed.

• Dueling-moderator group: There are two moderators, but


they deliberately take opposite positions on the issues to be
discussed.

• Respondent-moderator group: The moderator asks selected


participants to play the role of moderator temporarily to
improve group dynamics.
• Advantages and Disadvantages

• Online versus Traditional Focus Groups


Depth Interviews

• One-on-one interviews that probe and elicit


detailed answers to questions, often using
nondirective techniques to uncover hidden
motivations.

• Advantages
• No group pressure
• Respondent is focus of attention and feels important
• Respondent is highly aware and active
• Long time period encourages revealing new information
• Can probe to reveal feelings and motivations
• Discussion is flexible
Depth Interview Techniques
Laddering:
In laddering, the line of questioning proceeds from product
characteristics to user characteristics. This technique allows
the researcher to tap into the consumer's network of
meanings.

Hidden Issue Questioning:


In hidden issue questioning, the focus is on personal "sore spots":
not on general lifestyles, but on deeply felt personal concerns.

Symbolic Analysis:
Symbolic analysis attempts to analyze the symbolic
meaning of objects by comparing them with their opposites.
The logical opposites of a product that are investigated are:
non-usage of the product, attributes of an imaginary “non-
product,” and opposite types of products.
Definition of Projective Techniques

• An unstructured, indirect form of questioning that


encourages respondents to project their underlying
motivations, beliefs, attitudes or feelings regarding the
issues of concern.
• In projective techniques, respondents are asked to
interpret the behavior of others.
• In interpreting the behavior of others, respondents
indirectly project their own motivations, beliefs,
attitudes, or feelings into the situation.
Projective Techniques

(1) Word association


(2) Completion techniques
(3) Construction
(4) Expressive techniques
Advantages of Projective Techniques

• They may elicit responses that subjects would be


unwilling or unable to give if they knew the purpose of
the study.

• Helpful when the issues to be addressed are personal,


sensitive, or subject to strong social norms.

• Helpful when underlying motivations, beliefs, and


attitudes are operating at a subconscious level.
Disadvantages of Projective Techniques

• Suffer from many of the disadvantages of unstructured


direct techniques, but to a greater extent.
• Require highly-trained interviewers.
• Skilled interpreters are also required to analyze the
responses.
• There is a serious risk of interpretation bias.
• They tend to be expensive.
• May require respondents to engage in unusual behavior.
Guidelines for Using Projective Techniques

• Projective techniques should be used because the


required information cannot be accurately obtained by
direct methods.
• Projective techniques should be used for exploratory
research to gain initial insights and understanding.
• Given their complexity, projective techniques should
not be used naively.
Analysis of Qualitative Data

• Data reduction – Select which aspects of the data are


to be emphasized, minimized, or set aside for the
project at hand.

• Data display – Develop a visual interpretation of the


data with the use of such tools as a diagram, chart, or
matrix. The display helps to illuminate patterns and
interrelationships in the data.

• Conclusion drawing and verification – Consider the


meaning of analyzed data and assess its implications
for the research question at hand.
Descriptive Research Design: Survey and
Observation
A Classification of Survey Methods

Survey Methods
• Telephone
  - Traditional Telephone Interview
  - Computer-Assisted Telephone Interviewing
• Personal
  - In-Home
  - Mall Intercept
  - Computer-Assisted Personal Interviewing
• Mail
  - Mail Interview
  - Mail Panel
• Electronic
  - E-mail
  - Internet
Criteria for Evaluating Survey Methods
1. Task Factors
Diversity of Questions and Flexibility of Data Collection
• This depends upon the degree of interaction the respondent
has with the interviewer and the questionnaire, as well as the
ability to actually see the questions.
Use of Physical Stimuli
• The ability to use physical stimuli such as the product, a
product prototype, commercials, or promotional displays during
the interview.
Sample Control
• Sample control is the ability of the survey mode to reach the
units specified in the sample effectively and efficiently.
Quantity of Data
• The ability to collect large amounts of data.
Response Rate
• Survey response rate is broadly defined as the percentage of
the total attempted interviews that are completed.
2. Situational Factors

Control of the Data Collection Environment


• The degree of control a researcher has over the environment in
which the respondent answers the questionnaire.
Control of Field Force
• The ability to control the interviewers and supervisors involved in
data collection.
Potential for Interviewer Bias
• The extent of the interviewer's role determines the potential for bias.
Speed
• The total time taken for administering the survey to the entire
sample.
Cost
• The total cost of administering the survey and collecting the data.
3. Respondent Factors

Perceived Anonymity
• Respondents' perceptions that their identities will not be
discerned by the interviewer or the researcher.
Social Desirability/Sensitive Information
• Tendency of the respondents to give answers that are socially
acceptable, whether or not they are true.
Low Incidence Rate
• Incidence rate refers to the rate of occurrence of persons eligible
to participate in the study.
Respondent Control
• Methods that allow respondents control over the interviewing
process will solicit greater cooperation and are therefore
desirable.
Observation Methods

1. Structured Versus Unstructured Observation: For structured


observation, the researcher specifies in detail what is to be
observed and how the measurements are to be recorded. In
unstructured observation, the observer monitors all aspects of
the phenomenon that seem relevant to the problem at hand.
2. Disguised Versus Undisguised Observation: In disguised
observation, the respondents are unaware that they are being
observed. In undisguised observation, the respondents are
aware that they are under observation.
3. Natural Versus Contrived Observation: Natural observation
involves observing behavior as it takes places in the
environment. In contrived observation, respondents' behavior is
observed in an artificial environment.
Classification of Observation Methods

• Personal: A researcher observes actual behavior as it occurs and
merely records what takes place, e.g., recording traffic counts or
observing traffic flow in a department store.

• Mechanical: Do not require respondents' direct participation or


involvement. Use on-site cameras, eye-tracking monitors, optical
scanners etc.

• Audit: Examining physical records or performing inventory analysis,


data are based upon counts, usually of physical objects.

• Content Analysis: Involves description of the manifest content of a


communication. The unit of analysis may be words, characters,
themes etc.

• Trace Analysis: Data collection is based on physical traces, or


evidence, of past behavior. The number of different fingerprints on
a page was used to gauge the readership of various
advertisements in a magazine.
Relative Advantages of Observation

• They permit measurement of actual behavior rather than


reports of intended or preferred behavior.

• There is no reporting bias, and potential bias caused by


the interviewer and the interviewing process is eliminated
or reduced.

• Certain types of data can be collected only by observation.

• If the observed phenomenon occurs frequently or is of


short duration, observational methods may be cheaper
and faster than survey methods.
Relative Disadvantages of Observation

• The reasons for the observed behavior may not be determined


since little is known about the underlying motives, beliefs,
attitudes, and preferences.
• Selective perception can bias the data.
• Time-consuming and expensive, and it is difficult to observe
certain forms of behavior.
• In some cases, the use of observational methods may be
unethical, as in observing people without their knowledge or
consent.
• It is best to view observation as a complement to survey
methods, rather than as being in competition with them.
Causal Research Design: Experimentation
Conditions for Causality

• Concomitant variation is the extent to which a cause,


X, and an effect, Y, occur together or vary together in
the way predicted by the hypothesis under
consideration.
• The time order of occurrence condition states that the
causing event must occur either before or
simultaneously with the effect; it cannot occur
afterwards.
• The absence of other possible causal factors means
that the factor or variable being investigated should
be the only possible causal explanation.
Experimental Design

An experimental design is a set of procedures


specifying:

 the test units and how these units are to be divided into
homogeneous subsamples,
 what independent variables or treatments are to be
manipulated,
 what dependent variables are to be measured; and
 how the extraneous variables are to be controlled.
Validity in Experimentation

• Internal validity refers to whether the manipulation of


the independent variables or treatments actually
caused the observed effects on the dependent
variables. Control of extraneous variables is a
necessary condition for establishing internal validity.

• External validity refers to whether the cause-and-effect


relationships found in the experiment can be
generalized. To what populations, settings, times,
independent variables, and dependent variables can
the results be projected?
A Classification of Experimental Designs

Experimental Designs
• Pre-experimental
  - One-Shot Case Study
  - One-Group Pretest-Posttest
  - Static Group
• True Experimental
  - Pretest-Posttest Control Group
  - Posttest-Only Control Group
  - Solomon Four-Group Design
• Quasi-Experimental
  - Time Series
  - Multiple Time Series
• Statistical
  - Randomized Blocks
  - Latin Square
  - Factorial Design
Limitations of Experimentation

• Experiments can be time consuming, particularly if the


researcher is interested in measuring the long-term
effects.
• Experiments are often expensive. The requirements of
experimental group, control group, and multiple
measurements significantly add to the cost of research.
• Experiments can be difficult to administer. It may be
impossible to control for the effects of the extraneous
variables, particularly in a field environment.
• Competitors may deliberately contaminate the results of
a field experiment.
Measurement and Scaling:
Fundamentals and
Comparative Scaling
Measurement and Scaling

Measurement means assigning numbers or other


symbols to characteristics of objects according to
certain pre-specified rules.
Scale Characteristics

Description: Unique labels or descriptors that are used to


designate each value of the scale. All scales possess
description.

Order: Relative sizes or positions of the descriptors. Order


is denoted by descriptors such as greater than, less than,
and equal to.

Distance: i.e. the absolute differences between the


scale descriptors are known and may be expressed
in units.

Origin: The scale has a unique or fixed beginning or


true zero point.
Primary Scales of Measurement

• Nominal Scale
• Ordinal Scale
• Interval Scale
• Ratio Scale
A Comparison of Scaling Techniques

• Comparative scales involve the direct comparison


of stimulus objects. Comparative scale data must be
interpreted in relative terms and have only ordinal or
rank order properties.

• In non-comparative scales, each object is scaled


independently of the others in the stimulus set. The
resulting data are generally assumed to be interval
or ratio scaled.
Comparative Scaling

• Paired Comparison
• Rank Order
• Constant Sum
Non-comparative Scaling

• Respondents evaluate only one object at a time,


and for this reason non-comparative scales are
often referred to as monadic scales.

• Non-comparative techniques consist of continuous


and itemized rating scales.

• The commonly used itemized rating scales are the


Likert, semantic differential, and Stapel scales.
Scale Evaluation: Reliability and Validity

• Reliability can be defined as the extent to which


measures are free from random error
• Internal consistency reliability determines the extent to
which different parts of a summated scale are consistent
in what they indicate about the characteristic being
measured.
• In split-half reliability, the items on the scale are divided
into two halves and the resulting half scores are
correlated.
• The coefficient alpha, or Cronbach's alpha, is the
average of all possible split-half coefficients resulting
from different ways of splitting the scale items.
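As a rough illustration, coefficient alpha can be computed directly from its definition; the sketch below uses a made-up dataset of five respondents rating four scale items and is not a production implementation:

```python
# Illustrative sketch: Cronbach's alpha for a summated scale.
# Rows = respondents, columns = scale items (hypothetical data).

def cronbach_alpha(scores):
    k = len(scores[0])   # number of items
    n = len(scores)      # number of respondents

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    # Variance of each item across respondents
    item_vars = [variance([row[j] for row in scores]) for j in range(k)]
    # Variance of the total (summated) score
    total_var = variance([sum(row) for row in scores])

    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(scores), 3))
```

Because the items here move together across respondents, the computed alpha is high; uncorrelated items would drive it toward zero.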
Validity
• The validity of a scale may be defined as the extent to
which differences in observed scale scores reflect true
differences among objects on the characteristic being
measured, rather than systematic or random error.
• Content validity is a subjective but systematic evaluation
of how well the content of a scale represents the
measurement task at hand.
• Criterion validity reflects whether a scale performs as
expected in relation to other variables selected (criterion
variables) as meaningful criteria.
• Construct validity addresses the question of what
construct or characteristic the scale is, in fact,
measuring. Construct validity includes convergent,
discriminant, and nomological validity.
• Convergent validity is the extent to which the scale
correlates positively with other measures of the same
construct.
• Discriminant validity is the extent to which a measure
does not correlate with other constructs from which it is
supposed to differ.
• Nomological validity is the extent to which the scale
correlates in theoretically predicted ways with measures
of different but related constructs.
Relationship Between Reliability and Validity

• If a measure is perfectly valid, it is also perfectly reliable.


• If a measure is unreliable, it cannot be perfectly valid.
Thus, unreliability implies invalidity.
• If a measure is perfectly reliable, it may or may not be
perfectly valid, because systematic error may still be
present.
• Reliability is a necessary, but not sufficient condition for
validity.
Questionnaire Design
Questionnaire Design Process
Specify the Information Needed

Specify the Type of Interviewing Method

Determine the Content of Individual Questions

Design the Question to Overcome the Respondent’s Inability and


Unwillingness to Answer

Decide the Question Structure

Determine the Question Wording

Arrange the Questions in Proper Order

Identify the Form and Layout

Reproduce the Questionnaire

Pre-testing
Key Considerations

• Is the Question Necessary?


• Double-barreled question- To obtain the required
information, two distinct questions should be asked.
• Overcoming inability to answer/ articulate- A ‘don't
know’ option appears to reduce uninformed responses
without reducing the response rate.
• Overcoming unwillingness to answer- Context, legitimate
purpose, sensitive information
Question Structure

• Unstructured questions
e.g. What is your occupation?

• Structured
Multiple choice or Dichotomous
Question Wording

• Define the issue in terms of who, what, when, where, why,


and way (the six Ws).

• Use simple/ ordinary and unambiguous words.

• Avoid leading questions.

• Avoid implicit assumptions, generalizations and estimates.


Determining the Order of Questions
Opening Questions
• The opening questions should be interesting, simple, and
non-threatening.

Type of Information
• Basic information should be obtained first, followed by
classification, and, finally, identification information.

Difficult Questions
• Difficult questions or questions which are sensitive,
embarrassing, complex, or dull, should be placed late in
the sequence.

Effect on Subsequent Questions


• General questions should precede the specific questions
(funnel approach).
Form and Layout

• Divide a questionnaire into several parts.

• Numbered questions

• Pre-coding

• Pre-testing the questionnaire


Sampling: Design and Procedures
The Sampling Design Process

Define the Population

Determine the Sampling Frame

Select Sampling Technique(s)

Determine the Sample Size

Execute the Sampling Process


Classification of Sampling Techniques

Sampling Techniques
• Nonprobability Sampling Techniques
  - Convenience Sampling
  - Judgmental Sampling
  - Quota Sampling
  - Snowball Sampling
• Probability Sampling Techniques
  - Simple Random Sampling
  - Systematic Sampling
  - Stratified Sampling
  - Cluster Sampling
  - Other Sampling Techniques
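Three of the probability techniques can be sketched in a few lines; the sampling frame, strata, and sample sizes below are hypothetical:

```python
import random

random.seed(7)  # reproducible illustration

population = list(range(1, 101))  # hypothetical sampling frame of 100 units
n = 10

# Simple random sampling: every unit has an equal chance of selection.
srs = random.sample(population, n)

# Systematic sampling: pick a random start, then take every k-th unit.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: divide the frame into strata, then sample each.
strata = {"light users": population[:60], "heavy users": population[60:]}
stratified = {name: random.sample(units, n // 2) for name, units in strata.items()}

print(srs, systematic, stratified, sep="\n")
```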
Choosing Nonprobability vs. Probability Sampling

Conditions favoring the use of nonprobability vs. probability sampling:

• Nature of research: Exploratory (nonprobability) vs. Conclusive
(probability)
• Relative magnitude of sampling and nonsampling errors: Nonsampling
errors are larger (nonprobability) vs. Sampling errors are larger
(probability)
• Variability in the population: Homogeneous, low (nonprobability) vs.
Heterogeneous, high (probability)
• Statistical considerations: Unfavorable (nonprobability) vs.
Favorable (probability)
• Operational considerations: Favorable (nonprobability) vs.
Unfavorable (probability)
Sample Size Determination

“How large a sample do I need?”


Sample Size Calculation when Estimating Means

• Continuous or Interval-scaled Variables


• Formula:
n = (Zs/e)²

Where,
Z: represents the Z score from the standard normal
distribution for the confidence level desired by the
researcher.
(e.g. for 90% confidence level, Z score is 1.645,
for 95% confidence level, Z score would be 1.96
for 99% confidence level, Z is 2.58)
s: represents the population standard deviation for the
variable which we are trying to measure from the study.
e: is the tolerable error in estimating the variable in
question.
(Lower the tolerance, higher will be the sample size)
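A minimal sketch of this calculation; the confidence level, standard deviation, and tolerable error below are illustrative:

```python
import math

def sample_size_for_mean(z, s, e):
    """n = (Z*s/e)^2, rounded up to the next whole respondent."""
    return math.ceil((z * s / e) ** 2)

# Hypothetical example: estimate a mean with 95% confidence (Z = 1.96),
# assuming a population standard deviation of 30 and a tolerable
# error of 5 units.
print(sample_size_for_mean(1.96, 30, 5))  # (1.96*30/5)^2 = 138.3 -> 139
```

Halving the tolerable error e quadruples the required sample size, which is why the tolerance chosen dominates the cost of the study.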
Sample Size Calculation when Estimating Proportions

• Variable being estimated is a proportion or a percentage


• Formula:
n = pq(Z/e)²

Where,
p: represents the frequency of occurrence of something
expressed as proportion.
q: represents the frequency of non-occurrence of something
expressed as proportion.
Z: represents the Z score from the standard normal distribution
for the confidence level desired by the researcher.
e: is the tolerable error in estimating the variable in question.
(Lower the tolerance, higher will be the sample size)
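A corresponding sketch for proportions, using the conservative assumption p = 0.5 (which maximizes pq); all inputs are illustrative:

```python
import math

def sample_size_for_proportion(z, p, e):
    """n = p*q*(Z/e)^2 with q = 1 - p, rounded up."""
    q = 1 - p
    return math.ceil(p * q * (z / e) ** 2)

# Hypothetical example: estimate a market share assumed near p = 0.5
# with 95% confidence (Z = 1.96) and a tolerable error of 5 points.
print(sample_size_for_proportion(1.96, 0.5, 0.05))
```

This yields the familiar "roughly 385 respondents" figure often quoted for a ±5%, 95%-confidence survey.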
Improving Response Rates

Methods of Improving Response Rates
• Reducing Refusals
  - Prior Notification
  - Motivating Respondents
  - Incentives
  - Questionnaire Design and Administration
  - Follow-Up
  - Other Facilitators
• Reducing Not-at-Homes
  - Callbacks
Adjusting for Nonresponse

• Subsampling of Nonrespondents – the researcher


contacts a subsample of the nonrespondents, usually by
means of telephone or personal interviews.

• In replacement, the nonrespondents in the current survey


are replaced with nonrespondents from an earlier, similar
survey. The researcher attempts to contact these
nonrespondents from the earlier survey and administer
the current survey questionnaire to them, possibly by
offering a suitable incentive.

• In substitution, the researcher substitutes, for nonrespondents,
other elements from the sampling frame that are expected to respond.
• Subjective Estimates – When it is no longer feasible to
increase the response rate by subsampling, replacement,
or substitution, it may be possible to arrive at subjective
estimates of the nature and effect of nonresponse bias.
This involves evaluating the likely effects of nonresponse
based on experience and available information.

• Trend analysis is an attempt to discern a trend between


early and late respondents. This trend is projected to
nonrespondents to estimate where they stand on the
characteristic of interest.
• Weighting attempts to account for non-response by
assigning differential weights to the data depending on
the response rates.

• Imputation involves imputing, or assigning, the


characteristic of interest to the non-respondents based
on the similarity of the variables available for both non-
respondents and respondents. For example, a
respondent who does not report brand usage may be
imputed the usage of a respondent with similar
demographic characteristics.
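A minimal sketch of this kind of imputation, assuming hypothetical demographic fields and brand-usage values (the matching rule here, a majority vote among respondents with an identical profile, is one simple choice among many):

```python
# Illustrative sketch of imputation: a nonrespondent's missing brand
# usage is assigned from respondents with matching demographics.
# All field names and values below are hypothetical.

respondents = [
    {"age_group": "18-24", "income": "low",  "brand_usage": "Brand A"},
    {"age_group": "25-34", "income": "high", "brand_usage": "Brand B"},
    {"age_group": "18-24", "income": "low",  "brand_usage": "Brand A"},
]

nonrespondent = {"age_group": "18-24", "income": "low", "brand_usage": None}

def impute_usage(target, donors, keys=("age_group", "income")):
    # Find donors whose profile matches the target on all keys.
    matches = [d for d in donors if all(d[k] == target[k] for k in keys)]
    if not matches:
        return target  # no similar respondent; leave the value missing
    # Assign the most common usage among the matching donors.
    usages = [m["brand_usage"] for m in matches]
    target = dict(target)
    target["brand_usage"] = max(set(usages), key=usages.count)
    return target

print(impute_usage(nonrespondent, respondents)["brand_usage"])
```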
Fieldwork

Selecting Field Workers

Training Field Workers

Supervising Field Workers

Validating Fieldwork

Evaluating Field Workers


Data Preparation

Prepare Preliminary Plan of Data Analysis

Check Questionnaire

Edit

Code

Transcribe

Clean Data

Statistically Adjust the Data

Select Data Analysis Strategy


Questionnaire Checking
A questionnaire returned from the field may be unacceptable
for several reasons.
• Parts of the questionnaire may be incomplete.
• The pattern of responses may indicate that the
respondent did not understand or follow the
instructions.
• The responses show little variance.
• One or more pages are missing.
• The questionnaire is received after the pre-established
cutoff date.
• The questionnaire is answered by someone who does
not qualify for participation.
Editing

Treatment of Unsatisfactory Results


• Returning to the Field – The questionnaires with
unsatisfactory responses may be returned to the
field, where the interviewers recontact the
respondents.
• Assigning Missing Values – If returning the
questionnaires to the field is not feasible, the editor
may assign missing values to unsatisfactory
responses.
• Discarding Unsatisfactory Respondents – In
this approach, the respondents with unsatisfactory
responses are simply discarded.
Coding

Coding means assigning a code, usually a number, to


each possible response to each question. The code
includes an indication of the column position (field) and
data record it will occupy.
Data Cleaning: Treatment of Missing Responses

• Substitute a Neutral Value – A neutral value, typically


the mean response to the variable, is substituted for the
missing responses.
• Substitute an Imputed Response – The respondents'
pattern of responses to other questions are used to impute
or calculate a suitable response to the missing questions.
• In casewise deletion, cases, or respondents, with any
missing responses are discarded from the analysis.
• In pairwise deletion, instead of discarding all cases with
any missing values, the researcher uses only the cases or
respondents with complete responses for each calculation.
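Two of these treatments can be sketched on a hypothetical variable, where None marks a missing answer:

```python
# Sketch of two missing-response treatments (hypothetical ratings data).

ratings = [4, None, 5, 3, None, 4]

# Substitute a neutral value: replace each missing response with the
# mean of the observed responses.
observed = [r for r in ratings if r is not None]
mean_rating = sum(observed) / len(observed)
mean_substituted = [r if r is not None else mean_rating for r in ratings]

# Casewise deletion: discard any case with a missing value entirely.
casewise = [r for r in ratings if r is not None]

print(mean_substituted)
print(casewise)
```

Mean substitution preserves the sample size but shrinks the variance of the variable; casewise deletion preserves the observed distribution but reduces the sample.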
Statistically Adjusting the Data – Variable Re-specification

• Variable re-specification involves the transformation of


data to create new variables or modify existing variables.

e.g., the researcher may create new variables that are


composites of several other variables.
Statistically Adjusting the Data –
Scale Transformation and Standardization

Scale transformation involves a manipulation of scale


values to ensure comparability with other scales or
otherwise make the data suitable for analysis.

A more common transformation procedure is
standardization. Standardized scores, Zi, may be
obtained as: Zi = (Xi - X̄)/sx, where X̄ is the sample mean
and sx the sample standard deviation.
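A minimal sketch of standardization, computing the sample mean and sample standard deviation from illustrative data:

```python
# Sketch of standardization: Zi = (Xi - mean) / standard deviation,
# so the transformed variable has mean 0 and standard deviation 1.

def standardize(values):
    n = len(values)
    mean = sum(values) / n
    sd = (sum((x - mean) ** 2 for x in values) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in values]

scores = [10, 20, 30, 40, 50]  # hypothetical raw scores
z = standardize(scores)
print([round(v, 3) for v in z])
```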
Data Analysis

Univariate Techniques
• Metric Data
  - One Sample: t test, Z test
  - Two or More Samples
    - Independent: t test, Z test, One-Way ANOVA
    - Related: Paired t test
• Non-numeric Data
  - One Sample: Frequency, Chi-Square, Binomial
  - Two or More Samples
    - Independent: Chi-Square, Mann-Whitney, Median, K-S, K-W ANOVA
    - Related: Sign, Wilcoxon, McNemar, Chi-Square
Multivariate Techniques
• Dependence Techniques
  - One Dependent Variable: Cross-Tabulation, Analysis of Variance
    and Covariance, Multiple Regression, Conjoint Analysis
  - More Than One Dependent Variable: Canonical Correlation,
    Discriminant Analysis, SEM
• Interdependence Techniques
  - Variable Interdependence: Factor Analysis, CFA
  - Interobject Similarity: Cluster Analysis, Multidimensional Scaling
Steps Involved in Hypothesis Testing

1. Formulate H0 and H1
2. Select an appropriate test
3. Choose the level of significance, α
4. Collect data and calculate the test statistic
5. Determine the probability associated with the test statistic, or
determine the critical value of the test statistic, TSCR
6. Compare the probability with the level of significance, α, or
determine whether TSCAL falls into the (non)rejection region
7. Reject or do not reject H0
8. Draw the marketing research conclusion


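The steps above can be walked through on a one-sample t-test (SciPy; the ratings and the α of 0.05 are hypothetical):

```python
import numpy as np
from scipy import stats

# Step 1: H0: mean rating = 4.0 vs. H1: mean rating != 4.0
# Step 2: one metric sample, unknown variance -> one-sample t-test
# Step 3: choose the level of significance
alpha = 0.05

# Step 4: collect data and calculate the test statistic
ratings = np.array([4.5, 5.0, 3.8, 4.9, 5.2, 4.7, 4.4, 5.1])
t_stat, p_value = stats.ttest_1samp(ratings, popmean=4.0)

# Steps 5-7: compare the probability with alpha and decide
decision = "reject H0" if p_value < alpha else "do not reject H0"

# Step 8: draw the marketing research conclusion from the decision
print(round(t_stat, 3), round(p_value, 4), decision)
```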
Summary: Preliminary Analysis of Data

• Metric Data
 Data based on interval or ratio scales
 Parametric tests are used
• Non-Metric Data
 Data based on nominal or ordinal scales
 Non-parametric tests are used
Tests for Data Analysis

• t-test and z-test
 Most commonly used to test for differences,
especially for one or two samples
 One-sample tests – single sample, to test the
hypothesis that the sample is drawn from a
specified population
 The t-distribution has heavier tails and a lower
peak than the normal distribution
 For samples of size more than 120, the t-
distribution and the normal distribution are
almost identical.
• Cross-tabulation
 Used for measurement of observations
occurring simultaneously in the data categories of two
or more variables
 Determines the association and the strength of
association between two variables

• Chi-Square
 For measuring the significance of association
between two variables when the data are
expressed as frequencies.
 Determining ‘goodness of fit’, i.e. how well the
observed data ‘fits’ the expected distribution of
frequencies.
 Test of independence- to test if the variation in
the values of one has no bearing on the values of
the other.
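Both ideas can be sketched together in Python (pandas and SciPy; the gender/brand data are hypothetical and deliberately tiny):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical respondents: gender vs. brand preference
df = pd.DataFrame({
    "gender": ["M", "M", "F", "F", "M", "F", "M", "F", "M", "F"],
    "brand":  ["A", "A", "B", "B", "A", "B", "B", "A", "A", "B"],
})

# Cross-tabulation: joint frequency counts of the two variables
table = pd.crosstab(df["gender"], df["brand"])
print(table)

# Chi-square test of independence on the observed frequencies
chi2, p_value, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p_value, 3), dof)
```

In a real study the cell counts would need to be much larger: expected frequencies below 5 make the chi-square approximation unreliable.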
• Correlation
 Used when two variables are metric and have
similar distributions.

• Multiple Regression
 To determine the strength of the relationship
between a single dependent variable and two or
more independent variables, and to test the
significance of the association
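A sketch of both techniques on simulated data (NumPy only; the variable names and coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: sales driven by advertising and price
n = 100
advertising = rng.uniform(10, 50, n)
price = rng.uniform(5, 15, n)
sales = 20 + 2.0 * advertising - 3.0 * price + rng.normal(0, 5, n)

# Correlation between two metric variables
r = np.corrcoef(advertising, sales)[0, 1]

# Multiple regression: one dependent variable (sales) on two
# independent variables, via ordinary least squares
X = np.column_stack([np.ones(n), advertising, price])
(intercept, b_adv, b_price), *_ = np.linalg.lstsq(X, sales, rcond=None)
print(round(r, 2), round(b_adv, 2), round(b_price, 2))
```

The fitted coefficients recover the values used to generate the data (about 2 for advertising and -3 for price), which is the sense in which regression measures the strength of each relationship.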

• Analysis of variance
 For testing the difference between the means of
more than two populations
 Dependent variable is metric, independent
variables are all non-metric (usually nominal)
 If independent variables include nominal as well
as metric variables, technique is called ANCOVA
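A one-way ANOVA sketch (SciPy; three hypothetical ad-copy treatment groups with purchase-intent ratings):

```python
from scipy import stats

# Metric dependent variable (purchase intent) under three levels of
# a nominal independent variable (ad copy A, B, or C)
copy_a = [6.1, 5.8, 6.4, 6.0, 5.9]
copy_b = [5.2, 5.0, 5.5, 4.9, 5.3]
copy_c = [6.8, 7.1, 6.9, 7.2, 6.7]

# H0: the three population means are equal
f_stat, p_value = stats.f_oneway(copy_a, copy_b, copy_c)
print(round(f_stat, 1), round(p_value, 5))
```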
Report Preparation and Presentation
Importance of the Report and Presentation

1. They are the tangible products of the research effort.


2. Management decisions are guided by the report and
the presentation.
3. The involvement of many marketing managers in the
project is limited to the written report and the oral
presentation.
4. Management's decision to undertake marketing
research in the future or to use the particular
research supplier again will be influenced by the
perceived usefulness of the report and the
presentation.
The Report Preparation and Presentation Process

1. Problem Definition, Approach, Research Design, and Fieldwork
2. Data Analysis
3. Interpretations, Conclusions, and Recommendations
4. Report Preparation
5. Oral Presentation
6. Reading of the Report by the Client
7. Research Follow-Up
Report Format

I. Title page
II. Table of contents
III. List of tables
IV. List of graphs
V. List of appendices
VI. List of exhibits
VII. Executive summary
a. Major findings
b. Conclusions
c. Recommendations
VIII. Problem definition
a. Background to the problem
b. Statement of the problem

IX. Approach to the problem

X. Research design
a. Type of research design
b. Information needs
c. Data collection from secondary sources
d. Data collection from primary sources
e. Scaling techniques
f. Questionnaire development and pretesting
g. Sampling techniques
h. Fieldwork
XI. Data analysis
a. Methodology
b. Plan of data analysis

XII. Results

XIII. Limitations

XIV. Conclusions and recommendations

XV. Exhibits
a. Questionnaires and forms
b. Statistical output
c. Lists
Report Writing

• Easy to follow
• Presentable and professional appearance
• Objective
• Reinforce text with tables and graphs
• Terse
Guidelines for Tables
• Title and number
• Arrangement of data items
• Basis of measurement
• Rulings and spaces
• Headings, stubs, and footnotes
• Sources of the data
Graphical Representation of Data

• Round or Pie charts

• Line charts

• Pictographs

• Histograms and Bar charts

• Scatter plots
Oral Presentation

• The key to an effective presentation is preparation.


• A written script or detailed outline should be prepared
following the format of the written report.
• The presentation must be geared to the audience.
• The presentation should be rehearsed several times
before it is made to the management.
• Visual aids, such as tables and graphs, should be
displayed with a variety of media.
• It is important to maintain eye contact and interact with
the audience during the presentation.
• The presentation should terminate with a strong closing.
Reading the Research Report

• Addresses the problem
• Clearly describes the research design
• Execution of the research procedures
• Numbers and statistics
• Interpretation and conclusions
• Generalizability
• Disclosure
Research Follow-up

• Assisting the Client – The researcher should


answer questions that may arise and help the
client to implement the findings.

• Evaluation of the Research Project – Every


marketing research project provides an
opportunity for learning and the researcher
should critically evaluate the entire project to
obtain new insights and knowledge.
The Nature of Ethical Issues in Marketing Research

1. Ethical Issues involving Protection of the Public
2. Ethical Issues involving Protection of Respondents
3. Ethical Issues involving Protection of the Client
4. Ethical Issues involving Protection of the Researcher/Research Firm
Ethical Issues involving Protection of the Public

• Incomplete Reporting

• Misleading Reporting

• Nonobjective Research
Ethical Issues involving Protection of Respondents

• Use of ‘Marketing Research’ Guise to Sell Products

• Invasion of the privacy of the respondent


Ethical Issues involving the Protection of the
Client

• Abuse of position arising from specialized knowledge

• Unnecessary research

• An unqualified researcher

• Disclosure of identity

• Treating data as non-confidential and/or non-proprietary

• Misleading presentation of data


Ethical Issues involving the Protection of the
Researcher/ Research Firm

• Improper solicitation of proposals

• Disclosure of proprietary information on techniques

• Misrepresentation of findings
Ethical Issues in Different Stages of Research
Ethical Issues in Marketing Research

I Problem Definition
- Using surveys as a guise for selling or fundraising
- Personal agendas of the researcher or client
- Conducting unnecessary research

II Developing an Approach
- Using findings and models developed for specific
clients or projects for other projects
- Soliciting proposals to gain research expertise
without pay
- Inaccurate reporting
III Research Design
- Formulating a research design more suited to the researcher's rather than the client's needs
- Using secondary data that are not applicable or have been gathered through questionable means
- Disguising the purpose of the research
- Soliciting unfair concessions from the researcher
- Not maintaining anonymity of respondents
- Disrespecting privacy of respondents
- Misleading respondents
- Disguising observation of respondents
- Embarrassing or putting stress on respondents
- Using measurement scales of questionable reliability and validity
- Designing overly long/sensitive questionnaires
- Using inappropriate sampling procedures and sample size
IV Field Work
- Increasing discomfort level of respondents
- Following unacceptable field work procedures

V Data Preparation and Analysis


- Identifying and discarding unsatisfactory respondents
- Using statistical techniques when the underlying assumptions
are violated
- Interpreting the results and making incorrect conclusions and
recommendations

VI Report Preparation and Presentation


- Incomplete reporting
- Biased reporting
- Inaccurate reporting
