CHAPTER 4

UNDERSTANDING DATA AND WAYS TO SYSTEMATICALLY COLLECT DATA
Research Design

 refers to the overall plan and scheme for conducting the study.

 Descriptive Design
 Experimental Design
 Historical Design

These designs are discussed in a separate section of this chapter.


Descriptive Research Designs

 is used to describe the characteristics of a population or phenomenon
being studied, or the status of identified variables such as events, people, or
subjects. It usually makes some type of comparison, contrast, or
correlation.

Examples of descriptive research designs are the following:


1. Descriptive Normative Survey

 describes trends in a large population of individuals. It combines two
research methods: gathering information to describe the object of the
study (descriptive method) and critiquing the object to identify ways to
improve it (normative method).
 A survey is a good procedure to use.
 Survey designs are procedures in quantitative research in which you
administer a survey questionnaire to a small group of people (called the
sample) to identify trends in attitudes, opinions, behavior, or characteristics
of a large group of people (called the population).
2. Correlational Research Studies

 Correlational studies are used to estimate the extent to which different
variables are related to one another in the population of interest.
 A correlational study may be designed, for example, to determine the extent
of the relationship between managerial effectiveness and the variables age,
educational attainment, and mental ability.
 Correlation implies prediction, but a significance test (p-value) is needed to
determine whether the relationship is statistically meaningful.
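As an illustration, the strength of such a relationship is commonly quantified with Pearson's correlation coefficient r. The sketch below computes r from scratch; the function name and the age/effectiveness figures are hypothetical, invented only for the example.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical data: six managers' ages and effectiveness ratings.
age = [34, 41, 29, 50, 45, 38]
effectiveness = [62, 70, 55, 80, 78, 66]
r = pearson_r(age, effectiveness)
print(round(r, 3))
```

A value of r near +1 or -1 indicates a strong linear relationship; a significance test on r would still be needed before drawing conclusions about the population.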
3. Descriptive Evaluative Studies

 includes interviews and mailed questionnaires. Often it involves a group
that is preselected, without any base group against which to compare the results.
 The purpose of the descriptive evaluative study is to judge the “goodness
of a criterion measure”.
Example: To establish changes in IQ for children 9-10 years old, simultaneously
test children aged 9-10, 11-12, 13-14, 15-16, and 17-18 years.
4. Assessment/Evaluation Studies

 Attempts to determine the effectiveness or efficiency of certain practices
or policies when applied to a group of respondents.
 Assessment studies imply measurement of certain key indicators without
attaching any judgement.
 Assessment and evaluation always go together, for one cannot make a
judgement without any basis.
Example: A study on the relative effectiveness of the K to 12 program.
5. Descriptive Comparative Studies

 establishes significant differences between two or more groups of subjects on
the basis of a criterion measure.
Example: Comparing the managerial effectiveness of three groups of
managers A, B, and C.
This type of research usually involves group comparison. The groups in the
study make up the values of the independent variable; for example, gender
(male vs. female), preschool attendance vs. no preschool attendance, or
children with a working mother vs. children without a working mother.
General Consideration in Descriptive
Research
• This is partly justified by the fact that the types of information generated by
descriptive research are valuable baseline data for policy formulation and
decision making.
There are certain limitations of the design that a researcher must be aware of:
a) The lack of control variables in descriptive designs makes them less reliable
in terms of actual hypothesis testing. Statistical tests may yield different results
when applied to different samples of the same population.
b) Unless the design is a normative survey where the entire population is
considered, conclusions drawn from descriptive designs are at best tentative.
Experimental Research Design

 Also known as longitudinal or repeated-measures studies, for obvious reasons.
 They are also referred to as interventions, because you do more than just
observe the subjects.
 Uses the scientific method to establish the cause-effect relationship among
a group of variables that make up a study.
1. Pre-test/Post-test Control Group
Design

 This design requires two groups of equivalent standing in terms of a criterion
measure, e.g., achievement or mental ability.
 The first group is designated as the control group while the second group is
the experimental group. Both groups are given the same pretest.
 The control group is not subjected to the treatment while the experimental
group is given the treatment factor. Afterward, both groups are given the same
posttest.
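The logic of this design can be sketched numerically: each group's mean gain (posttest minus pretest) is computed, and the difference between the two mean gains estimates the treatment effect. The scores below are hypothetical, and a full analysis would also apply a significance test (e.g., a t-test on the gain scores); this is only a minimal sketch.

```python
# Hypothetical pretest/posttest scores for a control and an experimental group.
control_pre  = [52, 48, 55, 60, 50]
control_post = [54, 49, 57, 61, 52]
exper_pre    = [51, 47, 56, 59, 49]
exper_post   = [63, 58, 66, 70, 60]

def mean_gain(pre, post):
    """Average improvement from pretest to posttest."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

control_gain = mean_gain(control_pre, control_post)
exper_gain = mean_gain(exper_pre, exper_post)
# The treatment effect estimate is the difference between the two mean gains.
effect = exper_gain - control_gain
print(control_gain, exper_gain, effect)
```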
Single Group Pre-test Post-test Design

 This group is first given a pretest followed by the usual treatment and then a
posttest is administered.
 A new pretest is then administered to the group followed by the
experimental treatment factor and a final posttest.
Solomon Four Group Design

 This design makes use of four equivalent groups.
 The first two groups follow the pretest-posttest control group design.
 The third group is given no pretest, with treatment and a posttest.
 The last group is given no pretest, no treatment, but with a posttest.
In this design the subjects are randomly assigned to two study groups
and two control groups.
Factors Affecting the Experimental Plan

1. History – specific events which occur between the first and second
measurement, in addition to the experimental variable, may affect the result
of the experiment.
Example: During the 2008 economic recession, budget crises forced many
schools to cut back resources. A treatment implemented around that period
may have been affected by a lack of supporting infrastructure.
2. Maturation – the process of maturing, either biological or psychological, that
takes place in the individuals (subjects) during the experiment, regardless of
experimental events, can affect experimental outcomes.
Example: When subjects are tired after completing the training session and
their responses on the post-test are affected.
3. Testing – subjects may become more aware of the contents of the posttest
after having been given the pretest. The pretest becomes a form of practice for the posttest.
4. Mortality – subjects may drop out of the experimental plan either voluntarily
or involuntarily.
5. Interaction Effects – the interaction of the experimental variable and
extraneous factors such as the setting, time, and conditions of the experimental
set-up.
6. Measuring Instruments – changes in the calibration of instruments, or in the
observers or scorers, may cause changes in the measurements.
7. Statistical Regression – also known as regression to the mean. This threat is
caused by the selection of subjects on the basis of extreme scores or
characteristics.
8. Differential Selection – this threat to internal validity is introduced when the
researcher is not able to randomly assign subjects to groups and must
make use of previously formed groups.
9. John Henry Effect – an experimental bias which pertains to the
tendency of the subjects in the control group to perceive themselves as being at a
disadvantage, thus working harder to outperform the experimental group.
Historical Research Design

 The purpose of historical research design is to collect, verify, and synthesize
evidence from the past to establish facts that defend or refute your
hypothesis.
 It uses secondary sources and a variety of primary documentary evidence,
such as logs, diaries, official records, reports, and archives, and non-textual
information like maps, pictures, and audio and visual records.
 It involves the collection of historical data such as artifacts, relics, documents,
and oral traditions.
 Classical historical research methodology relies upon textual records,
archival research, and narrative as a form of historical writing.
Major Process of Historical Research

1. Data Collection – the historian collects data from the past through relics,
fossils, or documents, or through interviews. Old newspapers, clippings,
memoirs, diaries, and the like are rich sources of historical data.
2. Analysis of Data – the historian relates the data collected to the existing
state of knowledge about past events, using simple to complex
statistical tools for analysis.
3. Report of Findings – the historian reports findings by carefully explaining
discrepancies noted and the probable causes of those discrepancies.
Sampling Plans, Designs and
Techniques
Sampling – is the process of getting information from a proper subset of the
population. The fundamental purpose of sampling plans is to describe the
population characteristics through the values obtained from a sample as
accurately as possible.
Sampling Plan – is a detailed outline of which measurements will be taken, at
what times, on which material, in what manner, and by whom, in a way that
supports the purpose of the analysis.
Steps involved in developing Sampling Plan:
1. Identify the parameters to be measured, the range of possible values, and
the required resolution.
2. Design a sampling scheme that details how and when samples will be
taken.
3. Select sample sizes.
4. Design data storage formats.
5. Assign roles and responsibilities.
Sampling Plan for Experimental
Research

In the sampling plan for experimental research, we shall discuss the minimum
sample size required for the following designs:

a) Two-Factor Designs
b) Three-Factor Designs
c) Multifactor Designs
Sampling Techniques
Probability Sampling – refers to a sampling technique in which
samples are obtained using a mechanism that involves randomization.

In order to have a random selection method, you must set up some
process or procedure that assures that the different units in your
population have equal probabilities of being chosen.

There are 5 probability sampling techniques, which are the following:
Probability Sampling

1. Simple Random Sampling – is the basic sampling technique where we
select a group of subjects (a sample) for study from a larger group (the
population). Each individual is chosen entirely by chance, and each
member of the population has an equal chance of being included in the
sample.
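A minimal sketch of simple random sampling, using a hypothetical sampling frame of 500 numbered population members:

```python
import random

# Hypothetical sampling frame: 500 numbered members of a population.
population = list(range(1, 501))

# random.sample draws without replacement; every member has an equal
# chance of being included, the defining property of simple random sampling.
sample = random.sample(population, k=30)
print(len(sample))
```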
2. Systematic Random Sampling – is carried out by drawing units at regular
intervals from a list. The researcher first randomly picks the first item or
subject from the population; then, the researcher selects every nth subject
from the list.
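The interval-based selection described above can be sketched as follows; the frame of 100 members and sample size of 10 are hypothetical:

```python
import random

# Hypothetical frame of 100 members; for a sample of 10,
# the sampling interval is k = 100 // 10 = 10.
frame = list(range(1, 101))
n = 10
k = len(frame) // n

# Randomly choose a starting point within the first interval,
# then take every k-th unit after it.
start = random.randrange(k)
sample = frame[start::k]
print(sample)
```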
3. Stratified Random Sampling – is obtained by taking samples from each
stratum or sub-group of the population; we may expect the measurement
of interest to vary among the different sub-populations.
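A minimal sketch of stratified sampling with proportional allocation; the strata names and sizes are hypothetical:

```python
import random

# Hypothetical strata (e.g., year levels); sample 10% from each stratum.
strata = {
    "freshmen":   list(range(100)),        # 100 members
    "sophomores": list(range(100, 160)),   # 60 members
    "juniors":    list(range(160, 200)),   # 40 members
}
fraction = 0.10

sample = []
for name, members in strata.items():
    # Draw a simple random sample within each stratum,
    # sized in proportion to the stratum.
    k = max(1, round(len(members) * fraction))
    sample.extend(random.sample(members, k))
print(len(sample))
```

Proportional allocation keeps each sub-group represented in the sample in the same ratio as in the population.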
4. Cluster Sampling – is a technique in which the unit of sampling is not the
individual but the naturally occurring group of individuals.
It is defined as a sampling method where multiple clusters of people are
created from a population such that the clusters are indicative of
homogeneous characteristics.
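In one-stage cluster sampling, whole clusters are drawn at random and every individual inside a chosen cluster is included. The classroom sections below are hypothetical:

```python
import random

# Hypothetical clusters: intact classroom sections of a school.
clusters = {
    "section_a": ["a1", "a2", "a3"],
    "section_b": ["b1", "b2"],
    "section_c": ["c1", "c2", "c3", "c4"],
    "section_d": ["d1", "d2", "d3"],
}

# Randomly select whole clusters, then include every
# individual in the chosen clusters.
chosen = random.sample(list(clusters), k=2)
sample = [person for section in chosen for person in clusters[section]]
print(chosen, sample)
```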
5. Multi-Stage Sampling – refers to the procedure in cluster sampling which
moves through stages from more inclusive to less inclusive units. It can be
a complex form because it is a type of sampling that divides the population
into groups (clusters).
Non Probability Sampling

 is a sampling technique where the samples are gathered in a process that
does not give all the individuals in the population equal chances of being
selected.
Major Forms of Non Probability
Sampling:
1. Accidental Sampling – a type of non-probability sampling in which the
population selected is easily accessible to the researcher; it is also called
convenience sampling.
2. Purposive Sampling / Judgement Sampling – is used when practical
considerations prevent the use of probability sampling.
3. Quota Sampling – is a technique with a provision to guarantee the inclusion
in the sample of diverse elements of the population. There are two types of
quota sampling: proportional and non-proportional.
Proportional Quota Sampling – represents the major characteristics of the
population by sampling a proportional amount of each.
Non-proportional Quota Sampling – is a bit less restrictive; you specify only the
minimum number of sampled units for each category.
Instrumentation
 Is the generic term that researchers use for a measurement device such as a
survey, test, questionnaire, and many others.
 The instrument is the device, and instrumentation is the course of action, that
is, the process of developing, testing, and using the device.

Researcher-completed Instruments     Subject-completed Instruments
Rating scales                        Questionnaires
Interview schedules/guides           Self-checklists
Tally sheets                         Attitude scales
Flowcharts                           Personality inventories
Performance checklists               Achievement/aptitude tests
Time-and-motion logs                 Projective devices
Observation forms                    Sociometric devices
Validity

 refers to the extent to which the instrument measures what it intends to
measure and performs as it is designed to perform.
These are the three major types of validity:
Content Validity – the extent to which a research instrument accurately
measures all aspects of a construct; it measures what it is supposed to
measure.
Construct Validity – the extent to which a research instrument (or tool)
measures the intended construct.
• Homogeneity – means that the instrument measures one construct.
• Convergence – when the instrument measures concepts similar to those
measured by other instruments.
• Theory Evidence – when behavior is similar to the theoretical propositions of
the construct measured by the instrument.
Criterion Validity – the extent to which a research instrument is related to other
instruments that measure the same variables.
Criterion validity is measured in three ways:
• Convergent Validity – shows that an instrument is highly correlated with
instruments that measure the same variables.
• Divergent Validity – shows that an instrument is poorly correlated with
instruments that measure different variables.
• Predictive Validity – means that the instrument should have a high
correlation with future criteria.
Reliability

 refers to whether or not you get the same answer when using an instrument to
measure something more than once. In simple terms, research reliability is
the degree to which research methods and procedures produce stable and
consistent results.
The three attributes of reliability:
1. Internal Consistency or Homogeneity
2. Stability or Test-Retest Correlation
3. Equivalence
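One widely used index of internal consistency is Cronbach's alpha, which compares the variance of the total scale score against the variances of the individual items. The sketch below uses hypothetical scores from a 3-item scale answered by 5 respondents:

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per item, each list holding
    one score per respondent (all lists the same length)."""
    k = len(items)
    # Total scale score for each respondent.
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents.
items = [
    [4, 5, 3, 4, 2],   # item 1
    [4, 4, 3, 5, 2],   # item 2
    [5, 5, 2, 4, 1],   # item 3
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

Values of alpha closer to 1 indicate that the items measure the same underlying construct.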
Quantitative Data Collection Method

 Quantitative data collection methods are those that result in numerical
values. Common examples of quantitative data collection strategies
include experiments and clinical trials, and surveys, interviews, and
questionnaires that collect numerical information or count data by using
closed-ended questions.

Sources of Data:
• Primary Sources
• Secondary Sources
Data Collection Methods

 There are many ways to collect data, depending on the research
methodology of the study.
Different techniques in gathering data:
1. Interviews – an interview is a conversation where questions are asked and
answers are given. In common parlance, the word interview refers to a
one-on-one conversation between an interviewer and an interviewee.
The following are types of interviews:
• Structured interview
• Face-to-face interview
• Telephone interview
• Computer-Assisted Personal Interviewing (CAPI)
Questionnaires

 The purpose of a questionnaire is to help extract data from respondents.
 It serves as a standard guide for the interviewers, who need to ask the
questions in exactly the same way.
Types of questionnaire:
• Paper-and-pencil questionnaires
• Web-based questionnaires
• Self-administered questionnaires
Questionnaires often make use of checklists and rating scales.
Observations

 Is a way of gathering data by watching behavior or events, or by noting
physical characteristics, in their natural setting.
 If respondents are unwilling or unable to provide data through
questionnaires or interviews, observation is a method that requires little from
the individuals for whom you need the data.
 Observation is overt when everyone knows they are being observed, or
covert when no one knows they are being observed.
Tests

 provide a way to assess subjects’ knowledge and capacity to apply this
knowledge to new situations.
 provide information that is measured against a variety of standards.
Norm-referenced test – provides information on how the target performs
against a reference group or population.
Proficiency test – provides an assessment against a level of skill attainment.
Secondary Data

 Secondary Data – is a type of quantitative data that has already been
collected by someone else for a purpose different from yours.

There are two types of data sources that most people tend to use:

• Paper-based Sources – those from books, journals, periodicals, research
reports, encyclopedias, etc.
• Electronic Sources – those from CD-ROMs, online databases, the internet,
broadcasts, etc.
Quantitative Analysis

 Quantitative Data Analysis – is a systematic approach to investigations
during which numerical data are collected, or the researcher transforms
collected or observed data into numerical data.
 It often describes a situation or event, answering the research questions or
objectives.
Operations Research Tools

1. Linear Programming (LP) – used when a problem calls for a maximization or
minimization of a linear function subject to linear constraints.
2. Non-Linear Programming (NLP) – used when a problem calls for a
maximization or minimization of non-linear functions.
3. Game Theory – used when a problem calls for optimal strategies between
two or more competitors.
4. Inventory Control – used when the problem requires you to determine
optimum stock levels and reordering points.
5. Simulation and Monte Carlo Methods – used when a problem is too complex
for an analytical solution and is studied through repeated random sampling.
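To illustrate the last tool, a classic Monte Carlo exercise estimates a quantity by repeated random sampling. The sketch below estimates pi by sampling random points in the unit square and counting how many fall inside the quarter-circle; the trial count and seed are arbitrary choices for the example:

```python
import random

# Monte Carlo sketch: the fraction of random points in the unit square
# that land inside the unit quarter-circle approximates pi/4.
random.seed(7)  # fixed seed so the run is repeatable
trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4 * hits / trials
print(round(pi_estimate, 2))
```

More trials shrink the sampling error, which is the same trade-off simulation studies face in operations research.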
The Writing of Methodology

• Assessing the Research Methodology

1. Participants – describe the participants in your research study, including
who they are, how many there are, and how they were selected.
2. Materials – describe the materials, measures, equipment, or stimuli used in
your research study. These include testing instruments, technical
equipment, books, images, or other materials used in the course of your
study.
3. Design – describe the research design used in your research study. Specify
the variables as well as the levels and measurements of these variables.
Explain whether your research study uses a within-groups or between-groups
design.
4. Procedures – the details of the research procedures used in your research
study should be properly explained. Explain what your
participants/respondents did, how you collected the data, and the order in
which steps occurred.