
1) Explain the four types of experimental designs.

1- Pre-experimental design


There are three designs under this: the one-shot case study, where an
observation is taken after the application of the treatment; the
one-group pre-test post-test design, where one observation is taken
prior to the application of the treatment and another after it; and the
static group comparison, where there are two groups, an experimental
group and a control group. The experimental group is subjected to the
treatment and a post-test measurement is taken. In the control group
the measurement is taken at the same time as it is taken for the
experimental group. These designs do not make use of any randomization
procedure to control the extraneous variables. Therefore, the internal
validity of such designs is questionable.
2- Quasi-experimental design
In these designs the researcher can control when measurements are
taken and on whom they are taken. However, this design lacks complete
control over the scheduling of treatments and also lacks the ability to
randomize the test units' exposure to treatments. As experimental
control is lacking, the possibility of obtaining confounded results is
very high. Therefore, researchers should be aware of which variables
are not controlled, and the effects of such variables should be
incorporated into the findings.
3- True Experimental Designs
In these designs, researchers can randomly assign test units
and treatments to the experimental groups. Here, the researcher
is able to eliminate the effect of extraneous variables from both
the experimental and the control group. The randomization procedure
allows the researcher to use statistical techniques for analysing the
experimental results.
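As a simple illustration of such random assignment, here is a minimal Python sketch (the respondent IDs are hypothetical) that shuffles the test units and deals them into a treatment and a control group:

import random

def randomize_units(unit_ids, n_groups=2, seed=42):
    """Randomly assign test units to groups by shuffling, so that every
    unit has the same chance of landing in any group and extraneous
    variables can be assumed to balance out across groups."""
    units = list(unit_ids)
    random.Random(seed).shuffle(units)
    groups = {g: [] for g in range(n_groups)}
    for i, unit in enumerate(units):
        groups[i % n_groups].append(unit)   # deal units out round-robin
    return groups

# Example: assign 10 hypothetical respondents to treatment vs control.
assignment = randomize_units(range(1, 11))
print("Treatment group:", assignment[0])
print("Control group:  ", assignment[1])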

4- Statistical Designs
These designs allow the statistical control and analysis of
external variables. The main advantages of statistical designs
are the following (a brief sketch follows the list):
- The effect of more than one level of an independent variable
on the dependent variable can be measured.
- The effects of more than one independent variable can be
examined.
- The effects of specific extraneous variables can be
controlled.
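As an illustration, the sketch below shows a simple factorial (statistical) design in Python; the factors, levels and sales figures are entirely made up, and the analysis is limited to comparing mean outcomes across factor levels:

from statistics import mean

# Hypothetical 2x2 factorial data: sales (dependent variable) observed under
# two price levels and two advertising levels (independent variables).
observations = [
    {"price": "low",  "advertising": "low",  "sales": 20},
    {"price": "low",  "advertising": "high", "sales": 28},
    {"price": "high", "advertising": "low",  "sales": 14},
    {"price": "high", "advertising": "high", "sales": 25},
    {"price": "low",  "advertising": "low",  "sales": 22},
    {"price": "low",  "advertising": "high", "sales": 30},
    {"price": "high", "advertising": "low",  "sales": 12},
    {"price": "high", "advertising": "high", "sales": 27},
]

def level_means(factor):
    """Average the dependent variable at each level of one factor."""
    levels = {}
    for obs in observations:
        levels.setdefault(obs[factor], []).append(obs["sales"])
    return {level: mean(values) for level, values in levels.items()}

# Effect of each independent variable on the dependent variable.
print("Mean sales by price level:      ", level_means("price"))
print("Mean sales by advertising level:", level_means("advertising"))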

2) Briefly explain the concepts of reliability, validity and sensitivity.
a- Reliability
Reliability does not imply validity. That is, a reliable measure
that is measuring something consistently is not necessarily
measuring what you want to be measuring. For example, while
there are many reliable tests of specific abilities, not all of them
would be valid for predicting, say, job performance.
While reliability does not imply validity, a lack of reliability does
place a limit on the overall validity of a test. A test that is not
perfectly reliable cannot be perfectly valid, either as a means of
measuring attributes of a person or as a means of predicting
scores on a criterion. While a reliable test may provide useful
valid information, a test that is not reliable cannot possibly be
valid.
In practice, testing measures are never perfectly consistent.
Theories of test reliability have been developed to estimate the
effects of inconsistency on the accuracy of measurement. The
basic starting point for almost all theories of test reliability is
the idea that test scores reflect the influence of two sorts of
factors:

1. Factors that contribute to consistency: stable characteristics of
the individual or the attribute that one is trying to measure.
2. Factors that contribute to inconsistency: features of the
individual or the situation that can affect test scores but have
nothing to do with the attribute being measured.
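One common way of estimating reliability in practice is the test-retest method: administer the same test twice to the same respondents and correlate the two sets of scores. A minimal sketch with made-up scores (statistics.correlation requires Python 3.10 or later):

from statistics import correlation  # available from Python 3.10

# Hypothetical scores of the same eight respondents on two administrations
# of the same test. Stable characteristics push the pairs of scores together;
# situational factors (mood, fatigue, noise) push them apart.
test_scores   = [12, 15, 9, 20, 17, 11, 14, 18]
retest_scores = [13, 14, 10, 19, 18, 10, 15, 17]

# Test-retest reliability estimate: the correlation between administrations.
reliability = correlation(test_scores, retest_scores)
print(f"Test-retest reliability estimate: {reliability:.2f}")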
b- Validity
Validity is the extent to which a concept,[1] conclusion or
measurement is well-founded and corresponds accurately to
the real world. The word "valid" is derived from the Latin
validus, meaning strong. The validity of a measurement tool
(for example, a test in education) is considered to be the
degree to which the tool measures what it claims to measure;
in this case, validity is equivalent to accuracy.
In psychometrics, validity has a particular application known as
test validity: "the degree to which evidence and theory support
the interpretations of test scores" ("as entailed by proposed
uses of tests").[2]
It is generally accepted that the concept of scientific validity
addresses the nature of reality and as such is an
epistemological and philosophical issue as well as a question of
measurement. The use of the term in logic is narrower, relating
to the truth of inferences made from premises.
Validity is important because it helps determine which types of tests
to use and helps to ensure that researchers are using methods that are
not only ethical and cost-effective, but that also truly measure the
idea or construct in question.
c- Sensitivity
Sensitivity analysis is the study of how the uncertainty in the
output of a mathematical model or system (numerical or
otherwise) can be apportioned to different sources of
uncertainty in its inputs.[1][2] A related practice is uncertainty
analysis, which has a greater focus on uncertainty
quantification and propagation of uncertainty. Ideally,
uncertainty and sensitivity analysis should be run in tandem.
The process of recalculating outcomes under alternative assumptions to
determine the impact of a variable under sensitivity analysis can be
useful for a range of purposes,[3] including:
- Testing the robustness of the results of a model or system in the
presence of uncertainty.
- Increased understanding of the relationships between input and
output variables in a system or model.
- Uncertainty reduction: identifying model inputs that cause
significant uncertainty in the output and should therefore be the
focus of attention if the robustness is to be increased (perhaps by
further research).
- Searching for errors in the model (by encountering unexpected
relationships between inputs and outputs).
- Model simplification: fixing model inputs that have no effect on the
output, or identifying and removing redundant parts of the model
structure.
- Enhancing communication from modelers to decision makers (e.g. by
making recommendations more credible, understandable, compelling or
persuasive).
- Finding regions in the space of input factors for which the model
output is either maximum or minimum or meets some optimum criterion
(see optimization and Monte Carlo filtering).
When calibrating models with a large number of parameters, a
preliminary sensitivity test can ease the calibration stage by
focusing on the sensitive parameters. Not knowing the sensitivity of
parameters can result in time being uselessly spent on non-sensitive
ones.
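As a rough illustration, the sketch below runs a one-at-a-time sensitivity check in Python on a purely hypothetical model: each input is perturbed by +/-10% around a baseline and the resulting swing in the output is recorded, so the most sensitive parameters stand out:

def model(inputs):
    """Hypothetical model: output depends strongly on 'a', weakly on 'c'."""
    return 5.0 * inputs["a"] + 2.0 * inputs["b"] ** 2 + 0.1 * inputs["c"]

baseline = {"a": 1.0, "b": 2.0, "c": 3.0}
base_output = model(baseline)

for name in baseline:
    changes = []
    for factor in (0.9, 1.1):              # perturb one input at a time
        perturbed = dict(baseline)
        perturbed[name] = baseline[name] * factor
        changes.append(model(perturbed) - base_output)
    swing = max(changes) - min(changes)    # total swing in the output
    print(f"Input {name}: output swings by {swing:.2f} over a +/-10% change")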

3) What are the advantages and disadvantages of the questionnaire method? Illustrate with suitable examples.

Advantages
- Probably the greatest benefit of the method is its adaptability.
There is, practically speaking, no domain or branch for which a
questionnaire cannot be designed. It can be shaped in a manner that
can be easily understood by the population under study. The language,
the content and the manner of questioning can be modified suitably.
The instrument is particularly suitable for studies that are trying to
establish the reasons for certain occurrences or behaviour.
- The second advantage is that it assures anonymity if it is
self-administered by the respondent, as there is no pressure or
embarrassment in revealing sensitive data. Many questionnaires do not
even require the person to fill in his/her name. Administering the
questionnaire is much faster and less expensive compared to other
primary, and even a few secondary, sources. There is considerable ease
of quantitative coding and analysis of the obtained information, as
most response categories are closed-ended and based on the measurement
levels discussed earlier.
Disadvantages
- The major disadvantage is that the inexpensive
standardized instrument has limited applicability, that is, it
can be used only with those who can read and write.
- The questionnaire is an impersonal method, and for a sensitive issue
it may sometimes not reveal the actual reasons or answers to the
questions asked. The return ratio, i.e. the number of people who
return duly filled-in questionnaires, is sometimes only a small
fraction of the number of forms distributed.
- Skewed sample response could be another problem. This can occur in
two cases: one, if the investigator distributes the questionnaire to
his friends and acquaintances, and two, because of self-selection of
the subjects. This means that those who fill in and return the
questionnaire might not be representative of the population at large.
In addition, if the respondent is not clear about a question,
clarification with the researcher might not be possible.

4) What is data editing? Mention its significance.


Data editing is the process that involves detecting and correcting
errors in data. After collection, the data is subjected to processing.
Processing requires that the researcher go over all the raw data forms
and check them for errors.
Interactive editing
The term interactive editing is commonly used for modern
computer-assisted manual editing. Most interactive data editing
tools applied at National Statistical Institutes (NSIs) allow one to
check the specified edits during or after data entry, and if
necessary to correct erroneous data immediately. Several
approaches can be followed to correct erroneous data:
- Recontact the respondent
- Compare the respondent's data to his data from the previous year
- Compare the respondent's data to data from similar respondents
- Use the subject matter knowledge of the human editor
Interactive editing is a standard way to edit data. It can be used
to edit both categorical and continuous data.[3] Interactive
editing reduces the time frame needed to complete the cyclical
process of review and adjustment.[4]
Selective editing
Selective editing is an umbrella term for several methods to identify
influential errors[note 1] and outliers.[note 2] Selective editing
techniques aim to apply interactive editing to a well-chosen subset of
the records, such that the limited time and resources available for
interactive editing are allocated to those records where it has the
most effect on the quality of the final estimates of publication
figures. In selective editing, data is split into two streams:
- The critical stream
- The non-critical stream
The critical stream consists of records that are more likely to
contain influential errors. These critical records are edited in a
traditional interactive manner. The records in the non-critical
stream, which are unlikely to contain influential errors, are not
edited interactively.
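A rough sketch of how such a split might be automated in Python; the records, the scoring rule and the threshold are all hypothetical:

# Score each record by how likely it is to contain an influential error,
# then route high-scoring records to the critical (interactive) stream.
records = [
    {"id": 1, "turnover": 120, "employees": 10},
    {"id": 2, "turnover": 90_000, "employees": 3},   # implausibly large
    {"id": 3, "turnover": 300, "employees": 25},
    {"id": 4, "turnover": -50, "employees": 8},      # impossible negative value
]

def error_score(record):
    """Higher score = more likely to hold an influential error."""
    score = 0
    if record["turnover"] < 0:
        score += 1
    if record["employees"] and record["turnover"] / record["employees"] > 1_000:
        score += 1
    return score

critical     = [r for r in records if error_score(r) > 0]
non_critical = [r for r in records if error_score(r) == 0]

print("Critical stream (edit interactively):", [r["id"] for r in critical])
print("Non-critical stream:                 ", [r["id"] for r in non_critical])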
The significance of validation becomes more important in the following
cases (two of these checks are sketched after the list):
- In case the form has been translated into another language, expert
analysis is done to see whether the meaning of the questions in the
two versions is the same or not.
- The second case could be that the questionnaire survey has to be
carried out at multiple locations and has been outsourced to an
outside research agency.
- The respondent seems to have used the same response category for all
the questions; for example, there is a tendency on a five-point scale
to give 3 as the answer to all questions.
- The form that is received back is incomplete, in the sense that
either the person has not filled in answers to all the questions, or,
in the case of a multiple-page questionnaire, one or more pages are
missing.
- The forms received are not in the proportion of the sampling plan.
In such a case the researcher would either need to discard the extra
forms or get an equal number filled in from the under-represented
group (for example, private sector employees).
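Two of the cases above, straight-lining (the same response category used for every question) and incomplete forms, lend themselves to simple automated checks. A small Python sketch with made-up form data:

# Each form holds the answers to five questions on a five-point scale;
# None marks a question that was left blank.
forms = {
    "F01": [3, 3, 3, 3, 3],      # same answer throughout -> straight-lining
    "F02": [4, 2, None, 5, 1],   # missing answer -> incomplete
    "F03": [2, 4, 3, 5, 1],      # looks fine
}

for form_id, answers in forms.items():
    problems = []
    if any(a is None for a in answers):
        problems.append("incomplete form")
    elif len(set(answers)) == 1:
        problems.append("same response category used for all questions")
    print(form_id, "->", problems or ["passes basic checks"])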

5) Differentiate between descriptive and inferential analysis of data.
Descriptive Statistics
Descriptive statistics is the term given to the analysis of data
that helps describe, show or summarize data in a meaningful
way such that, for example, patterns might emerge from the
data. Descriptive statistics do not, however, allow us to make
conclusions beyond the data we have analysed or reach
conclusions regarding any hypotheses we might have made.
They are simply a way to describe our data.
Descriptive statistics are very important because if we simply
presented our raw data it would be hard to visualize what the data was
showing, especially if there was a lot of it. Descriptive statistics
therefore enable us to present the data in a more meaningful way,
which allows simpler interpretation of the data. For example, if we
had the results of 100 pieces of students' coursework, we may be
interested in the overall performance of those students. We would also
be interested in the distribution or spread of the marks. Descriptive
statistics allow us to do this. How to properly describe data through
statistics and graphs is an important topic in its own right.
Typically, there are two general types of statistic that are used to
describe data:
- Measures of central tendency: these are ways of
describing the central position of a frequency distribution
for a group of data. In this case, the frequency distribution
is simply the distribution and pattern of marks scored by
the 100 students from the lowest to the highest. We can
describe this central position using a number of statistics,
including the mode, median, and mean.
- Measures of spread: these are ways of summarizing a group of data by
describing how spread out the scores are. For example, the mean score
of our 100 students may be 65 out of 100. However, not all students
will have scored 65 marks. Rather, their scores will be spread out.
Some will be lower and others higher. Measures of spread help us to
summarize how spread out these scores are. To describe this spread, a
number of statistics are available to us, including the range,
quartiles, absolute deviation, variance and standard deviation.
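Both kinds of descriptive measures can be computed directly with Python's standard statistics module; the sketch below uses randomly generated marks for 100 hypothetical students:

import random
from statistics import mean, median, mode, pstdev, pvariance

random.seed(1)
marks = [random.randint(35, 95) for _ in range(100)]   # marks out of 100

# Measures of central tendency.
print("Mean:  ", mean(marks))
print("Median:", median(marks))
print("Mode:  ", mode(marks))

# Measures of spread (the 100 students are treated as the whole population).
print("Range:             ", max(marks) - min(marks))
print("Variance:          ", round(pvariance(marks), 2))
print("Standard deviation:", round(pstdev(marks), 2))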
Inferential Statistics
We have seen that descriptive statistics provide information
about our immediate group of data. For example, we could
calculate the mean and standard deviation of the exam marks
for the 100 students and this could provide valuable
information about this group of 100 students. Any group of data
like this, which includes all the data you are interested in, is
called a population. A population can be small or large, as long
as it includes all the data you are interested in. For example, if
you were only interested in the exam marks of 100 students,
the 100 students would represent your population. Descriptive
statistics are applied to populations, and the properties of
populations, like the mean or standard deviation, are called
parameters as they represent the whole population (i.e.,
everybody you are interested in).
Often, however, you do not have access to the whole
population you are interested in investigating, but only a
limited number of data instead. For example, you might be
interested in the exam marks of all students in the UK. It is not
feasible to measure all exam marks of all students in the whole
of the UK so you have to measure a smaller sample of students
(e.g., 100 students), which are used to represent the larger
population of all UK students. Properties of samples, such as
the mean or standard deviation, are not called parameters, but
statistics. Inferential statistics are techniques that allow us to
use these samples to make generalizations about the
populations from which the samples were drawn. It is, therefore,
important that the sample accurately represents the population. The
process of achieving this is called sampling (sampling strategies are
discussed in detail elsewhere). Inferential statistics arise out of
the fact that sampling naturally incurs sampling error and thus a
sample is not expected to perfectly represent the population.
The methods of inferential statistics are (1) the estimation of
parameter(s) and (2) testing of statistical hypotheses.
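A minimal sketch of both methods on a made-up sample of 100 exam marks: (1) a point estimate of the population mean with an approximate 95% confidence interval, and (2) a simple test statistic against the (hypothetical) null hypothesis that the population mean is 60:

from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of 100 marks drawn from a much larger population.
sample = [62, 71, 58, 64, 69, 73, 55, 66, 60, 68] * 10   # n = 100 (toy data)

n = len(sample)
sample_mean = mean(sample)                    # point estimate of the parameter
standard_error = stdev(sample) / sqrt(n)

# (1) Estimation: approximate 95% confidence interval for the population mean,
# using the normal critical value 1.96 since n is reasonably large.
low  = sample_mean - 1.96 * standard_error
high = sample_mean + 1.96 * standard_error
print(f"Estimated population mean: {sample_mean:.1f} (95% CI {low:.1f} to {high:.1f})")

# (2) Hypothesis testing: is the population mean different from 60?
z = (sample_mean - 60) / standard_error
print(f"Test statistic against H0: mean = 60 -> z = {z:.2f}")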

6) Explain the structure of the research report. What are the guidelines for effective report writing?
Reporting requires a structured format and, by and large, the process
is standardized. As stated above, the major difference amongst the
types of reports is that all the elements that make up a research
report would be present only in a detailed technical report. Usage of
theoretical and technical jargon would be higher in the technical
report, and visual presentation of data would be higher in the
management report.
The process of report formulation and presentation follows a broadly
standard sequence. The preliminary section includes the title page,
followed by the letter of authorization, acknowledgements, executive
summary and the table of contents. Then comes the background section,
which includes the problem statement, introduction, study background,
scope and objectives of the study and the review of literature
(depending on the purpose). This is followed by the methodology
section, which, as stated earlier, is specific to the technical
report. This is followed by the findings section and then come the
conclusions. The technical report would have a detailed bibliography
at the end.

In the management report, the sequencing of the report might be
reversed to suit the needs of the decision-maker, as here the reader
needs to review and absorb the findings. Thus, the last section on
interpretation of findings would be presented immediately after the
study objectives, and a short reporting on methodology could be
presented in the appendix.
Guidelines for effective report writing
Clear report mandate: While writing the research problem statement and
study background, the writer needs to be absolutely clear in terms of
why and how the problem was formulated.
Clearly designed methodology: Any research study has its unique
orientation and scope and thus has a specific and customized research
design, sampling and data collection plan. In studies that are not
completely transparent about the set of procedures followed, one
cannot be absolutely confident of the findings and resulting
conclusions.
Clear representation of findings: Complete honesty and transparency in
stating the treatment of data and the editing of missing or contrary
data is extremely critical.
Representativeness of study findings: A good research report is also
explicit in terms of the extent and scope of the results obtained, and
in terms of the applicability of the findings.

Thus, some guidelines should be kept in mind while writing the report.
Command over the medium: A correct and effective
language of communication is critical in putting ideas and
objectives in the vernacular of the reader/decision-maker.

Phrasing protocol: There is a debate about whether or not one should
make use of personal pronouns while reporting. The use of personal
pronouns such as "I think…" or "in my opinion…" lends a subjectivity
and personalization of judgement. Thus, the tone of the reporting
should be neutral.
For example: "Given the nature of the forecasted growth and the
opinion of the respondents, it is likely that the…"
Whenever the writer is reproducing verbatim information from another
document, or the comment of an expert or a published source, it must
be in inverted commas or italics and the author or source should be
duly acknowledged.
For example: Sarah Churchman, Head of Diversity,
PricewaterhouseCoopers, states: "At PricewaterhouseCoopers we firmly
believe that promoting work-life balance is a business-critical issue
and not simply the right thing to do." The writer should avoid long
sentences and break up the information into clear chunks, so that the
reader can process it with ease.
Simplicity of approach: Along with grammatically and structurally
correct language, care must be taken to avoid technical jargon as far
as possible. In case it is important to use certain terminology,
definitions of these terms can be provided in a glossary of terms at
the end of the report.
Report formatting and presentation: In terms of paper quality, page
margins and font style and size, a professional standard should be
maintained. The font style must be uniform throughout the report. The
topics, subtopics, headings and subheadings must be formatted in the
same manner throughout the report. The researcher can provide relief
and variation by adequately supplementing the text with graphs and
figures.
