
Statistical Analysis of Data

By:

Usman Siddique 0300-4556959

M.Phil Applied Linguistics, Lahore, Pakistan

Introduction
Raw data can take a variety of forms, including measurements, survey responses, and observations. In its raw form, this information can be incredibly useful, but also overwhelming. Over the course of the data analysis process, the raw data are ordered in a way that makes them useful. The data for this statistical analysis and report were taken from the University of Texas through the internet. The data include facts and figures about the main areas of the university, such as faculty members and students. The data set contains fifty-one (51) variables and one thousand three hundred and fifty-eight (1,358) cases (individuals). The data are in tabular form, with fifty-one columns and 1,358 rows: columns represent categories or variables, and rows represent individual cases.

Objectives
It is impossible to do a sound analysis without knowing what you wish to achieve. Too often an analysis is started without a clear idea of where it is going. The result is usually a lot of wasted time and an inadequate analysis. Avoid this by deciding on the objectives of the analysis before starting it. The objectives of this data analysis are:

1. Understanding of the data
2. Finding associations among variables
3. Finding significant differences between groups
4. Data reduction

Questions

1. Is there any significant association between student sex and student grade point average?
2. Is there any significant difference between the faculty rank of males and females?
3. Is there any significant difference in faculty salary across the four levels of highest degree earned?
4. Is there any significant difference between student expected grade and student grade point average?
5. Is there any significant correlation between faculty rank and salary?

Analysis of Data
Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, in different business, science, and social science domains. Several analyses can be used during the initial data analysis phase:

- Univariate statistics
- Bivariate associations (correlations)
- Graphical techniques (scatter plots)

It is important to take the measurement levels of the variables into account for the analyses, as special statistical techniques are available for each level:

Nominal and ordinal variables

- Frequency counts (numbers and percentages)
- Associations: cross-tabulations, hierarchical log-linear analysis (restricted to a maximum of 8 variables), and log-linear analysis (to identify relevant/important variables and possible confounders)
- Exact tests or bootstrapping (in case subgroups are small)
- Computation of new variables

Continuous variables

- Distribution statistics (mean, standard deviation, variance, skewness, kurtosis)
- Stem-and-leaf displays
- Box plots

Statistical Analysis of Data


The statistical data analysis is the plan used to examine the research questions or hypotheses. It specifies the statistic to be used, the assumptions of that statistic, how the statistic is to be interpreted, and the sample size required for the study. Descriptive statistics are useful for describing the basic features of data, for example, the summary statistics and measures for the scale variables. In a research study with a large data set, descriptive statistics help us to manage the data and present them in a summary table. For instance, in a cricket match, descriptive statistics help us to manage a player's records and to compare one player's records with another's.

1. Measure of central tendency: In descriptive statistics, the measure of central tendency describes the average value of the sample. There are two types of averages: mathematical averages and positional averages.

2. Measure of dispersion: The data can be described further by measuring their dispersion. The range, standard deviation, and variance are usually used to measure dispersion. The range is defined as the difference between the highest and the lowest value. The standard deviation is also called the root mean square deviation. The variance is also used to measure dispersion and is simply the square of the standard deviation.
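As a small illustration of these measures, the sketch below computes them in Python with pandas on a made-up series of scores (the values are purely illustrative, not taken from the university data set):

```python
import pandas as pd

# Illustrative scores (made up, not taken from the university data set)
scores = pd.Series([2.1, 2.8, 3.0, 3.4, 3.4, 3.9], name="score")

# Measures of central tendency
print("mean:", scores.mean())             # mathematical average
print("median:", scores.median())         # positional average
print("mode:", scores.mode().tolist())    # positional average (most frequent value)

# Measures of dispersion
print("range:", scores.max() - scores.min())
print("variance:", scores.var())          # square of the standard deviation
print("std deviation:", scores.std())     # root mean square deviation
print("skewness:", scores.skew())
print("kurtosis:", scores.kurt())
```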

Statistical tools used for data analysis and report writing


Correlation
Correlation is a measure of the relation between two or more variables. The measurement scales used should be at least interval scales, but other correlation coefficients are available to handle other types of data. Correlation coefficients can range from -1.00 to +1.00. A value of -1.00 represents a perfect negative correlation, a value of +1.00 represents a perfect positive correlation, and a value of 0.00 represents a lack of correlation. The most widely used type of correlation coefficient is Pearson r, also called the linear or product-moment correlation. Simple linear (Pearson) correlation assumes that the two variables are measured on at least interval scales, and it determines the extent to which values of the two variables are "proportional" to each other. The value of the correlation coefficient does not depend on the specific measurement units used; for example, the correlation between height and weight is identical regardless of whether inches and pounds or centimeters and kilograms are used as measurement units.
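As a rough illustration, the sketch below computes Pearson r with SciPy on hypothetical height and weight values and shows that changing the measurement units leaves the coefficient unchanged:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical height (inches) and weight (pounds) measurements
height_in = np.array([61, 64, 66, 68, 70, 73])
weight_lb = np.array([120, 136, 148, 160, 172, 190])

r, p = pearsonr(height_in, weight_lb)
print(f"r = {r:.3f}, p = {p:.4f}")

# Rescaling the units leaves the correlation coefficient unchanged
r_metric, _ = pearsonr(height_in * 2.54, weight_lb * 0.4536)
print(f"r (metric units) = {r_metric:.3f}")
```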

t-Test for Independent Samples


The t-test is the most commonly used method to evaluate the differences in means between two groups. Theoretically, the t-test can be used even if the sample sizes are very small (e.g., as small as 10; some researchers claim that even smaller n's are possible), as long as the variables are normally distributed within each group and the variation of scores in the two groups is not reliably different. The equality of variances assumption can be verified with the F test, or you can use the more robust Levene's test. If these conditions are not met, then you can evaluate the differences in means between two groups using one of the nonparametric alternatives to the t-test.
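A minimal sketch of this procedure in Python with SciPy, using made-up scores for two hypothetical groups; Levene's test decides whether the ordinary t-test or Welch's unequal-variances t-test is applied:

```python
import numpy as np
from scipy.stats import levene, ttest_ind

# Made-up scores for two independent groups
group_a = np.array([12.1, 13.4, 11.8, 14.2, 12.9, 13.7, 12.5])
group_b = np.array([10.9, 11.5, 12.0, 10.4, 11.8, 11.1, 10.7])

# Levene's test checks the equality-of-variances assumption
stat, p_levene = levene(group_a, group_b)
equal_var = p_levene > 0.05   # keep the assumption only if Levene's test is not significant

# Ordinary t-test if variances look equal, Welch's t-test otherwise
t, p = ttest_ind(group_a, group_b, equal_var=equal_var)
print(f"Levene p = {p_levene:.3f}, t = {t:.3f}, p = {p:.4f}")
```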

Paired sample t-Test


Paired sample t-test helps us to take advantage of one specific type of design in which an important source of within-group variation can be easily identified and excluded from the analysis. Specifically, if two groups of observations are based on the same sample of subjects who were tested twice (e.g., before and after a treatment), then a considerable part of the within-group variation in both groups of scores can be attributed to the initial individual differences between subjects. Specifically, instead of treating each group separately, and analyzing raw scores, we can look only at the differences between the two measures (e.g., "pre-test" and "post-test") in each subject. By subtracting the first score from the second for each subject and then analyzing only those "pure (paired) differences," we will exclude the entire part of the variation in our data set that results from unequal base levels of individual subjects.
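A minimal sketch of a paired comparison in Python with SciPy, using made-up pre-test and post-test scores for the same hypothetical subjects:

```python
import numpy as np
from scipy.stats import ttest_rel

# Made-up pre-test and post-test scores for the same eight subjects
pre  = np.array([55, 62, 48, 70, 66, 58, 61, 53])
post = np.array([61, 65, 52, 74, 70, 63, 64, 58])

# The paired t-test analyses only the within-subject differences (post - pre)
t, p = ttest_rel(post, pre)
print(f"mean difference = {np.mean(post - pre):.2f}, t = {t:.3f}, p = {p:.4f}")
```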

Cross-tabulation

Cross-tabulation is a combination of two (or more) frequency tables arranged such that each cell in the resulting table represents a unique combination of specific values of cross-tabulated variables. Thus, cross-tabulation allows us to examine frequencies of observations that belong to specific categories on more than one variable. Only categorical (nominal) variables or variables with a relatively small number of different meaningful values should be cross-tabulated. Note that in the cases where we do want to include a continuous variable in a crosstabulation (e.g., income), we can first recode it into a particular number of distinct ranges (e.g., low, medium, high). For example, suppose we conduct a simple study in which males and females are asked to choose one of two different brands of soda pop (brand A and brand B).
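The sketch below builds such a crosstab in Python with pandas for the soda-pop example just described, using a handful of made-up responses:

```python
import pandas as pd

# Made-up responses for the soda-pop example described above
df = pd.DataFrame({
    "sex":   ["male", "male", "female", "female", "male", "female", "female", "male"],
    "brand": ["A",    "B",    "A",      "A",      "B",    "B",      "A",      "A"],
})

# Each cell counts one combination of sex and brand choice; margins add row/column totals
table = pd.crosstab(df["sex"], df["brand"], margins=True)
print(table)
```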

Chi-square
The chi-square test for association is a test of statistical significance widely used in bivariate tabular (cross-tabulation) association analysis. Typically, the hypothesis is whether or not two different populations differ enough in some characteristic or aspect of their behavior, based on two random samples. This test procedure is also known as the Pearson chi-square test.
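A minimal sketch of the test in Python with SciPy, applied to a small made-up contingency table for the soda-pop example (the counts are hypothetical):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows = sex, columns = brand preference
observed = np.array([[30, 20],    # males choosing brand A / brand B
                     [15, 35]])   # females choosing brand A / brand B

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")
print("expected counts:\n", expected)
```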

ANOVA
Analysis of variance (ANOVA) is used to test for significant differences between means. If we are only comparing two means, ANOVA will produce the same results as the t-test for independent samples (if we are comparing two different groups of cases or observations) or the t-test for dependent samples (if we are comparing two variables in one set of cases or observations). The purpose of analysis of variance is to test differences in means (for groups or variables) for statistical significance. This is accomplished by analyzing the variance, that is, by partitioning the total variance into the component that is due to true random error and the components that are due to differences between means. These latter variance components are then tested for statistical significance and, if significant, we reject the null hypothesis of no differences between means and accept the alternative hypothesis that the means (in the population) are different from each other.
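A minimal sketch of a one-way ANOVA in Python with SciPy on three made-up groups of scores:

```python
import numpy as np
from scipy.stats import f_oneway

# Made-up scores for three independent groups
group1 = np.array([21, 24, 19, 26, 23])
group2 = np.array([28, 31, 27, 30, 29])
group3 = np.array([22, 25, 24, 21, 23])

# One-way ANOVA partitions the total variance into between-group and within-group components
f_stat, p = f_oneway(group1, group2, group3)
print(f"F = {f_stat:.3f}, p = {p:.4f}")
```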

Reporting
There are five reports, and every report consists of six parts, as given below:

1. Problem Statement
2. Null Hypothesis
3. Statistics to be used
4. Findings
5. Conclusion
6. Histogram

Report # 1

Variables to analyze

1. Student sex
2. Grade point average

Statistical Tool
Crosstab

Problem Statement
Is there any significant association between student sex and student grade point average?

Null Hypothesis
There is no significant association between student sex and student grade point average.

Statistics to be used
Since the variables represent nominal and ordinal levels of measurement, a crosstab will be used to test for a significant association between student sex and grade point average.
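A sketch of how this crosstab and chi-square test might be reproduced outside SPSS, in Python; the file name and the column names student_sex and gpa_category are assumptions for illustration, not the actual names in the data set:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# "university.csv", "student_sex" and "gpa_category" are assumed names, not the actual file/variable names
df = pd.read_csv("university.csv")

table = pd.crosstab(df["student_sex"], df["gpa_category"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")
```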

STUDENTS SEX * STUDENTS GRADE POINT AVERAGE Crosstabulation (Count)

STUDENTS SEX   Less than 2.00   2.00-2.49   2.50-2.99   3.00-3.49   3.50-4.00   Total
MALE                 24            118         163         179         123        607
FEMALE               17            117         196         236         177        743
Total                41            235         359         415         300       1350

Chi-Square Tests

                                 Value    df   Asymp. Sig. (2-sided)
Pearson Chi-Square               8.164a    4   .086
Likelihood Ratio                 8.146     4   .086
Linear-by-Linear Association     7.347     1   .007
N of Valid Cases                 1350

a. 0 cells (.0%) have expected count less than 5. The minimum expected count is 18.43.

Findings
The crosstab and chi-square test show that the association between student sex and student grade point average is not significant at the .05 level (chi-square = 8.164, sig = .086), although the linear-by-linear association is significant (sig = .007), indicating a linear trend. At the .05 level, therefore, the null hypothesis claiming no significant association between student sex and student grade point average cannot be rejected.

Conclusion
Proportionally, more female students than male students obtained higher grades, but the overall association between sex and grade point average does not reach statistical significance at the .05 level.

Histogram

Report # 2


Variables to analyze

1. Faculty rank
2. Faculty sex

Statistical Tool

Independent sample t-test

Problem Statement
Is there any significant difference between the faculty rank of males and females?

Null Hypothesis
There is no significant difference between the faculty rank of males and females.

Statistics to be used
Since faculty rank represents an ordinal level of measurement and is compared between two groups, an independent sample t-test will be used to test for a significant difference between the faculty ranks of males and females.
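A sketch of the same comparison in Python; the file name and the column names faculty_sex and faculty_rank are assumptions for illustration:

```python
import pandas as pd
from scipy.stats import levene, ttest_ind

# "university.csv", "faculty_sex" and "faculty_rank" are assumed names for illustration only
df = pd.read_csv("university.csv")
male   = df.loc[df["faculty_sex"] == "MALE", "faculty_rank"].dropna()
female = df.loc[df["faculty_sex"] == "FEMALE", "faculty_rank"].dropna()

stat, p_levene = levene(male, female)                      # equality-of-variances check
t, p = ttest_ind(male, female, equal_var=p_levene > 0.05)  # Welch's t-test if variances differ
print(f"Levene p = {p_levene:.3f}, t = {t:.3f}, p = {p:.4f}")
```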
Group Statistics

               FACULTY SEX    N      Mean    Std. Deviation    Std. Error Mean
FACULTY RANK   MALE           849    2.64    1.047             .036
               FEMALE         579    2.11    .858              .036

Independent Samples Test
FACULTY RANK

                              Levene's Test              t-test for Equality of Means
                              F        Sig.    t        df        Sig. (2-tailed)  Mean Diff.  Std. Error Diff.  95% CI Lower  95% CI Upper
Equal variances assumed       50.610   .000    10.182   1426      .000             .535        .053              .432          .638
Equal variances not assumed                    10.568   1.379E3   .000             .535        .051              .436          .634

Findings

(a) Levene's test indicates that the groups of male and female faculty members are not homogeneous in variability (F = 50.610, sig = .000), so the lower row (equal variances not assumed) will be used to evaluate the difference.

(b) The t-test indicates that there is a significant difference between the faculty ranks of males and females (t = 10.568, sig = .000). The null hypothesis claiming no significant difference between the faculty rank of males and females is therefore rejected.

Conclusion
Male and female faculty members do not, on average, hold equal ranks.

Histogram

Report # 3

Variables to analyze

1. Faculty salary
2. Highest degree earned (four levels)

Statistical Tool

ANOVA (Analysis of Variance)

Problem Statement
Is there any significant difference in faculty salary across the four levels of highest degree earned?

Null Hypothesis
There is no significant difference in faculty salary across the four levels of highest degree earned.

Statistics to be used
Faculty salary is compared across the four levels of highest degree earned by the faculty members. Since more than two groups are being compared, ANOVA will be used.
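A sketch of the corresponding one-way ANOVA in Python; the file name and the column names highest_degree and salary are assumptions for illustration:

```python
import pandas as pd
from scipy.stats import f_oneway

# "university.csv", "highest_degree" and "salary" are assumed names for illustration only
df = pd.read_csv("university.csv")

# One salary sample per level of highest degree earned (e.g. B.A., M.A., Ph.D., unknown)
groups = [g["salary"].dropna() for _, g in df.groupby("highest_degree")]
f_stat, p = f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p:.4f}")
```

Pairwise post hoc comparisons such as those in the LSD table below could then be obtained with a multiple-comparison routine (for example statsmodels' pairwise_tukeyhsd, which uses Tukey's HSD rather than LSD).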

ANOVA
FACULTY SALARY

                  Sum of Squares     df     Mean Square     F         Sig.
Between Groups     17519.789           3     5839.930       130.734   .000
Within Groups      63610.329        1424       44.670
Total              81130.118        1427

Multiple Comparisons
FACULTY SALARY
LSD

(I) HIGHEST       (J) HIGHEST       Mean Difference   Std.    Sig.   95% CI    95% CI
DEGREE EARNED     DEGREE EARNED     (I-J)             Error          Lower     Upper
B.A.              M.A.              -3.078            1.710   .072   -6.43     .28
                  PH.D.             -9.933*           1.589   .000   -13.05    -6.82
                  UNKNOWN           -1.895            1.627   .244   -5.09     1.30
M.A.              B.A.              3.078             1.710   .072   -.28      6.43
                  PH.D.             -6.855*           .697    .000   -8.22     -5.49
                  UNKNOWN           1.183             .779    .129   -.34      2.71
PH.D.             B.A.              9.933*            1.589   .000   6.82      13.05
                  M.A.              6.855*            .697    .000   5.49      8.22
                  UNKNOWN           8.038*            .456    .000   7.14      8.93
UNKNOWN           B.A.              1.895             1.627   .244   -1.30     5.09
                  M.A.              -1.183            .779    .129   -2.71     .34
                  PH.D.             -8.038*           .456    .000   -8.93     -7.14

*. The mean difference is significant at the 0.05 level.

Findings
(a) The figures in the ANOVA table above indicate that there is a significant difference in faculty salary across the four levels of highest degree earned (F = 130.734, sig = .000).

(b) The LSD post hoc test indicates significant differences between several of the degree-level groups, so the null hypothesis stating no significant difference is therefore rejected.

Conclusion
Faculty members holding higher degrees also have higher salaries.

Histogram

Report # 4

Variables to analyze

1. Student expected grade
2. Student grade point average

Statistical Tool

Paired sample t-test

Problem Statement
Is there any significant difference between student expected grade and student grade point average?

Null Hypothesis
There is no significant difference between student expected grade and student grade point average.

Statistics to be used
Since both variables are measured on the same group of individuals, a paired sample t-test will be used to test for a significant difference between students' expected grades and their grade point averages.
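A sketch of the paired-samples correlation and t-test in Python; the file name and the column names expected_grade and gpa are assumptions for illustration:

```python
import pandas as pd
from scipy.stats import pearsonr, ttest_rel

# "university.csv", "expected_grade" and "gpa" are assumed names for illustration only
df = pd.read_csv("university.csv").dropna(subset=["expected_grade", "gpa"])

r, p_r = pearsonr(df["expected_grade"], df["gpa"])   # paired-samples correlation
t, p_t = ttest_rel(df["expected_grade"], df["gpa"])  # paired-samples t-test
print(f"r = {r:.3f} (p = {p_r:.4f}), t = {t:.3f} (p = {p_t:.4f})")
```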
Paired Samples Correlations

                                                 N       Correlation   Sig.
Pair 1   EXPECTED STUDENTS GRADE POINT AVERAGE
         & STUDENTS GRADE POINT AVERAGE         1358     .357          .000

Paired Samples Test

                                      Paired Differences
                                      Mean   Std. Deviation   Std. Error Mean   95% CI Lower   95% CI Upper   t        df     Sig. (2-tailed)
Pair 1   EXPECTED GRADE IN COURSE -
         GRADE IN COURSE              .493   1.110            .030              .434           .552           16.358   1357   .000

Findings

(a) The correlation between the expected grade and the grade point average is low (r = .357) but significant (sig = .000); the use of the paired sample t-test is therefore justified.

(b) The paired sample t-test indicates that there is a significant difference between the expected grade and the grade point average of the students (t = 16.358, sig = .000), so the null hypothesis claiming no significant difference is therefore rejected.

Conclusion
Most of the students did not get the grades they expected; on average, expected grades were higher than the grades actually obtained.

Histogram

Report # 5

Variables to analyze

1. Faculty salary
2. Faculty rank

Statistical Tool

Pearson Correlation

Problem Statement
Is there any significant correlation between faculty rank and salary?

Null Hypothesis
There is no significant correlation between faculty rank and salary.

Statistics to be used
Since both variables are treated as numeric (faculty salary is measured at the ratio level and faculty rank is coded numerically), the Pearson correlation will be used to measure the relationship between them.
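A sketch of this correlation in Python; the file name and the column names faculty_rank and salary are assumptions for illustration:

```python
import pandas as pd
from scipy.stats import pearsonr

# "university.csv", "faculty_rank" and "salary" are assumed names for illustration only
df = pd.read_csv("university.csv").dropna(subset=["faculty_rank", "salary"])

r, p = pearsonr(df["faculty_rank"], df["salary"])
print(f"r = {r:.3f}, p = {p:.4f}")
```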

Correlations

                                          FACULTY SALARY   FACULTY RANK
FACULTY SALARY   Pearson Correlation      1                .715**
                 Sig. (2-tailed)                           .000
                 N                        1428             1428
FACULTY RANK     Pearson Correlation      .715**           1
                 Sig. (2-tailed)          .000
                 N                        1428             1428

**. Correlation is significant at the 0.01 level (2-tailed).

Findings

The Pearson correlation between faculty rank and salary indicates a high positive correlation (r = .715, sig = .000), so the null hypothesis claiming no significant correlation is therefore rejected.

Conclusion
Salary is strongly related to faculty rank: those who hold higher ranks tend to receive higher salaries.

Histogram
