
Introduction

Factor analysis is a statistical method used to describe variability among observed,
correlated variables in terms of a potentially lower number of unobserved variables
called factors. For example, it is possible that variations in six observed variables mainly reflect
the variations in two unobserved (underlying) variables. Factor analysis searches for such joint
variations in response to unobserved latent variables. The observed variables are modeled
as linear combinations of the potential factors, plus "error" terms. Factor analysis aims to find
independent latent variables. Followers of factor analytic methods believe that the information
gained about the interdependencies between observed variables can be used later to reduce the
set of variables in a dataset. Factor analysis is not used to any significant degree in physics,
biology, and chemistry, but is used very heavily in psychometrics, personality theory,
marketing, product management, and operations research. Users of factor analysis believe that it
helps to deal with data sets where there are large numbers of observed variables that are thought
to reflect a smaller number of underlying/latent variables.

Example

Suppose a psychologist has the hypothesis that there are two kinds of intelligence, "verbal
intelligence" and "mathematical intelligence", neither of which is directly observed. Evidence for
the hypothesis is sought in the examination scores from each of 10 different academic fields of
1000 students. If each student is chosen randomly from a large population, then each student's 10
scores are random variables. The psychologist's hypothesis may say that for each of the 10
academic fields, the score averaged over the group of all students who share some common pair
of values for verbal and mathematical "intelligences" is some constant times their level of verbal
intelligence plus another constant times their level of mathematical intelligence, i.e., it is a
combination of those two "factors". The numbers for a particular subject, by which the two kinds
of intelligence are multiplied to obtain the expected score, are posited by the hypothesis to be the
same for all intelligence level pairs, and are called the "factor loadings" for this subject. For
example, the hypothesis may hold that the average student's aptitude in the field of astronomy is

{10 × the student's verbal intelligence} + {6 × the student's mathematical intelligence}.

The numbers 10 and 6 are the factor loadings associated with astronomy. Other academic
subjects may have different factor loadings.

Two students having identical degrees of verbal intelligence and identical degrees of
mathematical intelligence may have different aptitudes in astronomy because individual
aptitudes differ from average aptitudes. That difference is called the "error", a statistical term
that means the amount by which an individual differs from what is average for his or her levels
of intelligence.
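The two-factor model of this example can be sketched numerically. In the minimal simulation below, only the loadings 10 and 6 come from the text; the standard-normal latent abilities and the error scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students = 1000

# Hypothetical latent abilities (never observed directly).
verbal = rng.normal(size=n_students)
math_ability = rng.normal(size=n_students)

# Factor loadings for astronomy, as in the text: 10 and 6.
loading_verbal, loading_math = 10.0, 6.0

# Observed astronomy score = loadings × factors + individual "error".
error = rng.normal(scale=2.0, size=n_students)
astronomy = loading_verbal * verbal + loading_math * math_ability + error

# The observed score correlates strongly with both latent variables,
# more strongly with the one carrying the larger loading.
r_verbal = np.corrcoef(astronomy, verbal)[0, 1]
r_math = np.corrcoef(astronomy, math_ability)[0, 1]
```

Factor analysis works in the opposite direction: given only observed scores like `astronomy` across many subjects, it tries to recover the loadings and the latent factors.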
Types of Factor Analysis

Exploratory factor analysis (EFA) is used to identify complex interrelationships among items
and group items that are part of unified concepts. The researcher makes no a priori assumptions
about relationships among factors.

Confirmatory factor analysis (CFA) is a more complex approach that tests the hypothesis that
the items are associated with specific factors. CFA uses structural equation modeling to test a
measurement model whereby loading on the factors allows for evaluation of relationships
between observed variables and unobserved variables. Structural equation modeling approaches
can accommodate measurement error, and are less restrictive than least-squares estimation.
Hypothesized models are tested against actual data, and the analysis would demonstrate
loadings of observed variables on the latent variables (factors), as well as the correlation between
the latent variables.

Types of Factoring

Principal component analysis (PCA) is a widely used method for factor extraction, which is
the first phase of EFA. Factor weights are computed to extract the maximum possible variance,
with successive factoring continuing until there is no further meaningful variance left. The factor
model must then be rotated for analysis.

Canonical factor analysis, also called Rao's canonical factoring, is a different method of
computing the same model as PCA, which uses the principal axis method. Canonical factor
analysis seeks factors which have the highest canonical correlation with the observed variables.
Canonical factor analysis is unaffected by arbitrary rescaling of the data.

Common factor analysis, also called principal factor analysis (PFA) or principal axis factoring
(PAF), seeks the least number of factors which can account for the common variance
(correlation) of a set of variables.

Image factoring is based on the correlation matrix of predicted variables rather than actual
variables, where each variable is predicted from the others using multiple regressions.

Alpha factoring is based on maximizing the reliability of factors, assuming variables are
randomly sampled from a universe of variables. All other methods assume cases to be sampled
and variables fixed.
OBJECTIVE 1
To measure various aspects of students' anxiety towards learning SPSS. The data comprise
responses obtained from students through a questionnaire (secondary source) consisting
of 23 questions/variables. The questions were answered on a 5-point Likert scale
ranging from Strongly Disagree to Strongly Agree.

Aim: To know whether anxiety about SPSS could be broken down into specific forms of anxiety.
In other words, are there other traits that might contribute towards anxiety about SPSS?

Solution:

The factor analysis was run on the available data using SPSS. Following are the steps:

1. Load the data file into the SPSS software.
2. Mark the variables as ordinal since responses are on a Likert scale.
3. Go to Analyze → Dimension Reduction → Factor.
4. Select the variables to be analyzed.
5. Go to Descriptives in the dialog box and select Univariate solution, KMO and Bartlett's
test of sphericity.
6. Go to Extraction and select the method of factor analysis along with the scree plot.
7. Choose the method of rotation as varimax (orthogonal rotation).
8. Finally, run the analysis. You will get a host of tables to interpret; we'll look at them in
the next section.
DATA FILES
Following are the important tables that were obtained from the SPSS software:

1. Correlation Matrix
2. KMO & Bartlett's Test of Sphericity
3. Factor Extraction Table
4. Communalities Table
5. Component Matrix
6. Rotated Component Matrix
Data Interpretation
In the following section we will take a look at each of the tables shown in the previous section.

1 Correlation Matrix: The table gives us the Pearson correlation coefficient between all
pairs of questions. We know that to do a factor analysis we need to have variables that
correlate fairly well, but not perfectly. Also, any variables that correlate with no others
should be eliminated. Therefore, we can use this correlation matrix to check the pattern
of relationships. First, scan the matrix for correlations greater than .3, and then look for
variables that only have a small number of correlations greater than this value. Then scan
the correlation coefficients themselves and look for any greater than 0.9. If any are found,
you should be aware that a problem could arise because of multicollinearity in the
data. In this case, we can see that all the values are within range.
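The screening rules above can be sketched in code. The 4-item correlation matrix below is made up for illustration; only the thresholds .3 and .9 come from the text.

```python
import numpy as np

# Toy correlation matrix for four items (hypothetical values).
R = np.array([
    [1.00, 0.45, 0.38, 0.05],
    [0.45, 1.00, 0.52, 0.08],
    [0.38, 0.52, 1.00, 0.02],
    [0.05, 0.08, 0.02, 1.00],
])

off_diag = ~np.eye(len(R), dtype=bool)

# Items with no correlation above .3 with any other item are
# candidates for elimination.
weak_items = [i for i in range(len(R))
              if not np.any(np.abs(R[i][off_diag[i]]) > 0.3)]

# Correlations above .9 would signal possible multicollinearity.
too_high = np.abs(R[off_diag]) > 0.9
```

Here the fourth item barely correlates with the others and would be flagged, while no pair exceeds .9.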

2 KMO & Bartlett's Test: The KMO test is used to determine whether the data set obtained is
good enough to perform a factor analysis. As a rule of thumb, the bare minimum
acceptable value is 0.5; values between 0.5 and 0.7 are mediocre, values between 0.7 and
0.8 are good, values between 0.8 and 0.9 are great, and values above 0.9 are superb. For
these data the value is 0.93, which falls into the range of being superb, so we should be
confident that the sample size is adequate for factor analysis. Bartlett's measure tests the
null hypothesis that the original correlation matrix is an identity matrix. For factor
analysis to work we need some relationships between variables, and if the R-matrix were
an identity matrix then all correlation coefficients would be zero. Therefore, we want this
test to be significant (i.e. have a significance value less than .05). A significant test tells
us that the R-matrix is not an identity matrix; therefore, there are some relationships
between the variables we hope to include in the analysis. For these data, Bartlett's test is
highly significant (p < .001), and therefore factor analysis is appropriate.
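Bartlett's statistic has a simple closed form: χ² = −(n − 1 − (2p + 5)/6)·ln|R|, with p(p − 1)/2 degrees of freedom, where n is the sample size and p the number of variables. A sketch with a made-up 3-variable correlation matrix and sample size:

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Chi-square statistic for H0: the correlation matrix R is an identity matrix."""
    p = R.shape[0]
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return statistic, df

# Hypothetical correlation matrix with moderate correlations.
R = np.array([
    [1.0, 0.5, 0.4],
    [0.5, 1.0, 0.6],
    [0.4, 0.6, 1.0],
])
stat, df = bartlett_sphericity(R, n=200)
# Compare stat against the chi-square critical value for df degrees of
# freedom (7.81 at the .05 level for df = 3): a larger statistic rejects H0.
```

The closer the determinant of R is to 1 (an identity matrix), the smaller the statistic; strong correlations shrink the determinant and inflate it.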

3 Factor Extraction Table: This particular table lists the eigenvalues associated
with each linear component (factor) before extraction, after extraction and
after rotation. Before extraction, SPSS has identified 23 linear components
within the data set (we know that there should be as many eigenvectors as
there are variables and so there will be as many factors as variables). The
eigenvalues associated with each factor represent the variance explained by
that particular linear component and SPSS also displays the eigenvalue in
terms of the percentage of variance explained (so, factor 1 explains 31.696%
of total variance). It should be clear that the first few factors explain
relatively large amounts of variance (especially factor 1) whereas subsequent
factors explain only small amounts of variance. SPSS then extracts all factors
with eigenvalues greater than 1, which leaves us with four factors.
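The extraction step can be sketched as an eigendecomposition of the correlation matrix, with the eigenvalue-greater-than-1 rule (the Kaiser criterion) deciding how many factors to retain. The 5-variable matrix below is illustrative, not the study data:

```python
import numpy as np

# Toy correlation matrix: two clusters of correlated items.
R = np.array([
    [1.0, 0.6, 0.5, 0.1, 0.1],
    [0.6, 1.0, 0.6, 0.1, 0.1],
    [0.5, 0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.5],
    [0.1, 0.1, 0.1, 0.5, 1.0],
])

# Eigenvalues in descending order; they sum to the number of variables.
eigenvalues = np.linalg.eigvalsh(R)[::-1]

# Each eigenvalue, as a share of the total, is the percentage of
# variance that linear component explains.
pct_variance = 100 * eigenvalues / eigenvalues.sum()

# Kaiser criterion: retain components with eigenvalue > 1.
n_retained = int(np.sum(eigenvalues > 1))
```

For this matrix two eigenvalues exceed 1, matching its two-cluster structure, so two factors would be retained.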

4 Table of Communalities: The communalities in the column labeled Extraction
reflect the common variance in the data structure. So, for example, we can
say that 43.5% of the variance associated with question 1 is common, or
shared variance. Another way to look at these communalities is in terms of
the proportion of variance explained by the underlying factors. After
extraction some of the factors are discarded and so some information is lost.
The amount of variance in each variable that can be explained by the
retained factors is represented by the communalities after extraction.
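A communality after extraction is simply the sum of a variable's squared loadings on the retained factors. A sketch with hypothetical loadings:

```python
import numpy as np

# Hypothetical loadings of three variables on two retained factors.
loadings = np.array([
    [0.62, 0.21],
    [0.70, 0.10],
    [0.15, 0.80],
])

# Communality = sum of squared loadings = share of a variable's
# variance explained by the retained factors.
communalities = (loadings ** 2).sum(axis=1)
```

Each communality lies between 0 and 1; a low value means the retained factors explain little of that variable.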

5 Rotated Component Matrix: This table shows the rotated component matrix
(also called the rotated factor matrix in factor analysis) which is a matrix of
the factor loadings for each variable onto each factor. Basically, it tells you
how each variable is related to the factors obtained.
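The varimax rotation used here can be sketched with the standard SVD-based algorithm (Kaiser, 1958). Because the rotation is orthogonal, it leaves each variable's communality unchanged; the loadings below are hypothetical:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a factor-loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # SVD step that increases the varimax criterion.
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):
            break
        criterion = s.sum()
    return loadings @ rotation

# Hypothetical unrotated loadings: four variables, two factors.
unrotated = np.array([
    [0.7,  0.3],
    [0.6,  0.4],
    [0.4, -0.6],
    [0.3, -0.7],
])
rotated = varimax(unrotated)
```

Rotation redistributes variance among the factors so that each variable loads highly on as few factors as possible, which is what makes the rotated matrix easier to label.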

Conclusion
What we can conclude from the above tables is that the data obtained was good enough to run
the factor analysis and that the variables are fairly correlated with each other for us to conduct
the analysis. The rotated matrix tells us the loadings of each factor on the variables and hence
these loadings can be used to find the underlying constructs in the real world. The questions that
load highly on factor 1 seem to all relate to using computers or SPSS. Therefore we might label
this factor fear of computers. The questions that load highly on factor 2 all seem to relate to
different aspects of statistics; therefore, we might label this factor fear of statistics. The three
questions that load highly on factor 3 all seem to relate to mathematics; therefore, we might label
this factor fear of mathematics. Finally, the questions that load highly on factor 4 all contain
some component of social evaluation from friends; therefore, we might label this factor peer
evaluation. This analysis seems to reveal that the initial questionnaire, in reality, is composed of
four subscales: fear of computers, fear of statistics, fear of maths, and fear of negative peer
evaluation.

Managerial Implications
As researchers, we can take the above data and results and use them to find better and
more effective ways to teach SPSS to students. If our conclusions hold, then the reason
some students struggle to learn SPSS is not that the software itself is difficult; rather,
underlying factors like those discussed above may be preventing them from learning it.
Instead of simply making students practice SPSS more, we could first focus on teaching
statistics, mathematics, or computer skills, depending on where the problem lies for a
particular person.

OBJECTIVE 2
To understand the personality of a person. The data comprise responses obtained from 459
respondents on 44 personality attributes. The questions were answered on a 5-point Likert scale
ranging from Strongly Disagree to Strongly Agree.

Aim: To understand what broad factors can be used to describe the personality of a person.

Solution:

The factor analysis was run on the available data using SPSS. Following are the steps:

1. Load the data file into the SPSS software.
2. Mark the variables as ordinal since responses are on a Likert scale.
3. Go to Analyze → Dimension Reduction → Factor.
4. Select the variables to be analyzed.
5. Go to Descriptives in the dialog box and select Univariate solution, Coefficients, and
KMO and Bartlett's test of sphericity.
6. Go to Extraction and select the method of factor analysis as Principal Components. In
Display, select Unrotated factor solution and Scree plot. Select Extract based on
eigenvalues greater than 1.
7. In Rotation, choose the method of rotation as varimax (orthogonal rotation).
8. In Scores, select Save as variables, with method Bartlett.
9. In Options, select Exclude cases listwise and check Sorted by size.
10. Finally, run the analysis. A number of tables will be displayed, which are analyzed as
below:

1. KMO and Bartlett's Test of Sphericity


KMO and Bartlett's Test
Kaiser-Meyer-Olkin Measure of Sampling Adequacy. .841
Bartlett's Test of Sphericity Approx. Chi-Square 6730.363
df 946
Sig. 0.000

2. Correlation Matrix

(Only the first eight columns of the correlation matrix are shown: talkative, finds fault, does a thorough job, depressed, original, reserved, ditractable, sophisticated in art & music.)

Correlation talkative 1.000 .069 .204 -.265 .290 -.452 .023 -.015
finds fault .069 1.000 -.012 .119 -.115 .067 .193 -.010
does a thorough job .204 -.012 1.000 -.111 .212 .042 -.306 .012

depressed -.265 .119 -.111 1.000 -.236 .256 .172 -.077


original .290 -.115 .212 -.236 1.000 -.170 -.198 .210
reserved -.452 .067 .042 .256 -.170 1.000 .021 .063
helpful .280 -.264 .338 -.122 .279 -.124 -.196 .092
careless -.096 .081 -.387 .206 -.215 .003 .428 -.062
relaxed .071 -.153 .093 -.407 .204 -.121 -.165 .088
curious .206 .102 .196 -.058 .250 -.007 -.047 .078
full of energy .133 .078 .065 -.291 .162 -.178 .004 .124
starts quarrels -.110 .268 -.144 .141 -.007 -.007 .171 -.056
reliable .283 -.103 .549 -.206 .302 -.025 -.276 .023
tense -.085 .171 -.001 .434 -.202 .204 .209 -.056
ingenious -.063 .015 .160 .057 .283 .064 -.114 .222
generates enthusiasm in others .394 .080 .115 -.152 .330 -.285 .015 .066

forgiving .170 -.190 .098 -.102 .072 -.056 .022 .024


disorganized -.119 -.089 -.375 .066 -.152 -.044 .366 -.023
worries -.048 .028 .095 .345 -.078 .155 .018 .052
imaginative .272 .081 .110 -.056 .338 -.106 .021 .138
quiet -.612 -.071 -.019 .229 -.275 .618 -.008 .027
trusting .330 -.198 .202 -.222 .173 -.124 -.102 .070
lazy -.109 .161 -.430 .249 -.218 .026 .423 -.004
emotionally stable .124 -.077 .074 -.423 .216 -.151 -.119 .061

inventive .138 .060 .053 -.100 .432 -.022 -.010 .232


assertive .343 .188 .110 -.189 .280 -.313 -.021 .116
cold and aloof -.264 .266 -.112 .223 -.205 .244 .140 -.002
perseveres .209 -.078 .490 -.198 .272 .000 -.304 .014
moody -.003 .219 -.017 .349 -.095 .140 .190 -.066
values artistic experiences .029 -.030 .074 -.044 .226 .044 .004 .462
shy -.398 -.085 .002 .179 -.194 .510 .070 -.022
considerate .303 -.244 .314 -.088 .158 -.080 -.189 .082
efficient .058 .101 .443 -.104 .062 .060 -.218 .018
calm in tense situations -.032 -.087 .070 -.216 .090 -.069 -.117 -.025
prefers routine work -.208 .022 -.020 .066 -.180 .116 -.071 -.019

outgoing .579 .096 .072 -.314 .214 -.468 .024 .092


sometimes rude -.030 .263 -.254 .110 -.100 -.104 .263 -.061
sticks to plans .168 .043 .379 -.153 .238 -.009 -.258 .078
nervous -.237 .048 -.068 .347 -.242 .377 .209 -.027
reflective .043 -.090 .137 -.038 .383 -.031 -.109 .233
few artistic interests -.091 .033 -.081 .086 -.182 .088 .083 -.327

co-operative .365 -.224 .301 -.174 .217 -.146 -.179 .050


ditractable .023 .193 -.306 .172 -.198 .021 1.000 -.046
sophisticated in art & music -.015 -.010 .012 -.077 .210 .063 -.046 1.000

3. Total Variance Explained

Total Variance Explained

           Initial Eigenvalues            Extraction Sums of Squared Loadings   Rotation Sums of Squared Loadings
Component  Total  % of Variance  Cumulative %   Total  % of Variance  Cumulative %   Total  % of Variance  Cumulative %
1 7.280 16.545 16.545 7.280 16.545 16.545 4.388 9.972 9.972
2 4.192 9.527 26.072 4.192 9.527 26.072 3.716 8.446 18.419
3 3.041 6.910 32.982 3.041 6.910 32.982 3.543 8.052 26.471
4 2.548 5.790 38.772 2.548 5.790 38.772 2.908 6.610 33.081
5 2.253 5.121 43.893 2.253 5.121 43.893 2.295 5.215 38.295
6 1.594 3.624 47.517 1.594 3.624 47.517 2.115 4.807 43.102
7 1.318 2.996 50.513 1.318 2.996 50.513 2.111 4.798 47.900
8 1.224 2.781 53.294 1.224 2.781 53.294 1.880 4.272 52.172
9 1.154 2.622 55.916 1.154 2.622 55.916 1.516 3.445 55.617
10 1.044 2.373 58.289 1.044 2.373 58.289 1.176 2.672 58.289

4. Communalities Table

Communalities

Initial Extraction
talkative 1.000 .749
finds fault 1.000 .564
does a thorough job 1.000 .581
depressed 1.000 .608
original 1.000 .593
reserved 1.000 .661
helpful 1.000 .629
careless 1.000 .533
relaxed 1.000 .629
curious 1.000 .504
full of energy 1.000 .537
starts quarrels 1.000 .445
reliable 1.000 .631
tense 1.000 .605
ingenious 1.000 .614
generates enthusiasm in others 1.000 .624
forgiving 1.000 .351
disorganized 1.000 .550
worries 1.000 .581
imaginative 1.000 .510
quiet 1.000 .744
trusting 1.000 .519
lazy 1.000 .571
emotionally stable 1.000 .558
inventive 1.000 .591
assertive 1.000 .571
cold and aloof 1.000 .517
sometimes rude 1.000 .553
sticks to plans 1.000 .476
nervous 1.000 .579
reflective 1.000 .595
few artistic interests 1.000 .614
co-operative 1.000 .520
ditractable 1.000 .531
sophisticated in art & music 1.000 .659
Extraction Method: Principal Component Analysis.

5. Component Matrix

Component
1 2 3 4 5 6 7 8 9 10
original .568 -.083 .277 .226 -.119 -.179 .197 .211 .053 -.059
lazy -.562 -.388 .123 -.017 -.203 .047 .101 .025 .107 .154
perseveres .559 .390 .031 -.033 .221 -.029 .034 -.030 .124 -.098
reliable .556 .410 .036 -.101 .124 -.012 .093 -.047 .197 -.277
talkative .551 -.285 .278 -.490 .035 -.017 .041 -.076 .181 -.067
helpful .532 .299 -.001 -.216 -.193 .042 .108 .082 .222 .323
co-operative .530 .259 .064 -.291 -.199 .153 .015 .093 .103 .003
considerate .517 .344 .036 -.262 -.189 .252 .063 -.028 .043 .364
depressed -.516 .262 .329 -.089 .045 -.070 .104 .034 -.105 .357
does a thorough job .510 .434 .065 -.002 .316 .053 .100 -.056 .105 -.038
outgoing .494 -.460 .162 -.285 .050 .154 -.209 -.050 .039 -.052
sticks to plans .481 .241 .058 .098 .295 .123 -.222 .080 .058 -.110
careless -.468 -.340 .200 -.079 -.225 .303 .039 .057 .000 -.077
nervous -.465 .386 .290 -.082 -.001 .309 -.028 .015 -.141 -.085
generates enthusiasm in others .459 -.334 .327 -.018 .100 .278 .007 -.026 -.299 .131

trusting .448 .163 -.006 -.343 -.266 .298 -.031 .047 .107 .009
emotionally stable .419 -.310 -.301 .309 -.034 .196 .060 .070 .172 -.151
cold and aloof -.416 .005 .118 .255 .305 .162 -.028 .025 .380 .028
ditractable -.404 -.310 .281 -.142 -.162 .264 .054 -.244 .111 .032
relaxed .402 -.244 -.393 .363 -.068 .242 .112 .166 .136 .013
starts quarrels -.294 -.263 .185 .104 .279 .100 -.015 .249 .233 .201
quiet -.435 .552 -.143 .399 -.082 .228 .047 -.088 -.019 -.039
reserved -.369 .526 -.023 .380 .015 .200 .073 -.233 -.040 -.054
shy -.364 .495 -.047 .245 -.186 .277 .109 -.169 .210 -.079
worries -.189 .475 .419 -.211 -.055 -.046 -.110 .244 -.092 -.116
assertive .386 -.465 .229 .102 .328 .108 -.035 .042 -.034 .139
sometimes rude -.386 -.417 .245 .103 .235 .020 .107 .085 .288 -.043
imaginative .301 -.052 .506 .125 -.144 -.060 .230 -.045 -.015 -.256
tense -.379 .310 .499 -.241 .117 .070 -.006 .003 -.176 .093
moody -.346 .123 .448 -.088 .172 .085 .159 -.069 .386 .061
curious .313 -.006 .444 .165 -.002 .101 .156 -.185 -.186 -.280
inventive .302 -.215 .364 .349 -.099 -.118 .232 -.248 -.239 .059
calm in tense situations .299 -.263 -.358 .364 .071 .208 .213 -.136 -.041 .243
reflective .333 .141 .343 .355 -.278 -.035 .200 .315 .025 .055
finds fault -.185 -.195 .354 .081 .539 -.019 -.130 -.190 .112 -.053
efficient .390 .296 -.031 .123 .479 .126 .038 -.174 -.177 .308
disorganized -.382 -.370 .059 -.072 -.457 .185 .050 .043 -.012 -.102
forgiving .252 -.012 -.015 -.114 -.275 .397 .038 -.199 -.019 .007
full of energy .353 -.306 .159 .172 .147 .389 -.206 -.014 -.207 -.077
sophisticated in art & music .185 .024 .267 .388 -.305 -.078 -.503 -.009 .088 .207
values artistic experiences .206 .079 .327 .392 -.325 -.013 -.429 -.104 .171 .002
prefers routine work -.185 .158 -.133 -.007 .160 .369 -.420 .369 -.129 -.105
few artistic interests -.293 -.053 -.203 -.194 .281 .311 .351 .342 -.137 -.107
ingenious .166 .160 .366 .403 -.047 -.045 .132 .477 -.108 .063
Extraction Method: Principal Component Analysis.
a. 10 components extracted.

6. Rotated Components Matrix

Rotated Component Matrixa

Component
1 2 3 4 5 6 7 8 9 10
disorganized -.723 .027 -.011 .045 .026 -.016 -.001 -.019 .025 .149
does a thorough job .697 .042 .005 .246 .112 .055 .088 -.049 -.063 .063
lazy -.671 .016 .074 -.123 -.093 .268 -.031 -.040 -.079 -.099
perseveres .667 -.039 -.028 .247 .046 -.029 .091 .021 -.054 .143
careless -.654 .051 .112 -.003 .146 .198 -.025 -.091 .106 .101
efficient .626 .073 -.040 .076 .286 .029 -.053 -.061 -.034 -.404
reliable .624 -.012 -.024 .311 .018 -.052 .085 -.021 -.089 .355
sticks to plans .568 -.068 -.083 .115 .189 .050 .081 .136 .235 .095
ditractable -.550 .041 .163 .078 .197 .299 -.211 .006 -.134 .057
quiet .007 .841 .081 -.074 -.085 .008 .011 .018 .123 -.045
reserved .088 .785 .137 -.101 .047 .053 -.040 .034 -.004 -.026
shy -.052 .752 .080 .140 -.110 .134 -.038 .046 -.004 .118
talkative .142 -.658 .019 .388 .185 .081 -.056 -.015 -.207 .241
outgoing .018 -.619 -.163 .239 .350 .041 -.125 .112 .054 .131
assertive .114 -.463 -.233 -.059 .428 .249 .112 .037 -.003 -.161
relaxed .056 -.003 -.747 .130 .058 -.020 .183 -.023 .108 -.040
tense -.058 .141 .724 .012 .125 .159 .023 -.080 .041 -.087
emotionally stable .066 -.085 -.701 .072 .125 .008 .107 .020 .061 .138
worries .074 .119 .659 .085 -.074 -.018 .225 .029 .193 .157
calm in tense situations .061 .061 -.644 .033 .205 -.016 .000 -.073 -.129 -.322
depressed -.172 .226 .584 -.067 -.111 .190 .074 -.067 -.079 -.342
nervous -.139 .434 .528 .048 .134 .109 -.016 -.102 .217 .051
considerate .285 -.039 .038 .730 .052 -.133 .064 .056 -.050 -.215
trusting .116 -.131 -.032 .658 .052 -.150 .000 .021 .099 .136
helpful .314 -.126 -.043 .644 -.166 -.065 .168 .082 -.118 -.131
co-operative .272 -.153 .022 .599 .013 -.158 .126 .037 .030 .140
forgiving -.092 .060 -.141 .461 .271 -.125 -.106 .034 -.030 .061
generates enthusiasm in others .035 -.375 -.075 .170 .632 -.014 .145 .013 -.010 -.163
full of energy .051 -.191 -.234 .027 .610 .032 .039 .113 .235 -.007
curious .137 .001 .072 .018 .543 -.043 .239 .065 -.231 .263
cold and aloof -.054 .271 .012 -.208 -.061 .615 -.045 .019 .115 -.005
moody -.068 .142 .375 .064 -.035 .598 .008 -.039 -.160 .085
sometimes rude -.316 -.107 -.009 -.302 .023 .576 .035 -.097 -.040 .074
starts quarrels -.183 -.100 .012 -.174 -.028 .547 .101 -.053 .161 -.180
finds fault .112 -.121 .179 -.377 .270 .504 -.177 .051 -.038 .019
ingenious .114 .065 .066 -.052 .095 .047 .746 .088 .090 -.081
reflective .076 .053 -.051 .175 .076 -.020 .707 .190 -.104 .025
original .214 -.267 -.193 .105 .127 -.046 .568 .142 -.220 .135
imaginative .041 -.102 .085 .065 .330 .014 .396 .103 -.319 .329
sophisticated in art & music -.015 .002 -.030 .032 .062 -.009 .208 .769 .091 -.105
values artistic experiences .017 .093 -.029 .076 .132 .030 .191 .744 .023 .123
few artistic interests -.130 .077 .000 -.044 .002 .119 .022 -.705 .277 -.034
prefers routine work .016 .134 .079 -.034 .038 .015 -.054 -.036 .740 -.016
inventive -.003 -.072 -.095 -.108 .453 -.092 .306 .184 -.466 -.078

Extraction Method: Principal Component Analysis.


Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 16 iterations.

Data Interpretation

1. Correlation Matrix

This n*n matrix provides the correlation of one variable with every other variable. Each
variable has a correlation of 1 with itself, hence the diagonal elements are all unity. The value
of correlation can range from -1 to +1 (both inclusive) i.e. perfect negative correlation to
perfect positive correlation.

2. KMO and Bartlett's Test

The KMO test gives a number that indicates the degree of correlation among the variables. A
value greater than 0.5 means that the variables are sufficiently correlated so as to perform a
factor analysis. In this case, the value for the KMO test is 0.841, indicating that the variables
are sufficiently correlated.
The Bartlett test gives a significance value. The null hypothesis is that the correlation
matrix is an identity matrix, i.e. that no variable is correlated with any other variable.
A significance value less than 0.05 rejects the null hypothesis, so the alternative
hypothesis, that the variables are correlated, holds. In our case, as seen above, the
significance value is less than 0.05, hence the variables are correlated with each other,
and factor analysis can be applied.

3. Total Variance Explained

This table displays the eigenvalues for all factors generated. In our test, we had set the
condition that eigenvalues should be greater than 1. An eigenvalue greater than 1 means
that the factor explains more variance than a single standardized variable does, so only
such factors are retained (the Kaiser criterion). In this case, a total of 10 factors with
eigenvalues greater than 1 are generated.

4. Communalities table

The communalities in the column labeled Extraction reflect the common variance in the data
structure. So, for example, we can say that 74.9% of the variance associated with variable
talkative is common, or shared variance. It also represents the amount of variance in each
variable that can be explained by the retained factors.

5. Rotated Components Matrix

This table tells us how much each variable is related to each of the factors generated. A cutoff
value can be chosen, and all variables whose loadings have an absolute value greater than or
equal to the cutoff are taken to form part of that factor, and similarly for the other
factors.
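The cutoff rule can be sketched with a few loadings rounded from the rotated matrix above (three of the ten factors shown); the cutoff of 0.4 is a common rule of thumb, not a value fixed by the analysis.

```python
# Illustrative loadings, rounded from the rotated component matrix
# (rows: variables; columns: three of the factors).
rotated_loadings = {
    "talkative": [0.14, -0.66, 0.02],
    "quiet":     [0.01,  0.84, 0.08],
    "reliable":  [0.62, -0.01, -0.02],
    "tense":     [-0.06, 0.14, 0.72],
}

cutoff = 0.4  # rule-of-thumb threshold for a "salient" loading

# Assign each variable to every factor where |loading| >= cutoff.
assignments = {
    var: [j for j, loading in enumerate(loads) if abs(loading) >= cutoff]
    for var, loads in rotated_loadings.items()
}
```

With these values, "talkative" and "quiet" fall under the second factor (with opposite signs), "reliable" under the first, and "tense" under the third.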

CONCLUSION

To conclude, we can see that the responses were adequate to run a factor analysis. The factor
analysis has brought out 10 sub-scales that can be used to describe the personality of a person.
These are:

Commitment to work
Gregariousness
Emotional stability
Teamwork attitude
Enthusiasm
Mood
Creativity
Artistic skills
Uncertainty avoidance

The above represent the broad categories that can help define personality. Our analysis also
shows that these are formed of sufficiently correlated variables that provide a comprehensive
view of the above categories.

MANAGERIAL IMPLICATIONS

Applying factor analysis to such a data set can help managers reduce a large number of
variables to a smaller set of factors or broad categories that help them interpret the data
better and make informed decisions. As in this case, the factors/categories formed represent the
broad characteristics of a person's personality from both a personal and a professional
perspective. This can help managers conduct the required personality trainings and workshops
for their employees.

APPENDIX

Complete output tables are as in the attached excel workbook.

Personality Survey
Output Final.xlsx

OBJECTIVE 3

To determine the different aspects of psychology on the basis of:

Experimental Psychology

Statistics

Social Psychology

Development

Personality
The test conducted represents the effect of each variable on the psychology of the
individual.

The test was conducted on the basis of five variables.

40 respondents participated in the survey floated.

Solution:

The factor analysis was run on the available data using SPSS. Following are the steps:

1. Load the data file into the SPSS software.
2. Mark the variables as ordinal since responses are on a Likert scale.
3. Go to Analyze → Dimension Reduction → Factor.
4. Select the variables to be analyzed.
5. Go to Descriptives in the dialog box and select Univariate solution, KMO and
Bartlett's test of sphericity.
6. Go to Extraction and select the method of factor analysis along with the scree plot.
7. Choose the method of rotation as varimax (orthogonal rotation).
8. Finally, run the analysis. You will get a host of tables to interpret; we'll
look at them in the next section.

DATA FILES
Correlation Matrix

Total Variance Explained

KMO & Bartlett's Test

Communalities

                          Initial  Extraction
Experimental Psychology    1.000    .711
Statistics                 1.000    .663
Social Psychology          1.000    .764
Developmental              1.000    .804
Personality                1.000    .745

Extraction Method: Principal Component Analysis.
Scree Plot

Component Matrix
DATA INTERPRETATION

KMO & Bartlett's Test: The KMO test is used to determine whether the data set
obtained is good enough to perform a factor analysis. As a rule of thumb, the bare
minimum acceptable value is 0.5; values between 0.5 and 0.7 are mediocre, values
between 0.7 and 0.8 are good, values between 0.8 and 0.9 are great, and values
above 0.9 are superb. For these data the value is 0.729, which falls into the
range of being fairly good, so we should be confident that the sample size is
adequate for factor analysis.

Bartlett's measure tests the null hypothesis that the original correlation matrix is an
identity matrix. Therefore, we want this test to be significant (i.e. have a
significance value less than .05). For these data, Bartlett's test is highly significant
(p < .001), and therefore factor analysis is appropriate.
Table of Communalities: The communalities in the column labeled Extraction
reflect the common variance in the data structure. Since all the aspects have an
extraction value of more than 65%, and mostly above 70%, we can state that the
variance is well explained in the overall model.

Rotated Component Matrix: This table shows the rotated component matrix
(also called the rotated factor matrix in factor analysis) which is a matrix of the
factor loadings for each variable onto each factor. Basically, it tells you how each
variable is related to the factors obtained.

Conclusion

What we can conclude from the above tables is that the data obtained was good
enough to run the factor analysis and that the variables are fairly correlated with
each other for us to conduct the analysis. The rotated matrix tells us the loadings of
each factor on the variables and hence these loadings can be used to find the
underlying constructs in the real world. We can also say that two major factors
suffice to describe all five variables. Looking down the rotated
matrix, we can see that the first factor explains the first two variables while the
second factor explains the remaining three.

Managerial Implication

We can conclude that the psychological aspects of students can be broadly
categorized under two heads:

Academics

Social interaction of a person

The correlation between the two parameters will be high if the academic
curriculum reflects social factors more strongly. In that case, it would help
students relate the theory to the practical aspects of daily life and would give
students opting for the course better insights into it.
OBJECTIVE 4

To understand the common factors that determine the personality of a person. The data
comprise responses obtained from 165 respondents on 5 personality attributes.

Aim: To understand what common factors can be used to describe the personality of a person.

Solution:

The factor analysis was run on the available data using SPSS. Following are the steps to it:

1. Load the data file into the SPSS software.


2. Mark the variables as ordinal since the responses are on a Likert scale.
3. Go to Analyze → Dimension Reduction → Factor.
4. Select the variables to be analyzed.
5. Go to Descriptives in the dialog box and select Univariate solution, Coefficients, and
KMO and Bartlett's test of sphericity.
6. Go to Extraction and select Principal Components as the method of factor analysis. In
Display, select Unrotated factor solution and Scree plot. Select Extract based on
eigenvalues greater than 1.
7. In Rotation, choose Varimax (orthogonal rotation) as the method of rotation.
8. In Scores, select Save as variables → Bartlett.
9. In Options, select Exclude cases listwise and check Sorted by size.
10. Finally, run the analysis. A number of tables will be displayed, which are analyzed
below:

Correlation Matrix(a)

              sociable  warmth   kind  sensitivity  intelligence
sociable         1.000    .624   .460         .455          .590
warmth            .624   1.000   .722         .653          .510
kind              .460    .722  1.000         .714          .482
sensitivity       .455    .653   .714        1.000          .501
intelligence      .590    .510   .482         .501         1.000

a. Determinant = .075
Interpretation
Correlation Matrix: This table gives us Pearson's coefficient of correlation between
the pairs of variables in contention. To conduct a factor analysis, the variables need to be
fairly, but not perfectly, correlated. We can use this correlation matrix to check the pattern
of relationships; variables with no correlation or near-perfect correlation should be
eliminated. First, scan the matrix for correlations greater than .3, and then look for
variables that have only a small number of correlations greater than this value. Then scan
the correlation coefficients themselves and look for any greater than 0.9. If any are found,
you should be aware that a problem could arise because of multicollinearity in the
data. In this case all the values are within the permissible range of 0.455 to 0.722.
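The scanning procedure described above is easy to automate. Here is a small numpy sketch (illustrative only) applied to the correlation matrix from the table above:

```python
# Scan the off-diagonal correlations: flag any below .3 (too weak to
# share a factor) or above .9 (possible multicollinearity).
import numpy as np

R = np.array([
    [1.000, .624, .460, .455, .590],   # sociable
    [.624, 1.000, .722, .653, .510],   # warmth
    [.460, .722, 1.000, .714, .482],   # kind
    [.455, .653, .714, 1.000, .501],   # sensitivity
    [.590, .510, .482, .501, 1.000],   # intelligence
])
pairs = R[np.triu_indices_from(R, k=1)]   # the 10 unique variable pairs
print("range:", pairs.min(), "to", pairs.max())
print("weak (<.3):", np.sum(pairs < 0.3), "strong (>.9):", np.sum(pairs > 0.9))
```

For these data no pair is flagged, which matches the stated range of 0.455 to 0.722.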

KMO & Bartlett's Test: The KMO test is used to determine whether the data set obtained is
good enough to perform a factor analysis. For these data the value is 0.809, which
falls into the range of being great, so we should be confident that the sample size is
adequate for factor analysis. Bartlett's measure tests the null hypothesis that the original
correlation matrix is an identity matrix. For factor analysis to work we need some
relationships between variables, and if the R-matrix were an identity matrix then all
correlation coefficients would be zero. Therefore, we want this test to be significant (i.e.
have a significance value less than .05). A significant test tells us that the R-matrix is not
an identity matrix; therefore, there are some relationships between the variables we hope
to include in the analysis.
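For reference, the KMO statistic compares the squared correlations with the squared partial (anti-image) correlations obtained from the inverse of R. A minimal sketch, assuming the usual textbook formula rather than SPSS's exact routine:

```python
# Overall KMO = sum(r_ij^2) / (sum(r_ij^2) + sum(q_ij^2)) over i != j,
# where q_ij are partial correlations derived from inv(R).
import numpy as np

def kmo_overall(R):
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.diag(Rinv))
    Q = -Rinv / np.outer(d, d)        # anti-image (partial) correlations
    np.fill_diagonal(Q, 0.0)
    C = R - np.diag(np.diag(R))       # correlations with zeroed diagonal
    num = np.sum(C ** 2)
    return num / (num + np.sum(Q ** 2))

# Correlation matrix from the table above (SPSS reports KMO = .809
# for the underlying raw data).
R = np.array([
    [1.000, .624, .460, .455, .590],
    [.624, 1.000, .722, .653, .510],
    [.460, .722, 1.000, .714, .482],
    [.455, .653, .714, 1.000, .501],
    [.590, .510, .482, .501, 1.000],
])
kmo = kmo_overall(R)
```

Because the rounded published matrix is used here, the result should land near, though not exactly on, the SPSS value.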

Total Variance Explained: This table lists the eigenvalue associated with each linear
component before and after extraction. Each eigenvalue represents the variance explained
by that component (also shown as a percentage of the total variance); components with
eigenvalues greater than 1 are retained.

Communalities Table: The communalities in the column labeled Extraction reflect the
common variance in the data structure. One way to look at these communalities is in
terms of the proportion of variance explained by the underlying factors. After extraction
some of the factors are discarded, and so some information is lost. Eigenvalues are the
variances of the factors. Because we conducted our factor analysis on the correlation
matrix, the variables are standardized, which means that each variable has a variance
of 1, and the total variance is equal to the number of variables used in the analysis. The
amount of variance in each variable that can be explained by the retained factors is
represented by the communalities after extraction. Only one component has an
eigenvalue exceeding 1, and it explains a variance of around 66%.
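The eigenvalue arithmetic in this paragraph can be checked directly from the published correlation matrix. An illustrative numpy sketch:

```python
# Eigenvalues of a correlation matrix are the component variances;
# with 5 standardized variables they sum to 5, so eigenvalue/5 is
# the proportion of total variance a component explains.
import numpy as np

R = np.array([
    [1.000, .624, .460, .455, .590],
    [.624, 1.000, .722, .653, .510],
    [.460, .722, 1.000, .714, .482],
    [.455, .653, .714, 1.000, .501],
    [.590, .510, .482, .501, 1.000],
])
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending order
explained = eigvals / eigvals.sum()              # proportion per component
retained = int(np.sum(eigvals > 1))              # Kaiser criterion
```

The first eigenvalue accounts for roughly two thirds of the variance and is the only one above 1, consistent with the single retained component described above.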

Rotated Components Table: This table tells us how much each variable is related to each of
the factors generated. A cutoff value can be chosen for each factor, and all variables whose
absolute loading is greater than or equal to the cutoff form part of that factor, and
similarly for the other factors.
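The cutoff rule can be expressed directly in code. The loadings below are made-up numbers for illustration, not the SPSS output:

```python
# Assign each variable to every factor on which |loading| >= cutoff.
import numpy as np

loadings = np.array([[0.80, 0.21],    # v1  (hypothetical loadings)
                     [0.75, 0.30],    # v2
                     [0.25, 0.78],    # v3
                     [0.18, 0.82],    # v4
                     [0.40, 0.66]])   # v5
names = ["v1", "v2", "v3", "v4", "v5"]
cutoff = 0.5
groups = {f: [n for n, row in zip(names, loadings) if abs(row[f]) >= cutoff]
          for f in range(loadings.shape[1])}
print(groups)
```

With a cutoff of 0.5 the first two variables group under factor 1 and the remaining three under factor 2, mirroring the kind of grouping read off a rotated matrix.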

Conclusion

Managerial Implications

OBJECTIVE 5

AIM: To determine the satisfaction levels of staff at an educational institution with branches
in a number of locations.

Staff were asked to complete a short questionnaire containing questions about their opinion of
various aspects of the organization and the treatment they have received as employees.

The test was conducted on the basis of 10 variables.

Solution:
The factor analysis was run on the available data using SPSS. Following are the steps to it:

1. Load the data file into the SPSS software.


2. Mark the variables as ordinal since the responses are on a Likert scale.
3. Go to Analyze → Dimension Reduction → Factor.
4. Select the variables to be analyzed.
5. Go to Descriptives in the dialog box and select Univariate solution, Coefficients, and
KMO and Bartlett's test of sphericity.
6. Go to Extraction and select Principal Components as the method of factor analysis. In
Display, select Unrotated factor solution and Scree plot. Select Extract based on
eigenvalues greater than 1.
7. In Rotation, choose Varimax (orthogonal rotation) as the method of rotation.
8. In Scores, select Save as variables → Bartlett.
9. In Options, select Exclude cases listwise and check Sorted by size.
10. Finally, run the analysis. A number of tables will be displayed, which are analyzed
below:

Correlation Matrix

        q1a    q2a    q3a    q4a    q5a    q6a    q7a    q8a    q9a   q10a
q1a   1.000   .374   .244   .188   .275   .340   .264   .151   .171   .195
q2a    .374  1.000   .346   .272   .289   .320   .379   .125   .238   .347
q3a    .244   .346  1.000   .342   .412   .411   .421   .198   .369   .325
q4a    .188   .272   .342  1.000   .676   .540   .431   .133   .585   .471
q5a    .275   .289   .412   .676  1.000   .548   .441   .233   .591   .539
q6a    .340   .320   .411   .540   .548  1.000   .491   .219   .412   .436
q7a    .264   .379   .421   .431   .441   .491  1.000   .178   .385   .411
q8a    .151   .125   .198   .133   .233   .219   .178  1.000   .078   .220
q9a    .171   .238   .369   .585   .591   .412   .385   .078  1.000   .408
q10a   .195   .347   .325   .471   .539   .436   .411   .220   .408  1.000

KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy      .885
Bartlett's Test of Sphericity   Approx. Chi-Square   1592.443
                                df                   45
                                Sig.                 .000
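Two entries in this output can be cross-checked by hand. With p = 10 variables, Bartlett's test has df = p(p-1)/2, and the reported Sig. of .000 is the chi-square upper-tail probability rounded to three decimals. A quick scipy sketch:

```python
# Cross-check of the Bartlett output above: degrees of freedom and the
# tail probability for the reported chi-square statistic.
from scipy.stats import chi2

p_vars = 10
df = p_vars * (p_vars - 1) // 2     # 45, matching the table
sig = chi2.sf(1592.443, df)         # essentially zero, printed as .000
```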

Total Variance Explained

           Initial Eigenvalues           Extraction Sums of            Rotation Sums of
                                         Squared Loadings              Squared Loadings
Component  Total  % of Var.  Cum. %      Total  % of Var.  Cum. %      Total  % of Var.  Cum. %
1          4.247   42.465    42.465      4.247   42.465    42.465      3.361   33.612    33.612
2          1.134   11.339    53.804      1.134   11.339    53.804      2.019   20.192    53.804
3           .935    9.353    63.157
4           .732    7.320    70.477
5           .676    6.765    77.242
6           .600    5.995    83.237
7           .516    5.164    88.401
8           .481    4.807    93.208
9           .381    3.808    97.016
10          .298    2.984   100.000
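Reading the Total Variance Explained values above with the extraction rule from step 6 (keep components with eigenvalues greater than 1):

```python
# Initial eigenvalues from the Total Variance Explained table.
initial = [4.247, 1.134, 0.935, 0.732, 0.676,
           0.600, 0.516, 0.481, 0.381, 0.298]
retained = [e for e in initial if e > 1]           # components 1 and 2
cumulative = 100 * sum(retained) / len(initial)    # ~53.8% of total variance
print(len(retained), round(cumulative, 1))
```

Only the first two components pass the Kaiser criterion, together accounting for about 53.8% of the variance, which is why the extraction and rotation columns stop at component 2.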

Component Matrix
Rotated Component Matrix

Interpretation
Correlation Matrix: This table gives us Pearson's coefficient of correlation between
the pairs of variables in contention. To conduct a factor analysis, the variables need to be
fairly, but not perfectly, correlated. We can use this correlation matrix to check the pattern
of relationships; variables with no correlation or near-perfect correlation should be
eliminated. First, scan the matrix for correlations greater than .3, and then look for
variables that have only a small number of correlations greater than this value. Then scan
the correlation coefficients themselves and look for any greater than 0.9. If any are found,
you should be aware that a problem could arise because of multicollinearity in the
data.

KMO & Bartlett's Test: The KMO test is used to determine whether the data set obtained is
good enough to perform a factor analysis. For these data the value is 0.885, which
falls into the range of being great, so we should be confident that the sample size is
adequate for factor analysis. Bartlett's measure tests the null hypothesis that the original
correlation matrix is an identity matrix. For factor analysis to work we need some
relationships between variables, and if the R-matrix were an identity matrix then all
correlation coefficients would be zero. Therefore, we want this test to be significant (i.e.
have a significance value less than .05). A significant test tells us that the R-matrix is not
an identity matrix; therefore, there are some relationships between the variables we hope
to include in the analysis.

Total Variance Explained: This table lists the eigenvalue associated with each linear
component before and after extraction. Each eigenvalue represents the variance explained
by that component (also shown as a percentage of the total variance); components with
eigenvalues greater than 1 are retained.

Communalities Table: The communalities in the column labeled Extraction reflect the
common variance in the data structure. One way to look at these communalities is in
terms of the proportion of variance explained by the underlying factors. After extraction
some of the factors are discarded and so some information is lost. Eigenvalues are the
variances of the factors.

Rotated Component Matrix: This table shows the rotated component matrix
(also called the rotated factor matrix in factor analysis), which is a matrix of
the factor loadings for each variable onto each factor. Basically, it tells you
how each variable is related to the factors obtained.

CONCLUSION
To conclude, we can see that the responses were adequate to run a factor analysis. We found
that only two factors emerge from the factor analysis.

Factor 1 has variables: q3a, q4a, q5a, q6a, q7a, q9a, q10a.

Factor 2 has variables: q1a, q2a, q3a, q6a, q7a, q8a.

OBJECTIVE-6
To understand the necessary characteristics required for a lecturer to teach statistics. The data
comprise responses obtained from 239 respondents on 28 personality attributes. The
questions were followed by a 5-point Likert scale ranging from Strongly Disagree to Strongly
Agree.

Aim: To understand what broad factors can be used to recruit a lecturer for teaching statistics.

Solution:

The factor analysis was run on the available data using SPSS. Following are the steps to it:

1. Load the data file into the SPSS software.


2. Mark the variables as ordinal since the responses are on a Likert scale.
3. Go to Analyze → Dimension Reduction → Factor.
4. Select the variables to be analyzed.
5. Go to Descriptives in the dialog box and select Univariate solution, Coefficients, and
KMO and Bartlett's test of sphericity.
6. Go to Extraction and select Principal Components as the method of factor analysis. In
Display, select Unrotated factor solution and Scree plot. Select Extract based on
eigenvalues greater than 1.
7. In Rotation, choose Varimax (orthogonal rotation) as the method of rotation.
8. In Scores, select Save as variables → Bartlett.
9. In Options, select Exclude cases listwise and check Sorted by size.
10. Finally, run the analysis. A number of tables will be displayed, which are analyzed
below:

1. Correlation Matrix
2. KMO and Bartlett's test of sphericity
3. Total variance explained

4. Scree Plot

5. Communalities
Communalities (Extraction column)

I once woke up in the middle of a vegetable patch hugging a turnip that I'd mistakenly dug up thinking it was Roy's largest root: .646
If I had a big gun I'd shoot all the students I have to teach: .624
I memorize probability values for the F-distribution: .591
I worship at the shrine of Pearson: .589
I still live with my mother and have little personal hygiene: .545
Teaching others makes me want to swallow a large bottle of bleach because the pain of my burning oesophagus would be light relief in comparison: .621
Helping others to understand Sums of Squares is a great feeling: .486
I like control conditions: .683
I calculate 3 ANOVAs in my head before getting out of bed every morning: .638
I could spend all day explaining statistics to people: .417
I like it when people tell me I've helped them to understand factor rotation: .539
People fall asleep as soon as I open my mouth to speak: .297
Designing experiments is fun: .531
I'd rather think about appropriate dependent variables than go to the pub: .709
I soil my pants with excitement at the mere mention of Factor Analysis: .511
Thinking about whether to use repeated or independent measures thrills me: .681
I enjoy sitting in the park contemplating whether to use participant observation in my next experiment: .705
Standing in front of 300 people in no way makes me lose control of my bowels: .514
I like to help students: .536
Passing on knowledge is the greatest gift you can bestow an individual: .477
Thinking about Bonferroni corrections gives me a tingly feeling in my groin: .566
I quiver with excitement when thinking about designing my next experiment: .766
I often spend my spare time talking to the pigeons ... and even they die of boredom: .587
I tried to build myself a time machine so that I could go back to the 1930s and follow Fisher around on my hands and knees licking the floor on which he'd just trodden: .649
I love teaching: .550
I spend lots of time helping students: .599
I love teaching because students have to pretend to like me or they'll get bad marks: .619
My cat is my only friend: .538

Extraction Method: Principal Component Analysis.

6. Component Matrix

Component Matrix(a) (loadings on components 1 to 5)

I like control conditions: .803, .055, -.148, -.075, .086
I quiver with excitement when thinking about designing my next experiment: .791, -.197, -.191, -.255, .003
I enjoy sitting in the park contemplating whether to use participant observation in my next experiment: .768, -.131, -.190, -.236, .082
I calculate 3 ANOVAs in my head before getting out of bed every morning: .723, -.300, -.111, .109, -.027
I once woke up in the middle of a vegetable patch hugging a turnip that I'd mistakenly dug up thinking it was Roy's largest root: .684, -.341, -.086, .101, -.212
I like it when people tell me I've helped them to understand factor rotation: .675, .259, .115, -.017, -.049
Thinking about Bonferroni corrections gives me a tingly feeling in my groin: .674, .097, -.247, -.065, -.192
Designing experiments is fun: .673, -.019, -.210, -.171, -.058
Thinking about whether to use repeated or independent measures thrills me: .650, .092, -.027, -.497, .042
I'd rather think about appropriate dependent variables than go to the pub: .614, -.146, .155, -.517, .138
I tried to build myself a time machine so that I could go back to the 1930s and follow Fisher around on my hands and knees licking the floor on which he'd just trodden: .614, .157, -.128, .345, -.334
I worship at the shrine of Pearson: .600, .031, -.147, .452, -.045
I memorize probability values for the F-distribution: .584, .255, -.223, .313, -.192
I love teaching because students have to pretend to like me or they'll get bad marks: .580, .383, .333, .125, .093
I soil my pants with excitement at the mere mention of Factor Analysis: .559, -.166, -.106, -.085, -.390
I spend lots of time helping students: .533, .502, .136, -.206, .039
Helping others to understand Sums of Squares is a great feeling: .528, .394, -.154, .161, .051
I could spend all day explaining statistics to people: .502, -.266, -.036, -.104, .287
I love teaching: .501, .444, .159, .096, .260
Passing on knowledge is the greatest gift you can bestow an individual: .456, .382, .048, .186, .294
I like to help students: .188, .620, -.023, .098, .325
If I had a big gun I'd shoot all the students I have to teach: .373, -.543, -.011, .303, .312
Standing in front of 300 people in no way makes me lose control of my bowels: .421, -.523, .170, .108, .150
Teaching others makes me want to swallow a large bottle of bleach because the pain of my burning oesophagus would be light relief in comparison: .290, -.501, -.114, .260, .453
I still live with my mother and have little personal hygiene: .446, -.240, .524, -.054, -.102
My cat is my only friend: .458, -.025, .520, .082, -.224
People fall asleep as soon as I open my mouth to speak: .133, .067, .519, -.066, -.015
I often spend my spare time talking to the pigeons ... and even they die of boredom: .432, -.248, .518, .240, -.115

Extraction Method: Principal Component Analysis.
a. 5 components extracted.

7. Rotated Component Matrix

Rotated Component Matrix(a) (loadings on components 1 to 5)

Thinking about whether to use repeated or independent measures thrills me: .775, .077, .237, -.033, .130
I'd rather think about appropriate dependent variables than go to the pub: .764, -.084, .111, .148, .291
I quiver with excitement when thinking about designing my next experiment: .748, .338, .089, .279, .079
I enjoy sitting in the park contemplating whether to use participant observation in my next experiment: .714, .291, .172, .281, .046
Designing experiments is fun: .593, .371, .158, .126, .021
I like control conditions: .587, .383, .362, .234, .076
I tried to build myself a time machine so that I could go back to the 1930s and follow Fisher around on my hands and knees licking the floor on which he'd just trodden: .130, .744, .239, .034, .143
I memorize probability values for the F-distribution: .145, .666, .355, .035, -.001
I worship at the shrine of Pearson: .068, .619, .306, .321, .071
I once woke up in the middle of a vegetable patch hugging a turnip that I'd mistakenly dug up thinking it was Roy's largest root: .409, .545, -.085, .351, .227
Thinking about Bonferroni corrections gives me a tingly feeling in my groin: .505, .523, .194, .018, .010
I soil my pants with excitement at the mere mention of Factor Analysis: .447, .509, -.133, .025, .186
I calculate 3 ANOVAs in my head before getting out of bed every morning: .432, .470, .053, .447, .165
I like to help students: -.014, -.007, .712, -.120, -.121
I love teaching: .170, .121, .688, .041, .178
Passing on knowledge is the greatest gift you can bestow an individual: .104, .158, .649, .129, .066
I love teaching because students have to pretend to like me or they'll get bad marks: .168, .213, .613, .011, .411
I spend lots of time helping students: .407, .128, .568, -.231, .202
Helping others to understand Sums of Squares is a great feeling: .199, .399, .535, .004, -.038
I like it when people tell me I've helped them to understand factor rotation: .386, .345, .435, -.005, .286
Teaching others makes me want to swallow a large bottle of bleach because the pain of my burning oesophagus would be light relief in comparison: .082, .027, .011, .781, -.054
If I had a big gun I'd shoot all the students I have to teach: .085, .137, -.040, .765, .107
Standing in front of 300 people in no way makes me lose control of my bowels: .220, .099, -.109, .586, .316
I could spend all day explaining statistics to people: .440, .037, .117, .450, .077
My cat is my only friend: .129, .231, .115, .030, .673
I often spend my spare time talking to the pigeons ... and even they die of boredom: .024, .222, .028, .292, .672
I still live with my mother and have little personal hygiene: .250, .072, -.012, .183, .666
People fall asleep as soon as I open my mouth to speak: .028, -.132, .135, -.066, .506

Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 7 iterations.

Data Interpretation
In the following section we will take a look at each of the tables shown in the previous section.

Correlation Matrix: The table gives us the Pearson correlation coefficient between all
pairs of questions. We know that to do a factor analysis we need variables that
correlate fairly well, but not perfectly. Also, any variables that correlate with no others
should be eliminated. Therefore, we can use this correlation matrix to check the pattern
of relationships. First, scan the matrix for correlations greater than .3, and then look for
variables that have only a small number of correlations greater than this value. Then scan
the correlation coefficients themselves and look for any greater than 0.9. If any are found,
you should be aware that a problem could arise because of multicollinearity in the
data. In this case we can see that all the values are within range.
KMO & Bartlett's Test: The KMO test is used to determine whether the data set obtained is
good enough to perform a factor analysis. As per the thumb rule, 0.5 is the bare minimum
value; values between 0.5 and 0.7 are mediocre, values between 0.7 and 0.8 are good,
values between 0.8 and 0.9 are great, and values above 0.9 are superb. For
these data the value is 0.894, which falls into the range of being great, so we should be
confident that the sample size is adequate for factor analysis. Bartlett's measure tests the
null hypothesis that the original correlation matrix is an identity matrix. For factor
analysis to work we need some relationships between variables, and if the R-matrix were
an identity matrix then all correlation coefficients would be zero. Therefore, we want this
test to be significant (i.e. have a significance value less than .05). A significant test tells
us that the R-matrix is not an identity matrix; therefore, there are some relationships
between the variables we hope to include in the analysis. For these data, Bartlett's test is
highly significant (p < .001), and therefore factor analysis is appropriate.

Factor Extraction Table: This particular table lists the eigenvalues associated with each
linear component (factor) before extraction, after extraction, and after rotation. Before
extraction, SPSS has identified 28 linear components within the data set (there are as
many eigenvectors as there are variables, and so there will be as many factors as
variables). The eigenvalue associated with each factor represents the variance
explained by that particular linear component, and SPSS also displays the eigenvalue in
terms of the percentage of variance explained (so, factor 1 explains 32.373% of total
variance). It should be clear that the first few factors explain relatively large amounts of
variance (especially factor 1) whereas subsequent factors explain only small amounts of
variance. SPSS then extracts all factors with eigenvalues greater than 1, which leaves us
with five factors.
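Varimax, the rotation chosen in step 7, maximizes the variance of the squared loadings within each factor so that every variable loads strongly on as few factors as possible. A minimal sketch of the standard SVD-based algorithm (without the Kaiser normalization that SPSS applies, and with a hypothetical loading matrix):

```python
# Orthogonal varimax rotation of a loading matrix L (variables x factors).
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    p, k = L.shape
    R = np.eye(k)          # accumulated rotation matrix (orthogonal)
    d_old = 0.0
    for _ in range(max_iter):
        B = L @ R
        # gradient of the varimax criterion, expressed in L's basis
        G = L.T @ (B ** 3 - (gamma / p) * B @ np.diag((B ** 2).sum(axis=0)))
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt
        d = s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break                      # criterion stopped improving
        d_old = d
    return L @ R

# hypothetical unrotated loadings for 4 variables on 2 factors
L = np.array([[0.80, 0.10],
              [0.70, 0.20],
              [0.20, 0.75],
              [0.10, 0.80]])
rotated = varimax(L)
# rotation is orthogonal, so each variable's communality is unchanged
```

Because the rotation is orthogonal, the communalities (row sums of squared loadings) are identical before and after rotation, which is why the Communalities table is unaffected by the choice of varimax.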

Table of Communalities: The communalities in the column labeled Extraction reflect the
common variance in the data structure. So, for example, we can say that 64.6% of the
variance associated with question 1 is common, or shared, variance. Another way to look
at these communalities is in terms of the proportion of variance explained by the
underlying factors. After extraction some of the factors are discarded, and so some
information is lost. The amount of variance in each variable that can be explained by the
retained factors is represented by the communalities after extraction.

Rotated Component Matrix: This table shows the rotated component matrix (also called
the rotated factor matrix in factor analysis), which is a matrix of the factor loadings for
each variable onto each factor. Basically, it tells you how each variable is related to the
factors obtained.
Conclusion
What we can conclude from the above tables is that the data were good enough to run the factor
analysis and that the variables are sufficiently correlated with each other to conduct the analysis.
The rotated matrix tells us the loadings of each factor on the variables, and these loadings can
be used to find the underlying constructs in the real world. The questions that load highly on
factor 1 seem to all relate to love for statistics. The questions that load highly on factor 2 all
seem to relate to different aspects of creativity; therefore, we might label this factor love for
experimental design. The questions that load highly on factor 3 all seem to relate to
teaching; therefore, we might label this factor love for teaching. The questions that load
highly on factor 4 all contain some component of interpersonal skills; therefore, we might label
this factor good interpersonal skills. The last factor points to emotional stability.

Managerial Implications
As researchers, we can take the above-mentioned data and results and use them to find better and
more effective ways to recruit a lecturer for statistics. If what we concluded above is true, then
we need to realize that the reason why some lecturers are unfit is not a lack of
qualification; there could be other underlying factors, like those discussed above, which are
preventing the board from recruiting lecturers.
