
Chapter 3

Research Methodology

3.0 Introduction

This chapter outlines how the research was carried out. It covers the research design, which underpins the entire study; the methods used to collect data; the sampling strategy, which describes how the target respondents were selected; the research instrument used to survey the targeted population; the data analysis procedures; and, finally, a summary of the chapter.

3.1 Research Design

This is the most crucial planning stage, comprising the overall plan of the study: the approaches adopted, the data collection steps, the analysis methods, and the information required (Zikmund, Babin, Carr, & Griffin, 2009). In addition, it represents the scheme for gathering and examining the data in order to answer the research questions and fulfill the research objectives, providing a reasonable basis for choosing the sources of data, the means of collection, and the analysis techniques (Saunders, Lewis, & Thornhill, 2012).

This research is quantitative: it generalizes results from a sample to the target population by quantifying data (Babbie, 2009). It is employed to examine the variables so as to determine the factors that would influence UCSI University's Readiness Towards a Sustainable Campus. In line with this, a descriptive research design is adopted. Descriptive research is widely used in the behavioral sciences; it emphasizes that issues can be addressed and practices improved through observation, analysis, and description (Koh & Owen, 2000). It is undertaken by the researcher to answer the six basic questions of the research: who, what, when, where, why, and how (Rosendo Ríos, 2013).
3.2 Population and Sampling Design

3.2.1 Population

Population refers to a defined collection of individuals who share similar characteristics, specified through the sampling criteria set by the researcher. In this study, the accessible population, that is, the part of the population to which the researcher has reasonable access, will be addressed. The population selected for this research consists of millennials among UCSI University's students and staff.

3.2.2 Sample Design

3.2.2.1 Sampling Frame and Sampling Location

A sampling frame is a list used to specify the population of interest (Stephanie, 2014). In this research, the sampling frame was UCSI University. Care was taken to ensure that the sampling frame was up to date, complete, and appropriate for the attainment of the objective (Bikokwah, 2016).

3.2.2.2 Sampling Elements

The study will be carried out by distributing a questionnaire through an online channel, and the target respondents will be UCSI students and staff. UCSI students and staff were chosen because they know the UCSI campus better than others. They will therefore help to produce a more precise result in the final examination of the research, since they have a greater understanding of the different acquisition styles and the basic knowledge needed to answer the questions in the survey instrument.
3.2.2.3 Sampling Technique

A non-probability sampling technique, in which the sample is selected based on subjective judgement rather than random selection, is employed in this study. This technique was chosen because it is cost- and time-effective compared with probability sampling. In addition, convenience sampling was selected from among the different types of non-probability sampling because, as its name suggests, the researcher can collect the sample from sources that are convenient, such as friends, university students, and university staff.

3.2.2.4 Sampling Size

Sample size is the number of individual samples in a statistical setting such as a survey (Zamboni, 2018). In selecting the sample size, researchers have to consider whether the research is statistical or non-statistical in nature, and the choice is often tricky and complicated. According to Roscoe's rules of thumb, a sample size of less than 10 is not recommended for statistical analysis, whereas a sample size between 30 and 500 is appropriate for most behavioral research. Four aspects should be considered in calculating the sample size: population size, margin of error (confidence interval), confidence level, and standard deviation. A simple formula is: n = z² × p × (1 − p) / e², where z is the z-score, p is the standard deviation (taken as the estimated proportion), and e is the margin of error. As an illustration, the z-score is 1.645 based on a 90% confidence level; the standard deviation is 0.5, a safe choice because it is the most forgiving value and ensures that the sample will be large enough; and the margin of error is ±5%. The result of the calculation is 378, which also falls within the range suggested by Roscoe's rules of thumb; thus a sample size of 378 was selected to conduct the research.
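
As an illustration only, the formula above can be expressed as a short calculation. The sketch below is a minimal Python version that uses only the standard library; the function name is hypothetical and the inputs shown are the illustrative values stated in the text.

```python
import math

def required_sample_size(z_score: float, proportion: float, margin_of_error: float) -> int:
    """Sample size n = z^2 * p * (1 - p) / e^2, rounded up to a whole respondent."""
    n = (z_score ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    return math.ceil(n)

# Illustrative inputs from the text: 90% confidence (z = 1.645),
# p = 0.5 as the most forgiving proportion, and a +/-5% margin of error.
print(required_sample_size(z_score=1.645, proportion=0.5, margin_of_error=0.05))
```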
3.3 Data Collection Methods

This research gathers two types of data for analysis: primary data and secondary data. According to Hox and Boeije (2005), secondary data are data already collected by others for other purposes and reused to address the present research questions, while primary data are newly collected data gathered for the specific objectives of the research.

3.3.1 Primary Data

Primary data were used because they provide the latest and most relevant information for solving the current research problems. Questionnaires are the only means used to gather primary data in this research. The questionnaires consist of closed-ended questions aligned with the research's purpose. A five-point Likert scale was used in the closed-ended questions to collect data on the six independent variables and the one dependent variable for testing purposes. The questionnaire contained two parts: the first part determines the demographics of the target respondents, whereas the second part captures their opinions on the variables to be tested against the assumptions put forward. Primary data are also important because they provide up-to-date, reliable, and relevant opinions from present respondents, even though collecting primary data is costly and time-consuming for researchers compared with secondary data. All collected statistics will be analyzed in Chapter 4 using SPSS, and the results will be used to draw conclusions in Chapter 5.
3.3.2 Secondary Data

Secondary data are more convenient, cheaper, and easier to gather than primary data, which is why the researchers used them to collect up-to-date and relevant information on the research topic. The data collected in this study come from several sources, including related books from the library, online resources, and journal articles from online databases. Secondary data can be used as a basic source of insight for framing the research topic and solving the current research problems. The researchers conducted a preliminary study by referring to journals close to the research topic and then formed assumptions based on the studies discussed earlier. These assumptions were later tested after the questionnaires were collected.
3.4 Research Instrument

A survey questionnaire was established on the basis of the literature review, with the purpose of examining the relationship between the independent variables, namely Setting and Infrastructure, Energy and Climate Change, Waste, Water, Transportation, and Education, and UCSI University's Readiness Towards a Sustainable Campus. This study uses a self-administered questionnaire as the research survey instrument. The questionnaire was distributed online, so the researchers had to ensure that all respondents had access to the Internet, an important precondition for the survey. In addition, since the self-administered survey was filled out by the respondents themselves, interviewer bias was eliminated because no interviewer was present. Interviewer bias occurs when the interviewer's presence influences the respondent to give an untrue answer (Zikmund et al., 2010). Furthermore, maintaining the anonymity of respondents allows researchers to obtain true answers.
3.4.1 Questionnaire Design

Closed-ended questions were included in the survey questionnaire. Such questions provide respondents with a limited set of options and require them to select the most appropriate answer based on their own opinions (Zikmund et al., 2010). This method requires fewer interviewing skills, and respondents are more likely to answer. Moreover, the standardization of the response alternatives allows researchers to analyze the data easily because it limits unexpected responses. Simple English is used to ensure that respondents fully understand the questions they have to answer. In this research, the questionnaire was divided into three sections: Section A, Section B, and Section C. Section A asks respondents about their social demographic background, such as gender and age (for reference only). Section B asks respondents about UCSI University's readiness for a sustainable campus. Section C asks respondents about the factors in UCSI University's preparation for a sustainable campus; these factors are the six independent variables of Setting and Infrastructure, Energy and Climate Change, Waste, Water, Transportation, and Education.
3.4.2 Pilot Test

A pilot test is also called a pre-test. According to Malhotra (2007), a pre-test provides an opportunity to review the data collection instrument, making sure that the right questions are asked, the necessary information will be collected, and the data collection method is appropriate. In this study, 30 samples will be pre-tested before the complete questionnaires are distributed. The pilot test is carried out to avoid errors and mistakes in the actual survey questionnaire. Carrying out a pre-test has many advantages, such as detecting errors in the survey tools, validating research protocols, examining the survey tools, and verifying the suitability of the proposed methods (Baker, 1994). Furthermore, the pre-test lets interviewers obtain feedback from respondents to make sure that all questions in the questionnaire are simple and easy to understand.

3.5 Construct Measurement

This study measures six independent variables, namely Setting and Infrastructure, Energy and Climate Change, Waste, Water, Transportation, and Education, as the factors influencing UCSI University's Readiness Towards a Sustainable Campus. A five-point Likert scale was used to measure these variables, and respondents answer the questions based on their own opinions.

3.5.1 Scale of Measurement

An interval scale contains the features of both nominal and ordinal scales; in addition, it allows researchers to interpret the magnitude of the differences between data points. The purpose of the scale is to capture people's attitudes by asking them to respond to a series of statements on issues and indicate their degree of agreement (Likert, 1932). A nominal scale assigns values to specific variables; however, the nominal scale is used only for identification purposes and does not convey any quantitative value to researchers (Zikmund et al., 2010). According to Stevens (1946), an ordinal scale arises when there is an ordering among categories; researchers cannot know the exact values and differences between data points, only their rough relative positions.
For Section A, the questionnaire consists of questions that use nominal and ordinal scales to measure the respondents' social demographic background. The nominal scale captures their gender, education level, and profession.

For Section B, the questionnaire involves questions about UCSI University's Readiness Towards a Sustainable Campus.

For Section C, the questionnaire involves questions on the influencing factors: Setting and Infrastructure, Energy and Climate Change, Waste, Water, Transportation, and Education.

Strongly Disagree (1)   Disagree (2)   Neutral (3)   Agree (4)   Strongly Agree (5)

3.6 Data Screening

3.6.1 Missing Data

Problems arise if data are missing during the research process. One possible problem is that the remaining data points are not enough to conduct the analyses, so the results will not be precise. In addition, missing data may introduce bias: respondents may leave a question blank because they are unwilling to answer it or because they simply missed it. For instance, if a question asks about marital status and male respondents are less likely than female respondents to disclose their status, the resulting data will be biased towards females; if marital status is then included in causal models, the research is heavily biased toward females.

The best way to handle missing data is to design the research well and gather the data carefully from the respondents. If this is not done, there are several ways for the researcher to cope with missing data: first, the data collected from participants should be limited to what is necessary; second, the researcher should document the study in as much detail as possible; third, the researcher should brief all related personnel on every area of the research before participants are enrolled; fourth, a priori targets for the acceptable level of missing data should be set; fifth, the researcher should identify clearly which participants are most likely, or at greatest risk, of having missing data; and lastly, the reasons for declining to answer questions should be recorded (Kang, 2013).
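
As a brief illustration of how missing responses might be screened before analysis, the sketch below assumes the questionnaire responses have been exported to a CSV file; the file name and the use of the pandas library are assumptions, not part of the original procedure.

```python
import pandas as pd

# Load the exported questionnaire responses (hypothetical file name).
responses = pd.read_csv("responses.csv")

# Count and percentage of missing answers for each question.
missing_count = responses.isna().sum()
missing_percent = (responses.isna().mean() * 100).round(1)

print(pd.DataFrame({"missing": missing_count, "percent": missing_percent}))
```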

3.6.2 Outliers

3.6.2.1 Types of Outlier

3.6.2.1.1 Multivariate Outliers

A multivariate outlier is a combination of unusual scores on at least two variables. For instance, if all but one respondent report that exercise has a positive effect on weight loss, but that one respondent claims his weight increases when he exercises, then the data obtained from him can be considered a multivariate outlier. To discover influential multivariate outliers, the researcher computes the Mahalanobis d-squared statistic.
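
Although the chapter relies on SPSS, the Mahalanobis d-squared statistic can also be computed directly, as in the sketch below; the data frame, column names, and chi-square cut-off are hypothetical and only illustrate the idea.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def mahalanobis_d_squared(data: pd.DataFrame) -> pd.Series:
    """Squared Mahalanobis distance of each row from the multivariate centroid."""
    values = data.to_numpy(dtype=float)
    diff = values - values.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(values, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    return pd.Series(d2, index=data.index)

# Hypothetical responses on two Likert-scale items.
sample = pd.DataFrame({"waste": [4, 5, 4, 3, 1], "water": [4, 4, 5, 3, 5]})
d2 = mahalanobis_d_squared(sample)

# A common cut-off is the chi-square critical value with df = number of variables.
cutoff = chi2.ppf(0.999, df=sample.shape[1])
print(d2.round(2))
print("cut-off:", round(cutoff, 2))
```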

3.6.3 Normality

Normality test is a statistical process that used to judge whether a sample or any group of
data is adapted to a standard normal distribution and can be performed mathematically or
graphically. It can be assessed in different ways: shape, skewness and kurtosis.
First, researcher needs to create a histogram and then plot the normal curve in SPSS in order
to know the shape of the distribution. There are normality problems if the normal curve doesn’t
adapt to the histogram. Besides, the researcher can also determine the normality by observing the
boxplot.

Moreover, skewness refers to responses that do not follow a normal distribution but are heavily weighted toward one end of the scale. To address skewness, the researcher may need to transform the data (if continuous) or remove influential outliers.

Third, kurtosis estimates whether the data are heavy-tailed or light-tailed relative to a normal distribution. Data sets with a high kurtosis value are heavy-tailed (prone to outliers), whereas data sets with a low kurtosis value are light-tailed (lacking outliers).

In addition, the researcher may face an issue if the data distribution is bimodal. A bimodal distribution has multiple peaks rather than a single peak at the mean, and this is another problem the researcher may encounter with the data distribution.

To deal with extremely non-normal data, the researcher can transform the data before including them in the model, provided the variables are not Likert-scale variables (for example, age).
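
As an illustration of how skewness and kurtosis can be checked numerically outside SPSS, the sketch below uses the scipy library on hypothetical Likert-scale responses; the Shapiro-Wilk test is included only as one common numeric complement to the graphical checks described above.

```python
import numpy as np
from scipy.stats import skew, kurtosis, shapiro

# Hypothetical Likert-scale responses for one variable.
scores = np.array([5, 4, 4, 5, 3, 4, 5, 5, 2, 4, 5, 4])

print("skewness:", skew(scores))             # > 0: tail to the right, < 0: tail to the left
print("excess kurtosis:", kurtosis(scores))  # > 0: heavy-tailed, < 0: light-tailed
print("Shapiro-Wilk p-value:", shapiro(scores).pvalue)  # p < 0.05 suggests non-normality
```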

3.6.4 Linearity

Linearity represents the consistent slope of change that shows the relationship between an
independent variable (IV) and a dependent variable (DV). The easiest way to test linearity is
through the utilization of the ANOVA test in SPSS for the deviation from linearity test. As
illustration, there is no linear relationship between IV and DV if the Sig value for the test is less
than 0.05 which also shows that it is problematic. However, there are two ways to settle the
linearity issues which are transforming the data or removing the outliers.
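
The SPSS deviation-from-linearity test itself is not reproduced here; as a rough equivalent, the sketch below compares a linear model with a model containing a quadratic term using an F-test in the statsmodels library. The data are simulated and purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one independent variable (iv) and the dependent variable (dv).
rng = np.random.default_rng(1)
iv = rng.integers(1, 6, size=100).astype(float)
dv = 0.6 * iv + rng.normal(scale=0.5, size=100)
data = pd.DataFrame({"iv": iv, "dv": dv})

linear = smf.ols("dv ~ iv", data=data).fit()
curved = smf.ols("dv ~ iv + I(iv ** 2)", data=data).fit()

# A significant improvement from the quadratic term (p < 0.05) signals
# a deviation from linearity, roughly analogous to the SPSS test described above.
print(anova_lm(linear, curved))
```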

3.6.5 Homoscedasticity

Homoscedasticity refers to the error displays consistent variance across different levels of
the variable. A simple scatter plot can classify that whether the relationship is homoscedastic or
not, for instance, homoscedasticity is presented if there is a consistent pattern from the plot. In
line with this, if the relationship isn’t homoscedasticity, heteroskedastic relationship is appeared.
In order to cope with this situation, there are two methods: dividing the data via subgroups or
transforming the data.
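
Beyond the visual scatter-plot check described above, a numeric alternative is the Breusch-Pagan test; the sketch below shows how it could be run with the statsmodels library on simulated, purely illustrative data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical data set with one predictor and one outcome.
rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(scale=1.0, size=200)
data = pd.DataFrame({"x": x, "y": y})

model = smf.ols("y ~ x", data=data).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(model.resid, model.model.exog)

# A p-value below 0.05 suggests heteroscedastic (non-constant) error variance.
print("Breusch-Pagan p-value:", round(lm_pvalue, 4))
```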
3.6.6 Multi-collinearity

Multicollinearity describes the situation where the variance that the IVs explain in the DV overlaps, so that the IVs do not each explain unique variance in the DV. To check for it, a multivariate regression is conducted and the Variance Inflation Factor (VIF) is computed for each IV. The rules of thumb for the VIF are:

- If the VIF is less than 3, there is no problem.
- If the VIF is more than 3, there is a potential problem.
- If the VIF is more than 5, there is very likely a problem.
- If the VIF is more than 10, there is definitely a problem.

Moreover, a tolerance value in SPSS of less than 0.10 is a strong sign of multicollinearity. Removing one of the problematic variables can be used to deal with multicollinearity issues.
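
As an illustration of the VIF computation described above, the sketch below uses the statsmodels library; the variable names and simulated scores are hypothetical stand-ins for the study's constructs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical scores for three of the independent variables.
rng = np.random.default_rng(3)
ivs = pd.DataFrame({
    "setting": rng.integers(1, 6, size=150),
    "energy": rng.integers(1, 6, size=150),
    "waste": rng.integers(1, 6, size=150),
}).astype(float)

X = sm.add_constant(ivs)  # include an intercept, as in the regression model
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=ivs.columns,
)
print(vif.round(2))  # compare against the rules of thumb listed above
```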
3.7 Data Analysis

This research will use the Statistical Package for the Social Sciences (SPSS) to process, summarize, and analyze the data gathered from the questionnaires. According to Zikmund (2003), the data analysis phase includes interrelated procedures used to aggregate and convert raw data into meaningful information. At the same time, data analysis is performed to generate information that helps solve the research problems and test the assumptions (Malhotra, 2007). Therefore, descriptive analysis and inferential analysis will be used to analyze the raw data gathered from the questionnaires, and the results will be assessed and explained to solve the research problems.

3.7.1 Descriptive Analysis

Descriptive analysis can be defined as the analysis and interpretation of raw data into a way
that the researchers can easily understand. (Zikmund, Babin, Carr, and Griffin, 2010).
Descriptive analysis also provides statistical information about the subjects that are being
studied. It includes frequency distributions, central tendency measurements (averages, patterns,
and medians), dispersion measures (range, standard deviation, and coefficient of variation), and
shape measurements (skew and kurtosis).

3.7.1.1 Frequency Distribution

According to Zikmund, Babin, Carr, and Griffin (2010), frequency distribution is a set of data
organized by summarizing the number of instances a particular value of a variable occurs.
Categorical variable used in the measurement of frequency distribution is nominal scale or
ordinal scale (Zikmund, 2003). In this study, frequency analysis was used in both Section A and
Section B of the survey questionnaires. Frequencies are generally obtained from nominal
variables such as gender, marital status, ethnicity, education level and profession. In addition, it
is derived from ordinal variables such as age, monthly income, frequency of use of UCSI
University’ Readiness Towards Sustainable Campus . Therefore, when frequency is divided for a
variable, a frequency count, percentage, and cumulative percentage table are generated for all
values associated with that variable (Malhotra and Peterson, 2006).
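
For illustration, the same frequency table (count, percentage, and cumulative percentage) can be produced outside SPSS with the pandas library; the demographic values below are hypothetical.

```python
import pandas as pd

# Hypothetical demographic responses from Section A.
gender = pd.Series(["Male", "Female", "Female", "Male", "Female", "Female"])

frequency = gender.value_counts()
percent = (gender.value_counts(normalize=True) * 100).round(1)

table = pd.DataFrame({
    "frequency": frequency,
    "percent": percent,
    "cumulative percent": percent.cumsum().round(1),
})
print(table)
```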

3.7.2 Inferential Analysis

Inferential analysis uses statistics to draw conclusions about the characteristics of the entire population based on the information provided by the sample data (Burns and Bush, 2006). In this study, the following analyses were performed using SPSS:

(i) Pearson's Correlation Coefficient Analysis
(ii) Multiple Regression Analysis
3.7.2.1 Pearson’s Correlation Coefficient

Pearson's Correlation Coefficient is used to test the relationship between the factors influencing UCSI University's Readiness Towards a Sustainable Campus. It identifies the relationship between variables, and the two-tailed significance level is used to test the null hypotheses. Furthermore, the correlation coefficient (r) ranges from -1.0 to +1.0. The sign (+ or -) of the correlation coefficient indicates the direction of the relationship, whereas its magnitude indicates the strength of the relationship, as stated in the study of Coakes and Steed (2010). There is a perfect positive linear relationship between the tested variables when the value is +1.0 and a perfect negative linear relationship when the value is -1.0, while a value of 0 means there is no linear relationship between the tested variables. The study of Hair, Money, Samouel, and Page (2007) shows that the higher the correlation coefficient, the stronger the correlation between the variables.
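
As an illustration of this analysis, the sketch below computes Pearson's r and its two-tailed p-value with the scipy library on simulated data; the variable names are hypothetical stand-ins for one independent variable and the readiness score.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores: one independent variable and the readiness score.
rng = np.random.default_rng(4)
education = rng.integers(1, 6, size=120).astype(float)
readiness = 0.5 * education + rng.normal(scale=1.0, size=120)

r, p_value = pearsonr(education, readiness)
print(f"r = {r:.3f}, two-tailed p = {p_value:.4f}")  # p < 0.05 rejects the null hypothesis
```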

3.7.2.2 Multiple Regression Analysis

Multiple Regression Analysis was used to analyze the variance in the interval-scaled dependent variable explained by the independent variables Setting and Infrastructure, Energy and Climate Change, Waste, Water, Transportation, and Education. Multiple regression analysis is used to examine and evaluate complicated causal relationships among several variables at the same time (Williams, Vandenberg & Edwards, 2009). Moreover, it is suitable for this study because all the independent variables and the dependent variable are measured using the same interval scale. In addition, multiple regression analysis makes it clearer which construct has a stronger impact on the dependent variable.
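
For illustration, a multiple regression with the six independent variables could be fitted as in the sketch below, here with the statsmodels library on simulated data; the column names and the simulated readiness score are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data set: six independent variables and the readiness score.
rng = np.random.default_rng(5)
n = 378
data = pd.DataFrame(
    rng.integers(1, 6, size=(n, 6)).astype(float),
    columns=["setting", "energy", "waste", "water", "transport", "education"],
)
data["readiness"] = data.mean(axis=1) + rng.normal(scale=0.5, size=n)

model = smf.ols(
    "readiness ~ setting + energy + waste + water + transport + education",
    data=data,
).fit()
print(model.summary())  # coefficients indicate which construct has the stronger impact
```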
3.8 Reliability and Validity

3.8.1 Reliability

Reliability refers to the stability and consistency of a measure of a concept and is a particular concern in quantitative research (Bryman & Bell, 2011). Reliability testing is designed to test the internal consistency of the variables, that is, whether the selected indicators measure their latent variables reliably. A construct with an α value above 0.7 is considered reliable (Pallant, 2001); however, alpha values of around 0.5 or greater are considered acceptable for this study.
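
Cronbach's alpha itself follows a simple formula; as an illustration, the sketch below computes it directly with pandas on hypothetical Likert-scale items belonging to one construct.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to three items measuring one construct.
construct = pd.DataFrame({
    "item1": [4, 5, 3, 4, 5, 4],
    "item2": [4, 4, 3, 5, 5, 4],
    "item3": [5, 5, 2, 4, 4, 4],
})
print(round(cronbach_alpha(construct), 3))  # values above 0.7 are generally desirable
```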

3.8.2 Validity

Fisher (2007) defines validity as the extent to which an instrument actually measures the content it is intended to measure in a practical way. Since most of the research questions were taken from previous literature on UCSI University's Readiness Towards a Sustainable Campus, their validity is considered to have been confirmed. Matching the research questions with the content and subject areas helps researchers establish content validity in the study (The College Board, 2014). Content validity is also established through pre-testing of the questionnaire; 30 students took part in the pilot test in this study. The questionnaire was also reviewed by consultants and peer students, which helped to remove items unrelated to the analysis before the study. Construct validity refers to how well the research results support the theory behind the study, and whether the theory supported by the results provides a sound interpretation of the results (Graziano & Raulin, 2010; Bryman & Bell, 2011).
3.9 Chapter Summary

This chapter summarizes the methodology used to carry out the research, covering the research design for data collection and analysis. It also describes the data collection methods and the research instrument. Further, the construct measurement section explains the questions that will appear in the survey instrument for the target respondents, which are assessed through reliability and validity testing.
