
CHAPTER 2: METHODOLOGY

PhD students are expected to make an original contribution to knowledge, and must justify their research design to convince their audiences, including examiners, that this contribution is reliable. Such justification involves not only describing the techniques used to obtain and analyze data, but also demonstrating an understanding of knowledge and how it is generated and developed.

Hathaway (1995) criticizes researchers for generally making the choice of method with relative ease, choosing the method that will provide the needed information without giving much thought to the assumptions underlying research methods. In the Construction Management discipline this criticism is pertinent, since many doctoral dissertations misuse the methodology chapter merely to describe how data were obtained. Smyth and Morris (2007) contend that most authors fail to make explicit their theoretical, epistemological or methodological positions. Rooke et al. (1997) object to incompetent researchers getting away with a mechanistic application of formal procedure.

The purpose of this chapter is not to provide a superficial discussion of methodologies and paradigms in order to reach an arbitrary or monopolistic assertion, such as 'my method is better than yours' (Martin, 1990). Rather, it is a methodological review and discussion, intended to demonstrate that the author not only knows what can be done as one choice in carrying out this research, but also understands why. A better understanding of the merits and weaknesses of the available methods and approaches helps in making theoretical statements with greater confidence and in identifying limitations more honestly.

In order not to be trapped in the inexhaustible and obscure philosophical discussions in the general context of research, this chapter focuses on discussions within the discipline, arising from heated methodological debates, and refers to the more general context if and when needed. It will: 1) review the most popular research approach adopted by construction management scholars; 2) review the methodological debates in the construction management discipline over the recent decade; 3) contribute to this discussion, with an emphasis on the relationship between data and knowledge generation; 4) report the difficulties encountered in doing this research; and 5) justify the basic research design of this research.

2.1 RESEARCH IN THE CONSTRUCTION MANAGEMENT DISCIPLINE: SUMMARY AND CRITIQUE
2.1.1 Reliance on original data
Many scholars have reviewed papers in the research community of construction management and economics, and investigated its paradigms and trends (e.g. Betts and Lansley, 1993; Dainty, 2007; Hua, 2008; Pietroforte and Stefani, 2004). Generally, they reviewed papers published in leading journals in the discipline, such as Construction Management and Economics and the American Society of Civil Engineers (ASCE) Journal of Construction Engineering and Management, which were ranked first and second in an international survey of construction management journals (Chau, 1997), or in other influential journals such as the International Journal of Project Management (e.g. Smyth and Morris, 2007).

Their findings cover both what we do and how we do it in research. Researchers investigate what we have done in order to identify and report a temporal profile and trend. For example, after reviewing papers published in the ASCE Journal of Construction Engineering and Management from 1983 to 2000, Pietroforte and Stefani (2004) conclude that traditional construction engineering topics have been complemented by an increasing interest in construction management topics, such as the management of firms, project delivery systems, project performance evaluation and project quality planning.

The investigation of what we have done is mainly a matter of identification and reporting. The investigation of how we do research, however, has mobilized strict self-criticism and heated debates on how best research should be done within the discipline.

Philips and Pugh (1994) state that a qualifying PhD work needs to say something useful and novel that the research community wishes to hear. It seems that generating and using original data makes it easier to develop original conclusions, although doing so is not necessarily a prerequisite (Hughes, 1994). The reliance on original data to announce findings and draw conclusions in the construction management research community was confirmed by Betts and Lansley (1993). Reviewing papers published in the first ten years of the journal Construction Management and Economics, i.e. from 1983 to 1992, they found that 1) seventy per cent of the papers were based on original or nearly original data; and 2) the basis for the papers drew almost equally from reviews, case studies and empirical work.

Hughes (1997) also indicated that some researchers seek to discover solutions to research questions by asking practitioners what they do, with a view to codifying and representing best practice. A popular topic is to establish why things go wrong on construction projects by asking construction managers what they think the reasons are (Runeson, 1997). Yung and Yip (2009) found that researchers often quantify variables through informants' responses, and then construct relationships among the variables to draw conclusions. Their literature review (Yung and Yip, 2009) shows the dominance of opinion surveys as the means of data collection: out of 35 research exercises on construction quality, 28 obtained information through questionnaire surveys and 2 through case studies, while the other 5 did not use data.

Indeed, original data play an important role in construction management research. It is quite common to see a construction management researcher assert findings based on interviews, questionnaires, Delphi surveys and case studies, using an inductive approach to codify variables and construct cause-effect relationships among them. However, this approach has drawn many criticisms, focusing on: 1) the lack of theory; 2) the validity of the data; 3) the reproducibility of the results; and 4) the limits of generalization.

2.1.2 Lack of theory


It has been argued that the fields of construction management and economics lack a coherent theory and theoretical framework (e.g. Ofori, 1993; 1994). For example, Kanfandaris (1980) criticized that preliminary hypotheses (in the sub-topic of building growth/firm/process) cannot possibly be elevated to an overall theory. More recently, however, it has been agreed that construction management and economics is an applied discipline covering a wide range of activities, so it is over-ambitious to expect a single theory to underpin the whole discipline (e.g. Chau et al., 1998; Seymour et al., 1998). Edwards (1997) pointed out that we do not see similar calls for a theory of textile industry management or automotive industry management.

However, many researchers contend that the real problem is not the lack of a single leading theory for the whole discipline, which, as indicated above, would be over-ambitious to expect, but the absence of any theory at all in much of the research, despite its assertions.
Betts and Lansley (1993), based on their review, claim that construction management research is rather inward-looking and self-referential, lacking both guidance from and contribution to theory. Runeson (1997) criticizes that researchers appear to be unaware of existing theories, and that there is no science and no theory at all, so that any selection of variables that shows a statistical correlation, or can be fitted into a regression model, seems sufficient for a paper. Harriss (1998) criticizes that construction management researchers have for too long ignored the centrality of theory to human activity, and argues that research without theory is not research.

Another kind of criticism is that the so-called construction management theories are isolated, or disconnected from those in a wider context. Hughes (1997) argues that many claimed theories in construction management research are so specialized that they lack a close relationship with the basic theories of mainstream disciplines. He argues that construction management should not be deemed an academic discipline in its own right, with its own research techniques and theories (Hughes, 2001), but rather a source of problems and data, whereas solutions and approaches need to be based within established academic disciplines (Hughes, 1994). This position, which treats the field of construction management as a topic area, or an applied discipline, is also advocated by Seymour et al. (1997; 1998) and Runeson (1997), although they hold totally different opinions on how research in this field should be done.

However, it seems construction management researchers do not converge on the meaning of theory. They did not make clear what constitutes a theoretical contribution when they criticized construction management research for lacking theory. Thus, Seymour et al. (1997; 1998) called for a consideration of what might constitute theory. This is discussed later.

2.1.3 Subjective bias


As indicated above, construction management researchers rely heavily on original data for their conclusions. These original data usually come from interviews, questionnaire surveys, project-based case studies, Delphi surveys, etc., which are conducted by, and only by, the researchers themselves. This raises a concern about subjective bias.

The basic logic of the above approach is that honest answers to well-designed questions from an appropriate sample can reflect the situation of the whole population under research; thus, a conclusion from questionnaires, interviews or case studies can be generalized to the population. Many textbooks teach how to obtain objective and representative responses through careful design, such as different sampling measures and subtle questionnaire design (e.g. Bradburn, 2004; Fellows and Liu, 2002; Gillham, 2000), complemented by complex statistical analysis (Walonick, 2003). But it cannot be denied that data of this kind are, by nature, soft.

It is difficult to give accurate definitions of hard data and soft data. As subjectivity cannot be totally excluded from research processes, no (or hardly any) data is 100 per cent hard. Even if the data source is absolutely hard (objective), the decision to use that specific set of data for the research is still made by (subjective) researchers. However, soft data are more vulnerable to subjective influences. When comparing two data sources, such as national statistical data and a Delphi survey, it is not difficult to identify which is the softer. Softness refers to the vulnerability of the data collection process to the influence of human factors, but it does not mean that every set of hard data is more reliable than every soft data set. For example, one case of questionable hard data is that the sum of all regional GDPs in Mainland China, as issued by each local government, is always larger than the national total issued by the national statistics bureau.

Seymour et al. (1997) criticize that the data, which are often treated with explicit mathematical analyses, have already been subjected to a sophisticated and unexamined process of preparation before the reported research commences. Chau et al. (1998) reply that they appear to be referring to data arising from informants and respondents, presumably by questionnaire, and suggest using hard data instead.

In applying the above data collection methods, research can be seen as a communication process between researchers and respondents, who share information over time to converge upon a mutual understanding (Ruesch and Bateson, 1968). It tends to be a two-way, non-linear interaction rather than a one-way, one-off flow of information (Loosemore, 1999; Loosemore and Tan, 2000). The benefits of this non-linear communication include greater convergence, flexibility, the transmission of emotion, personal relationships, silent language, etc. (Loosemore, 1999); however, subjectivity seems unavoidable and the validity of the data becomes questionable.

The problem can arise from both respondents and researchers. First, a potential for bias can arise from people's association with a particular group (Loosemore and Tan, 2000). Lawson (1979) indicated that different methods and explanations can be developed for the same phenomenon by different groups, say architects and engineers. Occupations can also influence the way problems are investigated and solved (Cann, 1990; Pierre et al., 1996). Further, the distinct cultures that develop within different occupations produce different mind-sets and ways of seeing and interpreting the world (e.g. Manis, 1996). The construction industry involves distinct occupational groups, such as clients, designers, architects/engineers, contractors, sub-contractors and material suppliers. This occupational difference constitutes a strong source of cultural differentiation (Bowley, 1966; Bennett and Wittaker, 1994; Munns, 1996).

Considering the traditional adversarial attitudes among stakeholders in the construction industry, who answers the questions must influence the answers. This is especially pertinent in topics where different benefits accrue to different groups. For example, Latham's (1993) intermediate report found that opinions from different groups are substantially different, even opposite; and Rahman (2003) found that clients and contractors hold contrary opinions on the desired allocation of risks.

Furthermore, there is no guarantee that informants will provide honest information. Behavioural research has pointed out that what is said may differ from what is done, from both organizational and personal perspectives.

Thus, in a topic involving conflicting interests among different groups, the validity of this kind of data is, at least to some extent, questionable. On the other hand, in a topic about how to do things better, where common goals are shared by different parties, responses from different groups may reveal something constructive. Whether the data collected are reliable is thus highly influenced by the research topic. In this sense, Hughes (1997) pointed out that asking practitioners what they do cannot produce the counter-intuitive, unexpected ideas upon which our understanding of construction management may depend.

The other source of potential detriment to the validity of data is the researchers themselves. This may involve the choice of respondents, the selection and filtration (intended or not) of information, and the quality of the questions soliciting the data.

As discussed above, the answers to some questions depend substantially on who replies, especially in topics where conflicting interests exist among different groups, so the choice of informants inevitably influences the results of the research. It is common to see CM researchers sending questionnaires to specific numbers of clients, consultants, contractors, sub-contractors and other stakeholders. It is much less common, however, to see CM research explain how those numbers and proportions were decided. Further, in a topic involving conflicting, even inverse, interests, it is not very meaningful to show, through a sophisticated data-collection procedure and subsequent complex statistical analysis, that different groups significantly and substantially disagree with each other.

Also, in much CM research, questionnaires are sent to informants who are both known to the researchers (usually met at seminars, conferences or other occasions, so that a common interest or even opinion can be expected) and unknown (usually randomly selected from the name-list of a construction industry organization). It is not unreasonable to expect higher response rates from people who are known to the researcher. But little CM research reports this difference and the consequent potential bias, or analyzes the information from these two sources separately.

Furthermore, the information accepted by a researcher may be the result of filtering the information she receives. People tend to exercise selectivity by filtering out information which does not conform to their existing mind-sets and expectations (Ruesch and Bateson, 1968; Lord et al., 1979; Sutherland, 1993). Thus both informants and researchers may ignore non-conforming signals, re-organise them, or misinterpret them in a way which confirms and strengthens existing beliefs (Loosemore and Tan, 2000).

Another problem with soft data is that its validity depends on the subjective judgement of different researchers. Two recent cases are Phua and Rowlinson (2004) and Zhang and Liu (2006), who relied upon questionnaire surveys to analyze national cultural differences in the construction industry. Rooke and Kagioglou (2007) criticize this approach, alleging that the questionnaires were designed without direct knowledge of the activity under study, so that the questionnaires in these two research exercises are irrelevant, misleading or meaningless. Rooke and Kagioglou (2007) suggest a Unique Adequacy (UA) requirement as a means of evaluating research. This comprises 1) a weak form, which demands that the researcher be competent in the research setting, and 2) a strong form, which demands that research reports use only concepts originating within the research setting. Their suggestions appear to be presented in the context of a qualitative rather than a quantitative approach.

2.1.4 Replication difficulty


Generally, research results need to be replicable. If a chemist announces that she obtains a new compound C by adding A to B, this experiment should be repeatable by others. In behavioural research, even if bias cannot be eliminated, it can be controlled if the data can be obtained repeatedly (Rosnow and Rosenthal, 1997). Construction management researchers seem to agree that research results should be verified by replicable evidence. For example, Hughes (1997) agrees that science proceeds on a basis of replicability; Chau et al. (1998) suggest that useful knowledge should be replicable, testable and refutable; and Fenn (1997) suggests that the methodology (in fact, the methods) should be described in research reports so that others may replicate the work.

However, much construction management research cannot pass this test of replicability, and this failure is closely related to the difficulty of obtaining the data again. Langford (2009) confesses that many of the studies undertaken in the construction management discipline are not subject to replication. Loosemore and Tan (2000) note that replication is not a discernible habit of the construction management research community, and attribute this to the methods used not being easy to replicate. Harriss (1998) criticizes that much construction management research proclaims knowledge through survey methods, yet there is no guarantee that the next observation will not differ from the former.

It is not uncommon to see a questionnaire sent out in one study location, say Australia, and ten key factors found for the topic under investigation; while another questionnaire, with similar questions investigating the same issue, is then sent out in another place, say Hong Kong, in another study, and ten key factors in Hong Kong are found. The purpose is to compare, rather than to replicate, the data, and the validity cannot be tested.

2.1.5 Generalization limit


Topics in construction management research can be constructed through inexhaustible combinations of sub-themes within some common domains. Taking procurement as an example, it is not difficult to choose a topic which has not been researched extensively through a combination of selection methods, contracting methods, risk allocation, payment methods, study locations, party perspectives, etc. The more themes, and the more subtle the classification of each theme, the more potential topics can be conceived. Obtaining original findings is thus not too difficult, provided that enough original data can be retrieved from respondents for a specific topic in a particular scenario. However, a generalization problem may arise.

The over-generalization problem is a concern not only in construction management research, but also in other social science disciplines. For example, Sayer (1992) argues that, in the geography discipline, positivism neglects contextual effects and tends to infer too much from spatially-identified generalizations and causal laws. Smyth and Morris (2007) believe management, especially project management, faces a similar situation. As Crawford (2004) argues, the trouble is determining at what point such knowledge becomes so generalized that it is of limited value, and at what point it is so specific that it is no longer generalisable.


2.2 THE PARADIGM DEBATE


2.2.1 Disciplinary paradigm
Although soft original data has the weaknesses mentioned above, this does not mean that it should be (or, more exactly, can be) avoided in construction management research. In many cases, soft data is necessary due to the lack of hard data. Hard data sometimes cannot provide everything needed in research, and in many cases is somewhat out of date, or even different from reality. In some comparative research, a common measurement may be absent across different cultures or jurisdictions, and the same jargon may have different meanings in different environments. The researcher then needs to construct her own measurement frame and collect data accordingly.

Hempel (1966) indicates that empirical facts or findings can be qualified as logically relevant or irrelevant only with reference to a given hypothesis, since it is impossible to collect all the relevant data without knowledge of the hypotheses or research questions. Thus, it is not unreasonable for researchers to collect original data for specific research purposes. The problem is that the data collected in this research is, at least to some extent, soft, as is perhaps the case in most social science research when compared with the data used in the natural sciences. There seem to be three ways to address this problem: 1) using hard data instead, or for triangulation; 2) applying special approaches and tools to obtain data that is as objective and representative as possible, excluding incidental effects or aberrations through complex statistical analysis; and 3) confessing the limitations, or even transferring to another paradigm, when using such data.

The first approach cannot be extended to all construction management research, as Hempel's discussion above suggests. The second approach is becoming routine, although almost fifty years ago a leading philosopher of science, Abraham Kaplan (1964), criticized behavioural science for often having an unhealthy fixation on methodology, such that substance gives way to form. The third approach can also become routine in acknowledging limitations, or can generate heated debates on how such data should be used.

What data to use, and how to use them, can depend heavily upon disciplinary traditions. Kuhn (1962) noted that different academic disciplines are characterized, to different extents, by the presence of paradigms that prescribe the appropriate problems of study and the validity of the methodologies to be employed. Lodahl and Gordon (1972) used Kuhn's idea to initiate the disciplinary paradigm development model. Biglan (1973) developed the model to place disciplines on a continuum from hard to soft, and also from pure to applied, as shown in Figure 2-1. On the hard/soft continuum, the natural sciences are located on the far left-hand side, the social sciences in the centre, and the humanities and arts on the far right-hand side. This reflects a progressive relaxation of paradigmatic requirements and an increasing level of personal input by individual scholars in research (Chynoweth, 2009).

According to this model, management topics are applied ones, and the paradigmatic requirements for this kind of research need not be strictly rigorous; thus, individual input is allowed, at least to some extent. However, it seems that the field of construction management should not be deemed merely a management domain. Hughes (1994) suggests considering it a source of problems and data to which basic theories from mainstream disciplines can be applied. At the least, technology, economics, law and design are highly related to construction management research, from the perspective of the Built Environment fields (Chynoweth, 2009). Thus, construction management research should not be dominated by a single disciplinary paradigm.

Figure 2-1 The disciplinary paradigm development model
Source: initiated by Lodahl and Gordon (1972), developed by Biglan (1973), and recently used by Chynoweth (2008, 2009) in Built Environment research

2.2.2 Paradigm debates in construction management research

There has been a heated methodological debate in the discipline since the mid 1990s. It involves not only superficial objections to soft data, but also the questions of what data to use and how to use them in research, which refer to different ontological and epistemological assumptions and are represented by different research paradigms.

Although the debaters manifest substantial differences in their suggestions on how research should be done, it seems most participants do not support constructing objective theories primarily from soft data. In fact, considering the diversity of topics in this field, a view of methodological pluralism may be more appropriate (Dainty, 2008). Agreeing on what should be avoided is more credible than articulating what should be done as a dogma or monopolistic approach.

Seymour et al. (1997) criticize that the data have already been subjected to a sophisticated and unexamined process of preparation before the reported research commences. The core of their argument is not that the data should be collected more objectively and explicitly, but that data in social science research are inevitably soft. This stems from the distinction that a research object in the natural sciences is an objective entity studied 'out there', while the human beings studied in social science are capable of reporting on their own activities (Seymour and Rooke, 1995). So they do not consider the issues with soft data, such as subjective bias, replication difficulty and generalization limits, to be problematic; they consider them unavoidable. Their position is: how might we explain why things go wrong without asking those involved? (Seymour et al., 1998).

Thus, they argue that 1) objectivity is a problematic concept in social science studies; 2) the determination of meaning should be the primary goal of such research; and 3) formal methods and procedures have significant limitations (Rooke and Kagioglou, 2007). They suggest that a transfer from a rationalist paradigm to an interpretative paradigm is needed in construction management research (Rooke et al., 1997; Seymour et al., 1997; Seymour et al., 1998). The basic logic is that, in social science research, objectivity does not exist; soft data is permissible and unavoidable; and cause-effect relationships should not be objectively constructed from such data, although an interpretation of these data (phenomena) can be achieved.

Chau et al. (1998) respond that 1) Seymour et al. appear to be referring to data from informants and respondents, presumably collected by questionnaire, but this is a narrow view of data sources, since hard data such as cost, time and contract conditions also exist; and 2) even when soft data are used, the 'unexamined process of preparation' referred to can be made explicit.

Other researchers argue that research should not be data-led, but theory-led. They insist that without a theory (or theories), research is not research (e.g. Harriss, 1998).

Runeson (1997) argues that a typical example of not very meaningful research is the popular topic of establishing why things go wrong on construction projects by asking construction managers what they think the reasons are: no theory is either tested or developed.

Chau et al. (1998) believe that both approaches can contribute to knowledge, but that they play different roles in a knowledge circle, stating that 'a more useful and constructive conception would be to argue that the interpretative approaches used to investigate CM provide useful information for identification and conceptualization of the problem, which subsequently may be theorized and subject to further investigation'. Thus they suggest that interpretative research should be seen as an initiative in generating knowledge, and that, as understanding grows, research must move on to generalizable scientific investigation.

But what constitutes knowledge and theoretical contribution is not clearly discussed in these debates. Each group takes its own ontological and epistemological assumptions for granted, or claims that what constitutes knowledge is still an unsolved philosophical issue (Chau et al., 1998). A brief reference to the discussion of interpretative (essentially qualitative) and rationalist (essentially quantitative) approaches in the social science context is therefore useful.

2.2.3 Interpretative versus rationalist


In the debates in our discipline, the interpretative and rationalist approaches are treated as qualitative and quantitative/positivist approaches respectively (Runeson, 1997; Raftery, 1997), although Chau et al. (1998) argue that the interpretative approach is qualitative while the rationalist approach is usually, but not necessarily, quantitative. To avoid trapping the argument in obscure jargon definitions and taxonomies, a comparison between the quantitative and qualitative approaches is provided in Table 2-1. The table shows substantial differences between the two approaches from ontological, epistemological and methodological perspectives.

Table 2-1 Ontological, methodological and epistemological differences between quantitative and qualitative research

Can also be described as (or include):
  Quantitative: empirical-analytic (Bernstein, 1976); varieties of positivism (Moss, 1990; Philips, 1983)
  Qualitative: interpretative (Bernstein, 1976); naturalistic, inductive, relativist (Bernstein, 1976); phenomenological, hermeneutical, experimental, dialectic (Hathaway, 1995)

Fundamental assumption:
  Quantitative: the object under study is separate from, unrelated to, independent of, and unaffected by the researcher (Eisner, 1981)
  Qualitative: knowledge comes from human experience; reality is constructed by those participating in it (Howe, 1985; Jacob, 1988)

Approach to knowledge:
  Quantitative: objectively study data generated by the situation (Hathaway, 1995)
  Qualitative: become part of the situation by understanding the views of participants (Hathaway, 1995)

Aim of inquiry:
  Quantitative: to generalize from the particular to construct a set of theoretical statements which are widely applicable (Firestone, 1987)
  Qualitative: to articulate one interpretation of reality (Kent, 1991)

Researcher's role:
  Quantitative: onlooker; personal bias needs to be avoided
  Qualitative: actor

The data to be analyzed:
  Quantitative: the researcher needs to preselect a set of categories to guide the inquiry (Firestone, 1987); the categories can be derived from personal beliefs or experience, from theoretical formulation, or from former interpretative research (McCracken, 1988); only data appertaining to hypotheses, which are phrased by those preselected categories, will be collected (Howe, 1985)
  Qualitative: no intentionally prescribed categories to constrain the researcher (Denzin, 1971; Eisner, 1981; Howe, 1988); data and categories emerge simultaneously with successive experience (McCracken, 1988)

Note: the above table draws on information from Hathaway (1995).

The attack on rationalism and the favouring of the interpretative approach in fact originated in other social science disciplines (e.g. Hamel and Prahalad, 1994). Some researchers argue that the interpretative approach is scientific (e.g. Stevenson and Cooper, 1997; Sherrard, 1997), while others insist it is not (e.g. Morgan, 1996). But most researchers now agree that both approaches can contribute to knowledge. In discussing whether the two approaches can (or should) be combined, they mainly fall into three groups: the purists, the situationalists, and the pragmatists (Rossman and Wilson, 1985). The purists argue that the qualitative and quantitative approaches should not be combined because their grounding philosophies are so divergent (e.g. Guba, 1987; Smith and Heshusius, 1986); the situationalists suggest that the choice of method is partially determined by the nature of the research, and they alternate between method choices (Rossman and Wilson, 1985); while the pragmatists view the two approaches as capable of simultaneously bringing to bear both of their strengths (Hathaway, 1995). Construction management researchers seem to be mainly situationalists and pragmatists. Even amidst the heated paradigm debate, they still admit that construction management research should not be governed by a monopoly research approach.

It should be noted that whether data is "soft" is not decided by the approach used, but only by the nature of the data itself. For example, questionnaire surveys and the statistical analysis of responses are generally considered a quantitative approach (Hathaway, 1995; Dainty, 2007), yet responses from surveys are considered soft data (Chau et al., 1998). On the other hand, the assumption of an unavoidable reliance on soft data is one basis for interpretative approaches.

The primary criticism of interpretative approaches is that they only provide post hoc explanations, and lack the generalization needed to build or test theoretical statements. For example, the paradigms of ethnomethodology and symbolic interactionism have been criticized quite heavily within sociology for their rejection of generalizable theory (Harriss, 1998). This lack of generalizability and testability continually exposes interpretative approaches to the criticism of being non-scientific, although there are already many examples of good qualitative research.

On the other hand, the routine procedures of the quantitative approach can turn out research works like products on a pipeline, which has mobilized criticism from researchers on both the qualitative side (e.g. Seymour's papers discussed above) and the quantitative side (e.g. Raftery, 1998). However, research that makes "ten key factors" findings and constructs relationships among them through soft data was quite popular and is still common. A brief reference to knowledge generation and to what "theory" means is necessary for discussing whether such research exercises make a theoretical contribution and whether the theories they construct are convincing.

2.3 THEORY, MODEL AND KNOWLEDGE CIRCLE


2.3.1 The research wheel
Shoemaker et al. (2004) argue that what differentiates law, theory and hypothesis is the sufficiency of evidence, as indicated in Table 2-2.

Table 2-2 Different extents of evidence support for law, theory and hypothesis

Science jargon: Extent of evidence support
Law: Never been successfully challenged.
Theory: With considerable evidence but not complete uniformity of findings, such as the theory of evolution.
Hypothesis: To be tested.
(Source: Shoemaker et al., 2004)

In research, such evidence is data. Although Cartesians believe that people can develop explanatory scientific theories purely through reasoning, Empiricists now dominate most research disciplines, holding that empirical evidence is essential to determine the validity or falsity of a scientific theory.

Aristotle summarized two processes of research: the inductive and the deductive. The inductive process moves mainly from data to viewpoints, while the deductive process moves mainly from viewpoints to data. Rudestam and Newton (2007) describe this process as a "wheel" to indicate that research is not a closed system, and Shoemaker et al. (2004) likewise state that the spirit of scientific research is continuous self-questioning. Figure 2-2 visualizes these two processes and shows the role of data in research.


Figure 2-2 The research wheel

However, CM researchers are not unanimous on which process constitutes a knowledge contribution. Chau et al. (1998) affirm that both processes contribute to knowledge generation but play different roles. Since Hughes suggests that CM is not an academic discipline with its own research theories (Hughes, 2001), but a source of problems and data whose solutions and approaches need to be based within established academic disciplines (Hughes, 1997), it seems he at least does not advocate parochial CM theory-building research. On the other side, Fellows and Liu (2002) claim that deduction does not allow knowledge to be advanced, while induction is valuable in extending current knowledge boundaries; they hint that specific CM theory research is valuable since it may yield higher-level information. The latter view seems to be the basis for the popular approach of claiming "ten key factors" findings and postulating theories in the construction management domain.

One demerit of the research wheel model is that it does not differentiate soft and hard data, which weigh differently as evidence. The qualitative approach acknowledges that its purpose is to give an explanation of the phenomena (data) within their context, so it has little intention of providing a generalized theoretical statement or guidance for future activities; relying on soft data is allowed and unavoidable. The quantitative approach covers both the left and the right semicircles: it can be a theory-building approach in the inductive process or a theory/hypothesis-testing approach in the deductive process. The strength of the evidence needs to be considered. This logic is not substantially different from that used in the law courts to support one party's arguments.

2.3.2 What does theory mean?


In the methodological debate, both sides raised the question of what theory means. Chau et al. (1998) attribute the fuzziness of the debate to some fundamental epistemological questions, such as what constitutes scientific knowledge, not being addressed. They claim six assumptions underlie the scientific approach: 1) there is some order in nature; 2) we can understand patterns in nature and in ourselves; 3) relative knowledge, even if it is flawed and changing, is superior to ignorance; 4) all natural phenomena have natural causes; 5) nothing is self-evident; 6) knowledge is derived from experience (Chau et al., 1998). They also point out that research is much more than merely finding out something new. However, it seems they still do not answer what theory means or what constitutes a theoretical contribution.

Some researchers suggest that theory is not impractical, nonessential, incomprehensible or platitudinous; it is simply one's understanding of how something works (e.g. Shoemaker et al., 2004 in the general social science context; Fellows and Liu, 2002 in the construction management discipline). On this view, the difference between law, theory and hypothesis is the sufficiency of data support (Shoemaker et al., 2004). However, this compartmentalization does not provide a clear distinction among the three concepts, and it ignores their differences in nature. A result of this view of theory is that "any selection of variables that shows a statistical correlation or can be fitted into a regression model seems sufficient for a paper" (Runeson, 1997). In a book review of Construction Economics: a new approach (Danny, 2004), Runeson (2004) comments that "there is always a temptation to dispense with the hard theory and concentrate on soft topics, such as green markets, partnering, the retail price index, sustainable economic growth, discounting and the like". In another applied discipline, operations management, Schmenner et al. (2009) express a similar worry: theories come but never go, so there are too many theories but not enough understanding. Kaplan, an influential philosopher of social science, pointed out as early as 1964 that the predicament of behavioural science is not the absence of theory but its proliferation (Kaplan, 1964).

Hypotheses should be deemed provisional guesses or suspicions, which explain existing data and guide the collection of further data for testing. As hypotheses are supported by more and more evidence, especially evidence of different kinds, they can often be organized into laws (Hempel, 1966). Laws are precise descriptions of observed and supported regularities (Schmenner and Swink, 1998). After a system of uniformities has been revealed, or even when the form of empirical laws can be clearly expressed, theories are introduced, seeking to explain those regularities and to afford a deeper and more accurate understanding of the phenomena in question (Hempel, 1966).

The above does not mean that, without theory, the discovery of a relationship among variables cannot be regarded as a contribution to knowledge. For example, Kepler's laws of planetary motion and Boyle's law of gases were accepted as laws well before there were theories to explain why they work as they do (Schmenner and Swink, 1998). Another example: although we have accepted Einstein's theory of mass-energy inter-transformation, and have applied it in the use of nuclear power, there is still no theory clearly explaining why mass can be transformed into energy through annihilation.

Management researchers may prefer a more comprehensive view of what constitutes a theoretical contribution (e.g. Whetten, 1989). Dubin (1978) and Whetten (1989) suggest that a complete theory must contain four essential elements, as follows:

1 What. This refers to which factors (variables, concepts, constructs, etc.) should be considered part of the explanation of the phenomena being investigated (Whetten, 1989). The numerous construction management research exercises yielding interview/questionnaire-based "ten key factors" findings seem to be of this kind. A good theory needs to include as many substantively relevant factors as possible (comprehensiveness) while excluding trivial factors that add little or no additional value to the understanding (parsimony) (Whetten, 1989).
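For illustration, such "what"-level findings are typically produced by ranking factors with an importance index computed from Likert-scale survey responses. The sketch below uses entirely hypothetical factor names and ratings, and the relative importance index (RII) formula shown is one common convention rather than a fixed standard:

```python
# Hypothetical 1-5 Likert ratings from five respondents for three candidate
# factors contributing to a problem (illustrative data only).
ratings = {
    "late client payment":  [5, 4, 5, 3, 4],
    "disputed variations":  [3, 4, 2, 3, 3],
    "contractor cash flow": [4, 5, 4, 4, 5],
}

def rii(scores, max_point=5):
    # RII = sum of ratings / (highest possible rating * number of respondents),
    # giving a value between 0 and 1 for each factor.
    return sum(scores) / (max_point * len(scores))

# Rank factors from most to least important by their RII.
ranked = sorted(ratings, key=lambda f: rii(ratings[f]), reverse=True)
for factor in ranked:
    print(f"{factor}: RII = {rii(ratings[factor]):.2f}")
```

A ranking of this kind identifies candidate factors, but, as the surrounding discussion argues, it remains a "what"-level description rather than a theory.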

2 How. After identifying a set of important factors, a researcher needs to answer: how are they related? What and how together already provide a testable statement. However, discussion at this level is only empirical rather than theoretical. Poincaré (1983) notes that "Science is facts, just as houses are made of stone... But a pile of stones is not a house, and a collection of facts is not necessarily science." That is why construction management researchers may criticize themselves for the lack of theory in soft-data induction research.

3 Why. This building block of theory answers the question: what are the underlying dynamics, whether psychological, economic, social or the like, that justify the selection of factors and the proposed causal relationships (Whetten, 1989)? What and how are merely descriptions, while why provides explanation. The essential ingredients of even a simple theory should include both parts: description and explanation.

4 Who, when and where. In a quantitative approach, these conditions place limitations
on the propositions generated from a theoretical model. In a qualitative approach,
these contextual conditions decide the meaning (Gergen, 1982).

Both Dubin (1978) and Whetten (1989) argue that there is no substantial difference between a model and a theory, so a reference to models can also cast light on the use of theory.


2.3.3 The use of models/theories


A model is not a precise reflection of reality; it is an idealized representation of what is being studied (Raftery, 1998). Tate and Jones (1975) suggest that a model is "a representation of reality made sufficiently explicit for one to be able to examine the assumptions embodied within it, to manipulate it and experiment with it, and, most important of all, to draw inferences from it which can be applied to reality". Thus, the use of a model, or of a theory, is to simplify complex realities into an understandable and manageable form. The relationship between real-world systems, the systems selected for study, and models/theories is illustrated in Figure 2-3.

Figure 2-3 Model (theory) and reality

Source: Taha, 1971; Raftery, 1998


From different ontological perspectives, the real-world system can be an objective existence "out there", or something constructed by human beings, or a combination of both; whether it can be fully understood is an issue of epistemology. Figure 2-3 hints that human understanding is, and should be, a simplified system, which loses many of the minutiae of reality but hopefully retains the general form (Raftery, 1998). Models can be classified, based on their structural differences, into four types: iconic, analogue, symbolic (Churchman et al., 1957), and conceptual (Tate and Jones, 1975).

Based on different extents of understanding of the real world, three types of decisions can be made: intuitive, programmed and analytical (Bunn, 1984). If we already know what to do, we can make intuitive decisions without conscious analysis; if no answer is immediately obvious, but we have a set of criteria, guidelines or other instructions from former knowledge, we may make programmed decisions; if knowledge is insufficient for either an intuitive or a programmed decision, we need to analyze the problem to forecast the consequences of possible actions, assess them and then make analytical decisions (Raftery, 1998). When facing new conditions for an old problem, decision-making may revert from intuitive back to programmed or analytical (Raftery, 1998).

2.3.4 Induction vs. deduction


2.3.4.1 Induction
The research wheel model indicates that the knowledge generation circle includes an induction semicircle and a deduction semicircle. Since the qualitative approach aims at providing one explanation of the phenomena rather than a generalized theoretical statement, it seems this paradigm relates little to induction and deduction; these two concepts are more pertinent to the quantitative approach.

Relying on original data to draw conclusions seems to be an inductive mode. This approach is quite popular in construction management research, and some CM researchers (e.g. Fellows and Liu, 2002) even claim that new knowledge can only come from induction. However, in the philosophy of (natural) science there is no "right" form of inductive logic. Theories, and even hypotheses, are not mechanically constructed from empirical data, but invented, with the aid of data and ingenuity, through a "happy guess" (Hempel, 1966). Any bold conjecture is therefore allowed, but its validity is subject to, and guaranteed by, further testing. As Hempel (1966) suggests, scientific knowledge is not arrived at by applying some inductive inference procedure to antecedently collected data, but rather by what is often called "the method of hypothesis": inventing hypotheses as tentative answers to a problem under study, and then subjecting these to empirical test.

Popper (1959) argues that data can be used to falsify hypotheses/theories, but cannot
prove them. His illustration is that a new white goose cannot prove that all geese are
white, but a black goose can disprove it. This logic can be illustrated by the
following deductive forms, provided by Salmon (1965) and Hempel (1966).

1. Deductively invalid forms:

(1) Hypothesis: If H is true, then so is I.
    Evidence (data): I is true.
    Invalid conclusion: H is true.

(2) Hypothesis: If H is true, then so are I1, I2 ... In.
    Evidence (data): I1, I2 ... In are all true.
    Invalid conclusion: H is true.

2. Deductively valid forms:

(1) Hypothesis: If H is true, then so is I.
    Evidence (data): I is not true.
    Valid conclusion: H is not true.

(2) Hypothesis: If P, then Q.
    Evidence (data): It is not the case that Q.
    Valid conclusion: It is not the case that P.
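The contrast between the invalid and the valid forms can be checked mechanically by enumerating truth values. The following is a small illustrative sketch (Python, not part of the original argument); it treats the hypothesis as a material implication and reports whether any assignment makes every premise true while the conclusion is false:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    # An inference form is valid iff no truth assignment makes all premises
    # true and the conclusion false.
    for h, i in product([True, False], repeat=2):
        if all(p(h, i) for p in premises) and not conclusion(h, i):
            return False  # counterexample found
    return True

# Invalid form (affirming the consequent): from "if H then I" and "I", infer "H".
affirming_consequent = is_valid(
    premises=[lambda h, i: implies(h, i), lambda h, i: i],
    conclusion=lambda h, i: h,
)

# Valid form (modus tollens): from "if H then I" and "not I", infer "not H".
modus_tollens = is_valid(
    premises=[lambda h, i: implies(h, i), lambda h, i: not i],
    conclusion=lambda h, i: not h,
)

print(affirming_consequent)  # False: confirming evidence cannot prove H
print(modus_tollens)         # True: falsifying evidence can disprove H
```

The counterexample to the invalid form (H false, I true) is exactly Popper's point: the white goose is consistent with the hypothesis being false.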

Thus, the popular CM theory-building research should be questioned not only from the perspective of what constitutes a theory, as discussed in the former sub-sections, but also from the validity of the theories constructed. Data provides only partial corroboration or confirmation of a statement, never a complete demonstration (Hempel, 1966). When a new falsifying case arises, the existing theory should be amended through new hypotheses to accommodate the new evidence; but a theory will be abandoned if too much revision is needed, since science progresses through self-questioning rather than self-support (Hempel, 1966; Shoemaker et al., 2004).

Data plays the role of evidence in this test, and different data weigh differently: their quantity, variety and precision need to be taken into account, and new supporting evidence apparently has a strong effect (Hempel, 1966). In construction management research, it is not uncommon to use interviews and questionnaires to identify prior categories of variables and quantify them to construct relationships; case studies are then used to confirm these statements. After textbook skills are applied, the findings are claimed to be objective, representative and practical, and the results may thus be generalized into theories to guide future actions. This seems to be a pragmatic approach, since both qualitative and quantitative data (approaches) are used. However, all of these kinds of data are soft and thus share similar weaknesses. If hard data can be applied as triangulation or confirmation, the strength of the evidence can be substantially enhanced. Further original data, such as from case studies, can in fact only indicate that the interview/questionnaire-based statement can be applied in practice; it can never prove that the statement is right.

Hempel (1966) argues that "while scientific inquiry is certainly not inductive in the narrow sense, it may be said to be inductive in a wider sense, inasmuch as it involves the acceptance of hypotheses on the basis of data that afford no deductively conclusive evidence for it, but lend it only more or less strong inductive support or confirmation". Whether the support is strong or not depends on the strength of the data and on the inner reasoning logic from data to theory. In accepting a newly suggested hypothesis/theory, theoretical support may be as important as, or even more important than, data support. Hempel (1966) suggests that "a statement of universal form, whether empirically confirmed or as yet untested, will qualify as a law if it is implied by an accepted theory; but even if it is empirically well confirmed and presumably true in fact, it will not qualify as a law if it rules out certain hypothetical occurrences which an accepted theory qualifies as possible".

2.3.4.2 Deduction
Although there is no general rule of induction, valid deductive reasoning logic exists, as indicated above. But deduction has its own weaknesses: the auxiliary hypothesis problem (Duhem, 1954) and the infinity of deductive statements (Hempel, 1966).

Auxiliary hypothesis form:

Hypothesis: If both H and A are true, then so is I.
Evidence (data): I is not true.
Conclusion: H and A are not both true.

When I is not true, the deductive form can safely disprove only the conjunction: H, or A, or both, must be false. Without confirmation that H is right, one cannot disprove A. Thus, strictly construed, such a crucial experiment is impossible in science (Duhem, 1954). Runeson (1997) argues that many social science theories may be based on motivational assumptions, which cannot be tested, falsified or improved through empirical data. Machlup (1978) suggests that the validity of auxiliary assumptions restricts the testing of theories. This has led to a de facto rejection of Popper's falsification idea (Runeson, 1997).
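Duhem's point can be illustrated in the same truth-table style. This sketch (again assuming the material-implication reading of the hypothesis, purely for illustration) shows that the falsifying evidence refutes the conjunction of H and A, but neither conjunct alone:

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

def entailed(conclusion):
    # Do "if (H and A) then I" together with "I is not true" entail the
    # conclusion? Valid iff no assignment makes the premises true and the
    # conclusion false. The evidence fixes I as false.
    for h, a in product([True, False], repeat=2):
        i = False  # evidence: I is not true
        if implies(h and a, i) and not i and not conclusion(h, a):
            return False  # counterexample found
    return True

print(entailed(lambda h, a: not (h and a)))  # True: the conjunction fails
print(entailed(lambda h, a: not h))          # False: H alone is not refuted
print(entailed(lambda h, a: not a))          # False: A alone is not refuted
```

The two counterexamples (H true with A false, and A true with H false) are why a "crucial experiment" against H alone is impossible without independent confirmation of A.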

Hempel (1966) points out that, for any given set of premises, the valid deductions are infinite; thus, a particular conclusion cannot be mechanically deduced from the premises. Hempel (1966) further claims that the discovery of important, fruitful theorems requires inventive ingenuity. However, in contrast to induction, where theory can only be invented, the deductive process can be seen as an application of general theories in specific fields.


This can be demonstrated as the deductive-nomological explanation schema (Salmon, 1965):

L1, L2 ... Lr    (general laws)
C1, C2 ... Ck    (contextual conditions)
----------------------------------------
E                (explanation of contextual phenomena)

L denotes the general laws, while C provides the contextual conditions. Although no specific conclusion can be exclusively deduced from general laws alone, it can be derived from the combination of L and C. If we see construction management as an applied discipline, then L can be considered as general theories/laws from mainstream disciplines, while C provides the data sources and problems in the field of construction management. E is the result of applying L in C.
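As a concrete illustration of the schema, a general law L combined with contextual conditions C deductively yields E. The sketch below uses Newton's law of universal gravitation as L and approximate Sun-Earth values as C; the numbers are standard textbook values, included only to make the schema tangible:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def law_of_gravitation(m1, m2, r):
    # L: the general law, holding for any pair of bodies.
    return G * m1 * m2 / r**2

# C: contextual conditions -- approximate masses of the Sun and the Earth,
# and their mean separation in metres.
m_sun, m_earth = 1.989e30, 5.972e24
r = 1.496e11

# E: the explanandum -- the force holding the Earth in its orbit follows
# deductively from L applied in C.
force = law_of_gravitation(m_sun, m_earth, r)
print(f"{force:.2e} N")  # roughly 3.5e22 N
```

Neither L alone nor C alone yields E; only their combination does, which is the structure of both Le Verrier cases discussed next.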

Knowledge contribution is therefore not restricted to the narrow data-to-theory induction suggested by some construction management researchers; it can come through L, C and E. Hempel (1966) presents a pair of interesting comparative cases from astronomy, both involving the astronomer Urbain Le Verrier:

The first case:

L: Newtonian theory
C (known): Detected celestial bodies
C (to be tested): An undetected planet
E: Explanation of the irregularities in the motion of Uranus

With the celestial bodies already detected at that time, the motion of Uranus did not conform to Newton's laws of gravity and motion. Newtonian theory was considered an unchallenged law, so introducing a new condition C seemed a good choice for explaining E. Le Verrier therefore conjectured that the irregularities in the motion of Uranus resulted from the gravitational pull of an undetected outer planet, and calculated the required parameters of this potential planet, such as its position, mass and orbit. The prediction was strikingly confirmed by the discovery of Neptune (Hempel, 1966)!
The second case:

L: Newtonian theory
C (known): Detected celestial bodies
C (to be tested): An undetected planet
E: Explanation of the motion of Mercury

L: Einstein's general theory of relativity
C (known): Detected celestial bodies
E: Explanation of the motion of Mercury

In this case, with the already detected celestial bodies, the motion of Mercury did not conform to Newtonian theory. Le Verrier again conjectured that there should be a very dense, small object between the Sun and Mercury, but no such planet could be found. Much later, with Einstein's general theory of relativity, E was successfully explained by L and C (Hempel, 1966).

It should be noted that:

1) Both cases indicate a knowledge contribution;
2) There are almost no inductive elements in either case;
3) The data is hard;
4) Even though Newtonian theory was falsified in the second case, it is still taught in classrooms. A theory is a simplified version of reality; there is no need to use the theory of relativity to analyze physical actions in everyday life, although it is closer to reality.


2.4 RESEARCH DESIGN


As competing philosophical systems may differ substantially and yet each have their justification from different perspectives, the research approaches are not right or wrong in themselves. It is difficult to offer a general statement of how to do research. The methodological review cannot suggest an approach that should dominate, but it does provide a comprehensive understanding of what can be done and of its weaknesses. Chau et al. (1998) suggest that in construction management research the choice of research approach depends on the nature of the problem. It seems to come back to the cliché of choosing appropriate methods for specific research, but the above review has illuminated what "appropriate" means. In this light, the purpose of the present research, the difficulties encountered and the research design are briefly introduced below in the context of the above methodological discussion.

Section 1.3 listed six specific objectives of this research in pursuit of the general purpose: to make an original contribution to knowledge of payment problems in the construction industry. This can be approached through three sets of questions: 1) what are the payment problems and the measures to address them; 2) why do the problems arise and how are these measures expected to work; and 3) within the context of the Mainland China construction industry, what are the factors contributing to payment problems and their relative ranking?

The first difficulty in this research is the lack of a uniform terminology (jargon) for comparing payment problems across jurisdictions: different expressions, such as "payment problems", "financial difficulties" and "payment arrears", are used in different jurisdictions. The first task was therefore to define a common scope and conceptual framework so that the international data would be comparable. The concept and scope of "payment arrears" was chosen because 1) it is the core problem in Mainland China and also exists in other jurisdictions; 2) hard data on legislation addressing the problem can be solicited and compared across jurisdictions; and 3) official hard data is available in Mainland China.

The second difficulty is the data itself. Data on measures addressing payment arrears can be obtained through a review of regulations in different jurisdictions, but detailed data reporting the problem is not widely available internationally. An international review indicated that the problem of payment arrears does exist globally; Mainland China was chosen as the context to provide detailed data. Interviews and a questionnaire survey were also conducted to solicit opinions about payment arrears from practitioners in the Mainland China construction industry. Given the different occupational positions and conflicting interests, one difficulty was obtaining honest responses from clients, or even consultants, on a topic that may shed light on their own bad practices. In the interviews in Mainland China, most clients denied having committed payment arrears and thus felt there was no need to interview them; in the questionnaire survey, only a small number of clients and consultants gave feedback. The opinions solicited were therefore mainly from contractors and sub-contractors, and may be skewed: they may exaggerate the problem, over-blame clients and exculpate the sellers themselves.

The review of problems and measures in international settings and in Mainland China provided hard data that must be properly understood. Since different groups of stakeholders have different, even conflicting, interests, using hard data to indicate the existence of the problems is neat and reliable. Theoretical models were then developed through game-theoretic reasoning, with basic motivational assumptions and educated guesses, to construct explanations. They are applications of general laws and theories, in the domain of construction management, to the specific field of payment arrears in the construction industry, and they play the role of the how and the why in the theoretical contribution.


Some sub-questions and sub-hypotheses were developed from the models and were answered or tested through the interviews and questionnaires. This part plays a dual role: to support the suggested models through soft-data triangulation, and to derive some original findings through a deductive approach under specific conditions. The soft data was used not to construct a theory, but to support (or falsify) hard-data-based models which conform to accepted knowledge. Furthermore, the findings were derived not only from the induction of soft data, but also with support from the theoretical models. The research design map is visualized in Figure 2-4.

Figure 2-4 The research design map

It should be noted that although the theoretical models can be applied in other jurisdictions, the specific findings on the factors, and their rankings, contributing to payment arrears in the Mainland China construction industry depend on the particular context. Domain-specific regulations, culture, the credit system, socio-economic level, etc. are specific conditions that bear on this deductive-nomological reasoning.

