PhD students are expected to make an original contribution to knowledge, and need to
justify their research design to convince their audiences, including examiners, that such
a contribution is reliable. This should involve not only a depiction of the techniques
used to obtain and analyze data, but also an understanding of knowledge and how it is
generated and developed.
Hathaway (1995) criticizes that researchers generally make the method choice
decision with relative ease, choosing the method that will provide the needed
information, without giving much thought to the assumptions underlying research
methods. In our Construction Management discipline, this criticism is pertinent, since
many doctoral dissertations misuse the methodology chapter as a place merely to
describe how they obtained data. Smyth and Morris (2007) contend that most authors fail to
make explicit their theoretical, epistemological or methodological positions. Rooke et
al. (1997) express their objection to incompetent researchers getting away with a
mechanistic application of formal procedure.
While situated in the general context of research, this chapter will focus on discussions
in our discipline arising from heated methodological debates, and will refer to the more
general context if and when needed. It will 1) review the most popular research approach
adopted by construction management scholars; 2) review the methodological debates
in the construction management discipline over the past decade; 3) contribute to this
discussion, with an emphasis on the relationship between data and knowledge
generation; 4) report the difficulties encountered in doing this research; and 5) justify
the basic research design of this research.
2.1 RESEARCH IN THE CONSTRUCTION MANAGEMENT DISCIPLINE: SUMMARY AND CRITIQUE
2.1.1 Reliance on original data
Many scholars have reviewed papers in the construction management and economics
research community, and investigated its paradigms and trends (e.g. Betts and
Lansley, 1993; Dainty, 2007; Hua, 2008; Pietroforte and Stefani, 2004). Generally,
they reviewed papers published in leading journals in the discipline, such as
Construction Management and Economics and the American Society of Civil
Engineers (ASCE) Journal of Construction Engineering and Management, which are
identified as two major academic journals that have been ranked first and second in an
international survey of construction management journals (Chau, 1997), or other
influential journals like International Journal of Project Management (e.g. Smyth and
Morris, 2007).
Philips and Pugh (1994) state that a qualified PhD work needs to say something useful
and novel that the research community wishes to hear. It seems that generating and
using original data will make it easier to develop original conclusions, although it is
not necessarily a prerequisite (Hughes, 1994). The approach of relying on original
data to announce findings and make conclusions in the construction management
research community was confirmed by Betts and Lansley (1993). They reviewed
papers published in the first ten years of the journal Construction Management and
Economics, i.e. from 1983 to 1992. They found that 1) seventy per cent of the papers
were based on original or nearly original data; and 2) the basis for the papers drew
almost equally from reviews, case studies and empirical work.
Hughes (1997) also indicated that some researchers seek to discover solutions to
research questions by asking practitioners what they do, with a view to codifying and
representing best practice. A popular topic is to establish why things go wrong on
construction projects by asking construction managers what they think the reasons are
(Runeson, 1997). Yung and Yip (2009) found that researchers often quantify variables
through informants' responses, and then construct relationships among the variables to
draw conclusions. Their literature review (Yung and Yip, 2009) shows the dominance
of opinion surveys as the means of data collection: out of 35 research exercises on
construction quality, 28 obtained information through questionnaire surveys and 2
through case studies, while the other 5 did not use data.
However, many researchers argue that the current problem is not the lack of a single
leading theory for the whole discipline, which would be over-ambitious as indicated
above, but the absence of any theory at all, despite assertions to the contrary in much
of the research.
Betts and Lansley (1993), based on their review, claim that construction management
research is rather inward-looking, self-referential, and lacking in guidance from, and
contribution to, theory. Runeson (1997) criticizes that researchers appear to be
unaware of existing theories, and that there is no science and no theory at all, so that
any selection of variables that shows a statistical correlation or can be fitted into a
regression model seems sufficient for a paper. Harriss (1998) criticizes that
construction management researchers have for too long ignored the centrality of
theory to human activity, and argues that research without theory is not research.
As summarized above, much construction management research relies on original
data to draw its conclusions. These original data typically come from interviews,
questionnaire surveys, project-based case studies, Delphi surveys, etc., which are
conducted by, and only by, the researchers themselves. This raises a concern about
subjective bias.
The basic logic of the above approach is that honest answers to well-designed
questions from an appropriate sample can reflect the situation of the whole population
under research. Thus, a conclusion from questionnaires, interviews or case studies can
be generalized to the population. Many textbooks teach how to obtain objective and
representative responses through careful design, such as various sampling measures
and subtle questionnaire design (e.g. Bradburn, 2004; Fellows and Liu,
2002; Gillham, 2000), complemented by complex statistical analysis (Walonick,
2003). But it cannot be denied that the nature of this kind of data is soft.
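The generalization logic above can be illustrated with a short simulation. The following Python sketch is purely illustrative: the "population", its two sub-groups and all figures are invented, not drawn from any study. It shows how a sample skewed toward one occupational group pulls an estimate away from the population value, which is exactly the vulnerability of soft data discussed here.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 practitioners' opinion scores (roughly 1-5).
# Two sub-groups with different typical opinions (e.g. clients vs contractors).
population = [random.gauss(3.8, 0.5) for _ in range(5000)] + \
             [random.gauss(2.6, 0.5) for _ in range(5000)]

mean = lambda xs: sum(xs) / len(xs)
true_mean = mean(population)

# A representative simple random sample of 200 respondents.
random_sample = random.sample(population, 200)

# A biased sample: respondents drawn mostly from the first sub-group,
# e.g. informants already known to the researcher.
biased_sample = random.sample(population[:5000], 180) + \
                random.sample(population[5000:], 20)

print(f"population mean: {true_mean:.2f}")
print(f"random sample:   {mean(random_sample):.2f}")  # close to the population mean
print(f"biased sample:   {mean(biased_sample):.2f}")  # pulled toward the first group
```

The random sample estimates the population mean closely; the biased sample systematically overstates it, even though every individual answer is "honest".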
It is difficult to give accurate definitions to hard data and soft data. As subjectivity
cannot be totally excluded from research processes, no (or little) data is 100 percent
hard. Even if the data source is absolutely hard (objective), the decision to use that
specific set of data for the research is still made by (subjective) researchers. However,
soft data are more vulnerable to subjective influences. When comparing two data
sources, such as national statistical data and a Delphi survey, it is not difficult to
identify which one is more inclined to be soft. This refers to the vulnerability of the
data collection process to influence by human factors, and does not mean that
every set of hard data is more reliable than every soft data set. For example, one case
of questionable hard data is that the sum of all regional GDPs in Mainland China, as
issued by each local Government, is always larger than that of the whole country,
issued by the national statistics bureau.
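An internal consistency check of this kind is simple to automate. The following Python sketch uses invented figures (not actual Chinese statistics) purely to illustrate comparing a national total against the sum of regional figures.

```python
# Hypothetical figures in billions; NOT actual statistics.
regional_gdp = {"Region A": 510.0, "Region B": 430.0, "Region C": 390.0}
national_gdp = 1250.0  # total issued by the (hypothetical) national bureau

regional_total = sum(regional_gdp.values())
discrepancy = regional_total - national_gdp

print(f"sum of regional GDPs: {regional_total:.1f}")
print(f"national GDP:         {national_gdp:.1f}")
if discrepancy > 0:
    # The regional figures cannot all be consistent with the national total.
    print(f"inconsistency: regions exceed the national total by {discrepancy:.1f}")
```

When the regional sum exceeds the national figure, at least one of the "hard" data sources must be wrong, even though none of them is a subjective opinion.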
Seymour et al. (1997) criticize that the data, which are often treated with explicit
mathematical analyses, have already been subjected to a sophisticated and unexamined
process of preparation before the reported research commences.
In applying the above data collection methods, the research can be seen as a
communication process between researchers and respondents who share information
over time to converge upon a mutual understanding (Ruesch and Bateson, 1968). It
seems to be a two-way, non-linear interaction, rather than a one-way, one-off flow of
information (Loosemore, 1999; Loosemore and Tan, 2000). The benefits of this
non-linear communication include greater convergence, flexibility, the transmission of
emotion, personal relationships and silent language, etc. (Loosemore, 1999); however,
the subjectivity seems unavoidable and the validity of the data becomes questionable.
The problem can arise from both respondents and researchers. First, a potential for
bias can arise from people's association with a particular group (Loosemore and Tan,
2000). Lawson (1979) indicated that different methods and explanations can be
developed for the same phenomenon by different groups, say architects and engineers.
Occupations can also influence the way to investigate and solve problems (Cann,
1990; Pierre et al., 1996). Further, distinct cultures, which are developed from
different occupations, produce different mind-sets and ways of seeing and interpreting
the world (e.g. Manis, 1996). The construction industry involves distinct occupational
groups, such as clients, designers, architects/engineers, contractors, sub-contractors,
and material suppliers. This occupational difference constitutes a strong source of
cultural differentiation (Bowley, 1966; Bennett and Wittaker, 1994; Munns, 1996).
Accordingly, opinions from different groups can be substantially different, even
opposite; Rahman (2003), for example, found that clients and contractors have
contrary opinions on the desired allocation of risks.
Thus, in a topic involving conflicting interests among different groups, the validity of
this kind of data is, at least to some extent, questionable. On the other hand, in a topic
of how to do things better where common goals are shared by different parties,
responses from different groups may reveal something constructive. Whether the data
collected are reliable is thus highly influenced by the research topic. In this sense,
Hughes (1997) pointed out that asking practitioners what they do cannot produce
counter-intuitive, unexpected ideas, upon which our understanding of construction
management may depend.
The other source of potential detriment to the validity of data comes from researchers.
This may involve the choice of respondents, the selection and filtration (intended or
not) of information, and the quality of questions soliciting data.
Also, in much CM research, questionnaires are sent to informants who are either known
to the researchers (typically met in seminars, conferences or other occasions, so a
common interest or even opinion can be expected) or unknown to them (typically
selected at random from the name-list of a construction industry organization). It is
not unreasonable to expect higher response rates from people who are known to the
researcher. But little CM research reports this difference and the consequent potential
bias, or analyzes the information from these two sources separately.
Another problem of the soft data is that its validity depends on the subjective
judgement of different researchers. Two recent cases are Phua and Rowlinson (2004)
and Zhang and Liu (2006), who relied upon questionnaire surveys to analyze national
cultural differences in the construction industry. However, Rooke and Kagioglou
(2007) criticize this approach and allege that the questionnaires were designed without
direct knowledge of the activity under study, so the questionnaires in the above two
research exercises are allegedly irrelevant, misleading or meaningless. Rooke and
Kagioglou (2007) suggest a Unique Adequacy (UA) requirement as a means of
evaluating research. This comprises 1) a weak form which demands that the
researcher is competent in the research setting, and 2) a strong form that demands that
research reports use only concepts originating within the research setting. It seems
their suggestions are presented in the context of a qualitative approach rather than a
quantitative approach.
However, it seems that much construction management research cannot pass the test
of replicability, and this failure is closely related to the difficulty of repeating the
data collection.
Langford (2009) confesses that many of the studies undertaken in the construction
management discipline are not subject to replication. Loosemore and Tan (2000) note
that replication is not a discernible habit of the construction management research
community, and attribute this to the methods used not being easy to replicate. Harriss
(1998) criticizes that much construction management research proclaims knowledge
through survey methods, but there is no guarantee that the next observation will not
differ from the former.
Kong were found. The purpose is to compare, rather than replicate, the data, and the
validity cannot be tested.
Hempel (1966) indicates that empirical facts or findings can be qualified as logically
relevant or irrelevant only with reference to a given hypothesis, since it is impossible
to collect all the relevant data without knowledge of the hypotheses or research
questions. Thus, it is not unreasonable for researchers to collect original data for
specific research purposes. The problem is that the data collected are, at least to some
extent, soft in our research, as they may be in most social science research, when
compared to data used in the natural sciences. It seems there are three ways to address
this problem: 1) using hard data instead, or for triangulation; 2) applying special
approaches and tools to obtain data that are as objective and representative as possible,
and excluding incidental effects or aberrations through complex statistical analysis;
and 3) confessing the limitations, or even transferring to another paradigm, when using
such data.
The second approach risks research becoming dominated by methodology, so that
substance gives way to form. The third approach can also become a routine in
acknowledging limitations, or generate heated debates on how to use such data.
What data to use and how to use them can highly depend upon disciplinary traditions.
Kuhn (1962) noted that different academic disciplines are characterized, to different
extents, by the presence of paradigms that prescribe the appropriate problems of study
and the validity of methodologies to be employed. Lodahl and Gordon (1972) used
Kuhns idea to initiate the disciplinary paradigm development model. Biglan (1973)
developed the model to place disciplines on a continuum from hard to soft, and also
from pure to applied, as shown in Figure 2-1. In the hard/soft continuum, natural
science is located on the far left hand side, the social science in the centre, and
humanities and arts on the far right hand side. This reflects a progressive relaxation of
paradigmatic requirements and the increasing level of personal inputs by individual
scholars in research (Chynoweth, 2009).
According to this model, management topics are applied ones, and the paradigmatic
requirements for this kind of research need not be strictly rigorous. Thus, individual
input is allowed, at least to some extent. However, it seems that the field of
construction management should not be deemed as merely a management domain.
Hughes (1994) suggests considering it as a source of problems and data where basic
theories from mainstream disciplines can be applied. At least, technology, economics,
law and design are highly related to construction management research, from the
perspective of the Built Environment fields (Chynoweth, 2009). Thus, construction
management research should not be dominated by the disciplinary paradigm.
Figure 2-1 The hard/soft and pure/applied continuum of disciplines (Source: initiated
by Lodahl and Gordon (1972), developed by Biglan (1973), and used recently by
Chynoweth (2008, 2009) in Built Environment research)
Asserting what should be avoided may be more credible than articulating what should
be done as a dogma or monopolistic approach (Dainty, 2008).
Seymour et al. (1997) criticize that the data have already been subjected to a
sophisticated and unexamined process of preparation before the reported research
commences. The core of their argument is not that the data should be collected more
objectively and explicitly, but data in social science research is inevitably soft. It
stems from the distinction that a research object in natural science is an objective
entity being studied out there, while human beings studied in social science are
capable of reporting on their own activities (Seymour and Rooke, 1995). So they do
not consider issues with soft data, such as subjective bias, difficulty of replication and
limits to generalization, to be problematic; they consider them unavoidable. Their
position is: how might we explain why things go wrong without asking those involved?
(Seymour et al., 1998).
Thus, they argue that 1) objectivity is a problematic concept in social science studies;
2) the determination of meaning should be the primary goals in such research; and 3)
formal methods and procedures have significant limitations (Rooke and Kagioglou,
2007). They suggest that a transfer from a rationalist paradigm to an interpretative
paradigm is needed in construction management research (Rooke et al., 1997;
Seymour et al., 1997; Seymour et al., 1998). The basic logic is that in social science
research objectivity does not exist, so soft data are allowed and unavoidable;
cause-effect relationships cannot be objectively constructed from such data, but an
interpretation of these data (phenomena) can be achieved.
Chau et al. (1998) counter that 1) Seymour et al. appear to be referring to data from
informants and respondents, presumably collected by questionnaires, but this is a
narrow view of data sources, since hard data such as cost, time and contract conditions
also exist; and 2) even when soft data are used, the unexamined process of preparation
can itself be examined and made explicit.
Other researchers argue that research should not be data-led, but theory-led. They
insist that without a theory (or theories), research is not research (e.g. Harriss, 1998).
Chau et al. (1998) believe both approaches can contribute to knowledge, but they play
different roles in a knowledge circle, stating that a more useful and constructive
conception would be to argue that the interpretative approaches used to investigate
CM provide useful information for identification and conceptualization of the
problem, which subsequently may be theorized and subject to further investigation .
Thus they suggest that interpretative research should be deemed an initiator in
generating knowledge; as understanding accumulates, research must move on to
generalizable scientific investigation.
But what constitutes knowledge and theoretical contribution is not clearly discussed
in their debates. Each group takes its own ontological and epistemological
assumptions for granted, or claims that what constitutes knowledge is still an unsolved
philosophical issue (Chau et al., 1998). A brief reference to the discussion of
interpretative (essentially qualitative) and rationalist (essentially quantitative)
approaches in the social science context is useful.
Interpretative approaches are usually equated with qualitative ones, and rationalist
approaches with quantitative ones (e.g. Raftery, 1997), although Chau et al. (1998)
argue that the interpretative is qualitative while the rationalist is usually, but not
necessarily, quantitative. To avoid the argument becoming trapped in obscure jargon,
definitions and taxonomies, a comparison between the quantitative and qualitative
approaches is provided in Table 2-1. This table shows that substantial differences exist
between these two approaches from ontological, epistemological and methodological
perspectives.
Table 2-1 Comparison between the quantitative and qualitative approaches

Varieties (known as, or including)
Quantitative: rationalistic, experimental (Bernstein, 1976; Philips, 1983)
Qualitative: hermeneutical, naturalistic (Bernstein, 1976; Philips, 1983)

Fundamental assumption
Quantitative: reality is independent of, and unaffected by, the researcher (Hathaway, 1995)
Qualitative: reality is constructed by those being studied (Hathaway, 1995)

Aim of inquiry
Quantitative: to generalize from the sample to the population (Firestone, 1987)
Qualitative: to articulate one interpretation of the views from participants (Hathaway, 1995)

Researcher's role
Quantitative: onlooker, whose personal influence is avoided
Qualitative: actor

The data to be analyzed
Quantitative: the researcher needs to preselect a set of prescribed categories, and only
data appertaining to the hypotheses, which are phrased by those categories, are
admitted (Denzin, 1971)
Qualitative: no intentionally prescribed categories that can constrain the data;
categories are derived from the data themselves (McCracken, 1988)
The attack on rationalism and the favouring of the interpretative approach in fact
originated in other social science disciplines (e.g. Hamel and Prahalad, 1994).
Some researchers argue that the interpretative approach is scientific (e.g. Stevenson
and Cooper, 1997; Sherrard, 1997), while others insist it is not (e.g. Morgan, 1996).
But most researchers now agree that both approaches can contribute to knowledge.
They mainly fall into three groups in discussing whether the two approaches can (or
should) be combined, i.e. the purists, the situationalists, and the pragmatists (Rossman
and Wilson, 1985). The purists argue that the qualitative and quantitative approaches
should not be combined because their grounding philosophies are so divergent (e.g.
Guba, 1987; Smith and Heshusius, 1986); the situationalists suggest that the choice of
method is partially determined by the nature of the research, and they alternate between
method choices (Rossman and Wilson, 1985); while the pragmatists view the two
approaches as capable of simultaneously bringing both of their strengths to bear
(Hathaway, 1995). It seems that construction management researchers are mainly
situationalists and pragmatists. Even amidst the heated paradigm debate, they still
admit that construction management research should not be governed by a research
approach monopoly.
It should be noted that whether data are soft is not decided by the approach used, but
only by the nature of the data themselves. For example, questionnaire surveys and the
statistical analysis of responses are generally considered a quantitative approach
(Hathaway, 1995; Dainty, 2007), but responses from surveys are considered soft data
(Chau et al., 1998). On the other hand, the assumption of unavoidable reliance on soft
data is one basis for interpretative approaches.
The primary criticism of interpretative approaches is that they only provide post hoc
explanations, and lack the generalization needed to build or test a theoretical statement.
For example, the paradigms of ethnomethodology and symbolic interactionism have
been criticized quite heavily within sociology for their rejection of generalizable
theory (Harriss, 1998). This lack of generalizability and testability always exposes
interpretative approaches to criticism as non-scientific, although there are already
many examples of good qualitative research.
Table 2-2 Different extents of evidence support for law, theory and hypothesis
Science Jargon
Extent of Evidence Support
Law
Never been successfully challenged
Theory
With considerable evidence but not complete uniformity of
findings, such as the theory of evolution.
Hypothesis
To be tested.
(Source: Shoemaker et al., 2004)
In research, such evidence is data. Although Cartesians believe that people can
develop explanatory scientific theories purely through reasoning, Empiricists dominate
most research disciplines nowadays and believe that empirical evidence is essential to
determine the validity or falsity of a scientific theory.
One demerit of the research wheel model is that it does not differentiate soft from hard
data, which weigh differently in providing evidence. The qualitative approach
acknowledges that its purpose is to explain the phenomena (data) within their context,
so it has little intention of providing a generalized theoretical statement or guidance
for future activities; relying on soft data is allowed and unavoidable. The quantitative
approach includes both the left and right semicircles: it can be a theory-building
approach in the inductive process or a theory/hypothesis-testing approach in the
deductive process. The strength of the evidence needs to be considered. This logic is
not substantially different from that used in the law courts to support one party's
arguments.
Some researchers suggest that theory is neither impractical nor nonessential, pointing
to theory-derived concepts now embedded in practice, such as partnering, the retail
price index, sustainable economic growth, discounting and the like. In another applied
discipline, operations management, Schmenner et al. (2009) express a similar worry,
that theories come but never go, so there are too many theories but not enough
understanding. Kaplan, an influential philosopher of social science, pointed out as
early as 1964 that the predicament of behavioural science is not the absence of theory
but its proliferation (Kaplan, 1964).
The above does not mean that, without theory, the discovery of a relationship among
variables cannot be regarded as a contribution to knowledge. For example, Kepler's
laws of planetary motion and Boyle's law of gases were accepted as laws well before
there were theories to explain why they work as they do (Schmenner and Swink,
1998). Another example is that although we have accepted Einstein's theory of
mass-energy inter-transformation, and have applied it in the use of nuclear power,
there is still no theory clearly explaining why mass can be transformed into energy
through annihilation.
1 What. This refers to which factors (variables, concepts, constructs, etc.) should
be considered as part of the explanation of the phenomena being investigated
(Whetten, 1989). The numerous construction management research exercises yielding
interview/questionnaire-based 'ten key factors' findings seem to be of this kind. A
good theory needs to include as many substantially relevant factors as possible
(comprehensiveness) while excluding trivial factors that add little or no additional
value to the understanding (parsimony) (Whetten, 1989).
2 How. This refers to how the factors identified are related to one another, typically
expressed as proposed causal relationships (Whetten, 1989).
3 Why. This building block of theory answers the question of what are the underlying
dynamics, which can be psychological, economic, social or the like, that justify the
selection of factors and the proposed causal relationships (Whetten, 1989). What and
how are merely descriptions, while why provides explanation. The essential
ingredients of a simple theory should include both parts: description and explanation.
4 Who, when and where. In a quantitative approach, these conditions place limitations
on the propositions generated from a theoretical model. In a qualitative approach,
these contextual conditions decide the meaning (Gergen, 1982).
Both Dubin (1978) and Whetten (1989) argue that there is no substantial difference
between a model and a theory, so a reference to models can also cast some light on the
use of theory.
Based on different extents of understanding of the real world, three types of decisions
can be made, i.e. intuitive, programmed and analytical (Bunn, 1984). If we already
know what to do, then we can make intuitive decisions without conscious analysis; if
no answer is immediately obvious, but we have a set of criteria, guidelines or some
other instructions from former knowledge, then we may make programmed decisions;
if the knowledge is not enough to support an intuitive or programmed decision, then
we need to analyze the problem to forecast the consequences of possible actions,
assess them, and then make analytical decisions (Raftery, 1998). When facing new
conditions for an old problem, the decisions being made may revert from intuitive
back to programmed or analytical (Raftery, 1998).
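Read as pseudocode, this taxonomy amounts to a simple routing rule. The function below is an illustrative sketch only; the predicate names are my own shorthand, not terms from Bunn (1984) or Raftery (1998).

```python
def decision_type(has_obvious_answer: bool, has_guidelines: bool) -> str:
    """Classify a decision per the three types described in the text.

    - intuitive:  the answer is already known; no conscious analysis needed
    - programmed: no obvious answer, but criteria/guidelines from prior knowledge apply
    - analytical: knowledge is insufficient; consequences must be forecast and assessed
    """
    if has_obvious_answer:
        return "intuitive"
    if has_guidelines:
        return "programmed"
    return "analytical"

# New conditions for an old problem remove the obvious answer, so the decision
# reverts from intuitive back to programmed or analytical.
print(decision_type(True, False))   # intuitive
print(decision_type(False, True))   # programmed
print(decision_type(False, False))  # analytical
```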
Hypotheses are first invented, then tested and warranted by further testing. As Hempel
(1966) suggests, scientific knowledge is "not arrived at by applying some inductive
inference procedure to antecedently collected data, but rather by what is often called
the method of hypothesis, by inventing hypotheses as tentative answers to a problem
under study, and then subjecting these to empirical test".
Popper (1959) argues that data can be used to falsify hypotheses/theories, but cannot
prove them. His illustration is that one more white swan cannot prove that all swans
are white, but a single black swan can disprove it. This logic can be illustrated by the
following deductive forms, provided by Salmon (1965) and Hempel (1966).
(1)
Hypothesis: If H is true, then so is I.
Evidence (data): I is true.
Invalid conclusion: H is true.

(2)
Hypothesis: If H is true, then so is I.
Evidence (data): I is not true.
Valid conclusion: H is not true.
(3)
Hypothesis: If P, then Q.
Evidence (data): P is true.
Valid conclusion: Q is true.
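These schemas can be checked mechanically. As a sketch in Lean 4 (used here only as a proof-checker; the statements are my own encodings of the schemas above), modus tollens, schema (2), is provable, while "affirming the consequent", schema (1), admits a counterexample:

```lean
-- Schema (2), modus tollens, is valid: from (H → I) and ¬I we may conclude ¬H.
example (H I : Prop) (h : H → I) (hni : ¬I) : ¬H :=
  fun hH => hni (h hH)

-- Schema (1), affirming the consequent, is invalid: taking H := False and
-- I := True satisfies both premises (H → I, and I) while the conclusion H fails.
example : (False → True) ∧ True ∧ ¬False :=
  ⟨fun _ => True.intro, True.intro, fun h => h⟩
```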
Thus, the popular CM theory-building research should be questioned not only from
the perspective of its data, but also from that of its reasoning logic.
Data play the role of evidence in this test, and different data weigh differently: their
quantity, variety and precision need to be taken into account, and new supporting
evidence apparently has a strong effect (Hempel, 1966). In construction management
research, it is not uncommon to use interviews and questionnaires to identify the prior
categories of variables, and quantify them to construct relationships. Then case studies
will be used to confirm these statements. After applying textbook skills, these
findings will be claimed to be objective, representative and practical, and thus the
results may be generalized to theories to guide future actions. It seems to be a
pragmatic approach since both qualitative and quantitative data (approaches) are used.
However, all these kinds of data are soft and thus share similar weaknesses. If hard
data can be applied for triangulation or confirmation, the strength of the evidence can
be substantially enhanced. Further original data, such as those from case studies, can
in fact only indicate that the interview/questionnaire-based statement can be applied in
practice; they can never prove that the statement is right.
Hempel (1966) argues that While scientific inquiry is certainly not inductive in the
narrow sense, it may be said to be inductive in a wider sense, inasmuch as it involves
the acceptance of hypotheses on the basis of data that afford no deductively
conclusive evidence for it, but lend it only more or less strong inductive support or
confirmation. Whether the support is strong or not, depends on the strength of the
28
data and the inner reasoning logic from data to theory. In accepting a newly suggested
hypothesis/theory, theoretical support may be as important as, or even more important
than, the data support. Hempel (1966) suggests that a statement of universal form,
whether empirically confirmed or as yet untested, will qualify as a law if it is implied
by an accepted theory; but even if it is empirically well confirmed and presumably
true in fact, it will not qualify as a law if it rules out certain hypothetical occurrences
which an accepted theory qualifies as possible.
2.3.4.2 Deduction
Although there is no general rule of induction, valid deductive reasoning does exist, as
indicated above. But deduction also has its weaknesses: the auxiliary hypothesis
problem (Duhem, 1954) and the infinity of deductive statements (Hempel, 1966).
In testing a hypothesis H, auxiliary assumptions A are usually involved, so the
deductive form becomes: if H and A are true, then so is I. When I is not true, this
deductive form can safely disprove H or A, or both, but only as a conjunction. Without
a confirmation that H is right, one cannot disprove A (and vice versa). Thus, strictly
construed, such a crucial experiment is impossible in science (Duhem, 1954). Runeson
(1997) argues that many social science theories may be based on motivational
assumptions, which cannot be tested, falsified or improved through empirical data.
Machlup (1978) suggests that the validity of auxiliary assumptions restricts the testing
of theories. This has led to a de facto rejection of Popper's falsification idea (Runeson,
1997).
Hempel (1966) points out that, for any given set of premises, the valid deductions are
infinite; thus, the interesting conclusion cannot simply be deduced mechanically from
the premises. Hempel (1966) further claims that the discovery of important, fruitful
theorems requires inventive ingenuity. However, contrary to inductive theories, which
can only be invented, the
L1, L2, ..., Ln (general laws)
C1, C2, ..., Ck (contextual conditions)
Therefore: E (the phenomenon to be explained)

The first case:
L: Newtonian theory
C (known): Detected celestial bodies
C (to be tested): An undetected planet
E: Explanation of the motion of Uranus
With the celestial bodies already found at that time, the motion of Uranus did not
conform to Newton's laws of gravity and motion. Newtonian theory was considered
an unchallenged law at that time, so introducing a new condition C seemed to be a
good choice to explain E. Thus, Leverrier conjectured that the irregularities in the
motion of Uranus should result from the gravitational pull of an undetected outer
planet. He calculated the required parameters of this potential planet, such as
position, mass and orbit. This prediction was strikingly confirmed by the discovery of
Neptune (Hempel, 1966)!
The second case:
L: Newtonian theory
C (known): Detected celestial bodies
C (to be tested): An undetected planet
E: Explanation of the motion of Mercury
In this case, given the already detected celestial bodies, the motion of Mercury did not conform to Newtonian theory. Leverrier again conjectured that there should be a very dense, small object between the Sun and Mercury, but no such planet could be found. Much later, with Einstein's general theory of relativity as the new L, E was successfully explained by L and C (Hempel, 1966).
Section 1.3 listed six specific objectives of this research in pursuit of its general purpose: to make an original contribution to knowledge of payment problems in the construction industry. This purpose can be approached through three sets of questions: 1) what are the payment problems and the measures to address them; 2) why do these problems arise and how are these measures expected to work; and 3) within the context of the Mainland China construction industry, what factors contribute to payment problems and what is their relative ranking?
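The third question set calls for ranking factors from survey responses. As a generic illustration only (the factor names and scores below are hypothetical, not taken from this survey), such a ranking can be computed from 5-point Likert responses with a relative importance index (RII):

```python
# Hypothetical sketch: ranking candidate factors behind payment arrears
# from 5-point Likert responses using a relative importance index (RII).
# Factor names and scores are illustrative, not from the actual survey.

def rii(scores, max_point=5):
    """RII = sum(scores) / (max_point * number of respondents)."""
    return sum(scores) / (max_point * len(scores))

responses = {
    "client cash-flow shortage":  [5, 4, 5, 4, 5],
    "disputes over work quality": [3, 2, 4, 3, 3],
    "weak contract enforcement":  [4, 5, 4, 4, 5],
}

# Sort factors from highest to lowest relative importance.
ranking = sorted(responses, key=lambda f: rii(responses[f]), reverse=True)
for factor in ranking:
    print(f"{factor}: RII = {rii(responses[factor]):.2f}")
```

The RII normalizes each factor's total score by the maximum attainable score, so factors rated by different numbers of respondents remain comparable.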
The first difficulty in doing this research is the lack of uniform terminology for comparing payment problems across jurisdictions. Different expressions, such as payment problems, financial difficulties and payment arrears, are used in different jurisdictions. The first task was therefore to define a common scope and conceptual framework so that international data would be comparable. The concept and scope of payment arrears was chosen because: 1) it is the core problem in Mainland China and also exists in other jurisdictions; 2) hard data on legislation addressing the problem can be collected and compared across jurisdictions; and 3) official hard data are available in Mainland China.
The second difficulty is the data itself. Data on measures addressing payment arrears can be obtained by reviewing regulations in different jurisdictions, but detailed data reporting the problem is not widely available internationally. An international review indicated that the problem of payment arrears does exist globally; Mainland China was chosen as the context to provide detailed data. Interviews and a questionnaire survey were also conducted to solicit opinions about payment arrears from practitioners in the Mainland China construction industry. Given the different occupational positions and conflicting interests, one difficulty was obtaining honest responses from clients, or even consultants, on a topic that may shed light on their own bad practices. In the interviews in Mainland China, most clients denied having committed payment arrears and thus felt there was no need to interview them. In the questionnaire survey, only a small number of clients and consultants gave feedback. The opinions solicited were therefore mainly from contractors and sub-contractors, and may be skewed: respondents may exaggerate the problem, over-blame clients and exculpate themselves as sellers.
The review of problems and measures in international settings and in Mainland China provided hard data that must be properly understood. Since different groups of stakeholders have different, even conflicting, interests, using hard data to indicate the existence of problems is clean and reliable. Theoretical models were then developed through game-theoretic reasoning, using basic motivational assumptions and educated guesses to build explanations. These models are applications of general laws and theories in the domain of construction management to the specific field of payment arrears in the construction industry; they supply the "how" and "why" of the theoretical contribution.
Some sub-questions and sub-hypotheses were developed from the models and answered or tested through interviews and questionnaires. This part plays a dual role: to support the suggested models through soft-data triangulation, and to derive some original findings through a deductive approach under specific conditions. The soft data was used not to construct a theory, but to support (or falsify) hard-data-based models that conform to accepted knowledge. Furthermore, the findings were derived not only from induction on soft data, but also with support from the theoretical models. The research design map is visualized in Figure 2-4.
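As one concrete illustration of testing a sub-hypothesis against questionnaire data (the method and the sample below are hypothetical, not those of this thesis), a claim such as "practitioners agree that factor X contributes to payment arrears" can be checked with an exact one-sided sign test against the neutral Likert midpoint:

```python
# Hypothetical sketch: exact one-sided sign test of Likert responses
# against the neutral midpoint (3 on a 5-point scale). Data illustrative.
from math import comb

def sign_test_p(responses, midpoint=3):
    """P(at least the observed number of 'agree' answers among the
    non-neutral ones) under the null of 50/50 agree/disagree."""
    above = sum(1 for r in responses if r > midpoint)
    below = sum(1 for r in responses if r < midpoint)
    n = above + below
    return sum(comb(n, k) for k in range(above, n + 1)) / 2 ** n

sample = [4, 5, 4, 3, 5, 4, 2, 5, 4, 4]   # illustrative survey answers
p = sign_test_p(sample)   # small p => agreement unlikely under the null
```

Here 8 of the 9 non-neutral answers agree, giving p = 10/512 ≈ 0.02, so under this illustrative data the null of indifference would be rejected at the 5% level.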
It should be noted that although the theoretical models can be applied in other jurisdictions, the specific findings on the factors contributing to payment arrears in the Mainland China construction industry, and their rankings, depend on the particular context. Domain-specific regulations, culture, the credit system, the socio-economic level, etc. are specific conditions that bear on this deductive-nomological reasoning.