
Critical analysis: a vital element in healthcare research

Article in International Journal of Behavioural and Healthcare Research · August 2015


DOI: 10.1504/IJBHR.2015.071480



104 Int. J. Behavioural and Healthcare Research, Vol. 5, Nos. 1/2, 2015

Critical analysis: a vital element in healthcare research

Charles Micallef
Ministry of Health,
15, Merchants Street, Valletta, VLT 1171, Malta
and
Kunsill Malti għall-iSport (Malta Sports Council),
Spinelli Street, Gzira, GZR 1712, Malta
Email: carmel.micallef@gov.mt
Email: miccha@onvol.net
Abstract: Critical analysis questions literature quality through positive and
negative critique. The paper guides students and novice researchers on
objective critical thinking and writing for assignment excellence and
publication acceptance respectively, and helps clinicians evaluate healthcare
and behavioural literature and thus make better decisions for their patients. The
article touches on the hierarchy of study designs and the critical appraisal
principles of causality, reliability, validity and execution, including statistical
issues. Moreover, it looks at other aspects like title appropriateness,
standardised English writing style, data presentation, referencing quality and
extraneous factors such as competing interests. Objective measurements should
also be critically evaluated. Even a review paper of other peer-reviewed
reviews has to be critically evaluated. Creating debate between authors is
recommended. Triangulation and reflexivity are important for qualitative
research rigour. Issues of originality versus repeatability and ethical aspects
including risk assessment and sample size justification are appropriately
covered. Critical evaluation questions what the research has contributed to
society. An element of scepticism is essential for critical thinking. Critical
analysis should first be applied to one’s own work by going through a set of
ask-yourself-questions.
Keywords: behavioural research; critical analysis; critical appraisal; critical
evaluation; critical thinking; critical writing; critique; healthcare research.
Reference to this paper should be made as follows: Micallef, C. (2015)
‘Critical analysis: a vital element in healthcare research’, Int. J. Behavioural
and Healthcare Research, Vol. 5, Nos. 1/2, pp.104–123.
Biographical notes: Charles Micallef graduated in Pharmacy in 1991 from the
University of Malta. He specialised in physical activity and public health at
Staffordshire University. Prior to enrolling for the Masters, he was the lead
author of ‘Assessing the capabilities of 11-year-olds for three types of basic
physical activities’. Within a year after presenting his dissertation on Zumba
exercise for weight loss in 2013, he published as a sole author: ‘The
effectiveness of an eight-week Zumba program for weight reduction in a group
of Maltese overweight and obese women’, ‘Associations of weight loss in
relation to age and body mass index in a group of Maltese overweight and
obese women during an eight-week Zumba programme’, and ‘Community
development as a possible approach for the management of diabetes mellitus
focusing on physical activity lifestyle changes: a model proposed for Maltese
people with diabetes.’ The author voluntarily supervises students’ dissertations
and reviews papers for other journals.

Copyright © 2015 Inderscience Enterprises Ltd.



1 Introduction

The paper discusses the assessment of healthcare literature quality through critical
analysis or critique. This entails more than simply negative judgement. High-standard
academic critical writing uses reasons and evidence to perform a fair and sometimes
detailed assessment in support of one's standpoint. Therefore, critical
analysis is more than just stating the strengths (merits) and weaknesses (limitations) of
your study findings; it also needs to be applied to the literature review in the introduction
or literature review section of your paper or thesis. In fact, the word 'review' means a
critical appraisal of a piece of work. In other words, the literature review, and research in
general, are expected to be more than mere descriptions of other researchers' findings.
When one shows that he/she is able to think critically and objectively about an issue
and to present a well-constructed argument to support a point of view, the possibility of
having his/her work accepted for publication in a scientific journal is high, even if there
are no positive findings to report. A famous quote by Albert Szent-Györgyi (1893–1986)
reads, “research is to see what everybody else has seen and to think what nobody else has
thought”. Furthermore, the award of higher grades in assignments, including theses,
could also demand the application of criticism in both positive and negative forms.
However, the main aim of this article is more than merely providing guidance to
healthcare students, novice researchers, clinicians and other healthcare professionals in
obtaining high grades in assignments or in having papers accepted for publication. A
wise, busy clinician searching for evidence-based healthcare solutions for his/her patients
may go straight for systematic reviews or for reports by legitimate, international
organisations that have already evaluated and summarised the relevant studies, but what if
such evidence is lacking? This article should then prove useful in helping clinicians
arrive at better conclusions for their patients when only one or a few primary studies
(original articles) are available.
A discredited study may, however, appear with no identification of its weaknesses and
may mislead even informed critics. It is also important to know that once papers enter the
electronic literature they tend to remain there; retraction is rare, and work with
misleading conclusions or serious flaws that have not been highlighted by the authors may
continue to circulate. So, the busy practitioner needs to be informed.
Even press statements issued by organisations should not automatically be taken as
authoritative. One should find time to read the full report because an international body
may base its conclusions on whatever evidence is fed into it by the participating
countries.
Apart from the usual critical appraisal issues found in most textbooks, the paper
attempts to help the reader consider other features of academic writing and reviewing that
are normally not taken into account as being subject to critical evaluation. These could
also lead to adverse effects on preventive and curative healthcare if they are not evaluated
with a critical eye. For example, a busy clinician with poor knowledge of critical analysis
may not have time to read a 5,000-word article, and if the abstract does not adequately
and clearly summarise the findings, he/she may be tempted to base a decision on a catchy
title if unaware that titles can be misleading. Extraneous factors such as competing
interests are among the important aspects to be considered before accepting any research
proposal and findings. In addition to students, novice researchers and healthcare
professionals, overall this paper should interest academic supervisors and examiners from
various faculties (particularly those related to healthcare and behavioural sciences), ethics
committee and dissertation board members, and journal editors and reviewers.
The way this paper is structured and presented is somewhat unusual because it tries to
touch every possible aspect of critical analysis, even though in any particular case you
would not be utilising its full potential. Although a logical pattern was used, with one
subheading and corresponding section leading to the next whenever possible, each
section can be read and understood independently of the others. Some section
overlapping was unavoidable.

2 Common instances of critique applications, starting with the title

The introduction of any essay, dissertation or paper should be evaluative and critical of
the studies which have a particular bearing on your own assignment or research (Stewart
and Sampson, 2012). For example, you may think that the authors failed to identify some
limitations due to certain threats to the study’s internal validity. It may also be possible to
critically comment upon the suitability of the study design, the adequacy of the sample
size, the data collection process and so on.
Critical analysis actually starts with the title of the paper. Was the title a good
description for what was implemented or simply a sensational title to catch the readers’
attention as in newspaper headlines? The title should adequately capture the variables and
population under investigation (Polit and Hungler, 1998). For example, if a study on the
evaluation of a particular weight loss program in a selected, small group of obese
participants were given the title 'the effectiveness of a ten-week dietary and
exercise intervention in reducing excess body weight in Maltese obese women', it could
lead to a labelling issue, because such a misleading title would give the impression that it
was a national (large-scale) program or that the sample was representative of the target
population.
A title could also misguide the reader into thinking that there was a degree of
causality and that the findings were consistent (replicated several times) as for example,
‘vitamin X protects against breast cancer.’ If evidence to support causation was poor such
as when an association is identified through correlational research and the study had not
been previously performed, a more appropriate title could be, ‘study shows that vitamin
X is linked with breast cancer prevention’ or ‘relationship between vitamin X and breast
cancer among ….’ The difficulty of interpreting such findings stems from the fact that in
the real world, behaviours, states and characteristics are interrelated in complex ways. If
cause and effect is, however, suspected, one should apply Bradford Hill's criteria for
determining causation (University of South Alabama, n.d.).
The next thing an examiner or reviewer probably looks at is the standard of scientific
English used; whether it complies with the specified writing style. Grammatically
incorrect English, especially if it also lacks a coordinated flow of text, could give a
feeling that the paper is not going in any particular direction. Using vague (imprecise)
statements like, “until the last quarter of 2013, most of our patients received heparin” is
also not recommended in scientific writing.

3 Appraising the evidence helps put the right findings into practice

Researchers should be very cautious in the interpretation of their findings or the results of
other authors. Probability terms like, ‘it is likely’ or ‘unlikely’, and other tentative terms
should be used when appropriate. Jumping to premature conclusions can have serious
repercussions on healthcare.
To illustrate what it means to put findings into practice let us consider a study on
Ebola transmission. What seems to be reported by some authorities as positive findings
resulting from studies on non-human primates, whereby the virus, under the specified
experimental conditions, was found to be non-transmissible via an airborne route
(Alimonti et al., 2014), should still not be translated as directly applying to humans. Even
if the scientific community manages to find healthy volunteers for experimental research
on Ebola transmission (which is most unlikely!), it could take legitimate organisations
quite some time to evaluate the replicated and consistent results of several studies
on human subjects before issuing any public statements that Ebola cannot be transmitted
in humans through coughing and sneezing. Still, one can question why the subjects were
not studied in real life circumstances. This does not necessarily invalidate the studies
themselves but may cast doubt on the applicability of the research findings to practice.
Knowledge of critical analysis therefore helps researchers thoroughly evaluate the
available literature and their own works in the best ways possible. This implies that
eventually they should be able to implement the right findings in a relatively safe
patient-centred approach. As a general rule, especially when facing any healthcare threat,
it is critical to draw conclusions and make decisions based on facts.

4 Critical writing as a skill

There is no need to feel hesitant about criticising published work. Of course, a negative
critical evaluation should ideally be balanced with a positive one. Therefore, do not
refrain from also stating the study’s strengths.
Critical writing is a skill that does not come automatically when writing your doctoral
thesis. One has to start practising it, preferably in all assignments at master's level and, to
some extent, at undergraduate level as well. Whether you are faced with an original
research article (primary study) or a review article (secondary study), follow this simple
advice when trying to evaluate it. First imagine that the paper was written by your
adversary who is competing with you for the same post. What would you do? You would
probably make sure that no irregularities in his/her paper remain unnoticed. However, it
is still important to maintain sensitivity when handling negative comments. Tentative
(cautious) language is an important feature of academic writing. For example, it is more
appropriate to write: “as …, there appears to be an error in this statement”, instead of, “as
…, this statement is not true.”
Then switch to imagining that you depend on this author for a promotion or job
qualification. You would now probably make every effort to highlight each and every
positive aspect of his/her paper and give praise accordingly. The accent here is on the
word ‘accordingly’. Make sure you do not exaggerate; too much positive criticism with
lengthy sweet phrases will also spoil your work. Just point out the strengths without
unnecessary adjectives and justify why you consider them as strengths. For example, “as
self-reported data generally underestimated the prevalence of obesity (World Health
Organization, 2007), weight measurements were recorded objectively by the researcher."
When it comes to appraising your own work in hope of identifying all its weaknesses
and other areas that are subject to further improvements, there is no better way to learn
how to revise it than to perform critical analysis on other authors’ works. You will find
that your critical eye works much better when it is focused on their works than it does
when it is focused on your assignment or manuscript. You can be more objective when
looking at someone else’s work and you can see more easily what has gone wrong in
their papers and how you could improve their reports. When you practise these skills on
someone else's paper, you become more proficient at practising them on your own work
(Institute for Writing and Rhetoric, 2014).
An assignment which is too descriptive would probably be very boring to read.
Adding quality critique in your introduction or literature review and discussion sections
spices your work and makes the reader/reviewer/examiner want to continue reading your
paper.
A common mistake among students is making grandiose claims to support
their conclusions, such as when they do not duly consider the threats to their research's
internal validity. Another example is when a correlational finding (association) is
confused with causation. The generalisability of conclusions (external validity) offers
another opportunity for critical appraisal. These are just a few aspects of critical appraisal
of the literature. Although they are all covered in standard textbooks (Crombie, 1996;
Gosall and Gosall, 2012; Greenhalgh, 2014; Straus et al., 2011), journal articles
(Greenhalgh, 1997a, 1997b; Greenhalgh and Taylor, 1997) and web-resources (Cardiff
University, 2013; McMaster University, 2008; University of South Australia, 2014), the
ability to use these tools is not always straightforward and does not come overnight – it
needs practising. This is felt all the more when no specific set of standard evaluation
questions is available, as will be seen in the next section.

5 Even a review of reviews deserves critique

Let us consider the open-access paper by Ding and Gebel (2012), ‘Built environment,
physical activity, and obesity: what have we learned from reviewing the literature?’ Take
note of the overall structure of the paper and how they clearly explained their search
techniques, plus the clear presentation of the tables. Observe how the authors
conducted the critical analysis and gave their recommendations. They reported the
weaknesses of other review articles and supported their standpoint as follows: “… few
reviews assessed the methodological quality of the primary studies, and some did not
report critical information, such as data sources, the time frame for the literature search,
or the total number of studies included. Future reviews should adopt a more systematic
review methodology to assist in the synthesis of the evidence.”
As the authors assessed review papers, at first glance one would expect them to use a
validated critical appraisal tool such as the Critical Appraisal Skills Programme (CASP)
(Stewart, 2010). Rightly so, as they were only interested in assessing how these review
articles delivered the relationships of the built environment with respect to physical
activity and obesity aspects (and not in the quality of the reviews per se), they had to
devise eight specific evaluation questions.

One would expect that, being a double-blind peer-reviewed review paper of other
peer-reviewed review articles, this paper should be flawless. Nevertheless, it has some
imperfections. The authors had a habit of using the personal term 'we', as in, "we
searched the literature for peer-reviewed review articles that were published in English
from January 1990 till July 2011.” A more appropriate scientific way of writing this
statement would be: “peer-reviewed review articles that were published in English
between January 1990 and July 2011 were searched.” The article also lacks a short,
general conclusion.

6 Create debate between authors

At times, even critique after critique could be dull to read. This could be overcome by
discreetly creating debate between authors of contradicting findings or opposing
opinions, though you would still need to take a position to support your argument. For
instance, suppose that the author of a particular study is, in your opinion, making
over-reaching claims because he/she did not investigate some necessary aspect of the
study. Then, through further literature searching, you find a paper that supports your
thinking. Therefore, your
write-up may look something like this: “whereas author X (2011) was claiming that B
was the outcome of A, as argued by author Y (2013), it is still early to conclude that A
was causing B because the study did not have an appropriate control group.”
Ding and Gebel (2012) identified that review studies had to be more specific in the
reporting of their findings: “… almost half of the reviews either combined adults with
youth or did not specify target age groups.” However, the authors wanted to support their
beliefs by quoting other authors who had previously arrived at this conclusion; this
automatically engaged other researchers in the debate: “… to avoid misleading
conclusions, reviews should focus on one age group, or stratify studies by age (Ding
et al., 2011; Wong et al., 2011).”

7 Evidence-based healthcare: challenging study design rankings

No account of critical analysis is complete without a touch of evidence-based healthcare,
which is widely accepted as the ideal practice for patients to receive the best
clinical management or intervention. With experience you would also learn how to
challenge what apparently looks as scientifically ideal.
Some epidemiologists and experimental researchers have a tendency to take the
hierarchy of study types and designs as gospel. Overviews or secondary studies (systematic
reviews and meta-analyses) followed by randomised controlled trials (RCTs) have
traditionally been regarded as the best quality study types to assess evidence of
effectiveness.
However, the ranking system of study types does not always apply smoothly in
practice. As Stewart (2010) pointed out, a well-designed cohort study may provide better
evidence than a badly conducted RCT. Every-Palmer and Howick (2014) explained how
a number of industry-funded randomised trials in pharmaceutical research have been
corrupted by vested interests involved in the choice of hypothesis tested, in the
manipulation of study design and in the selective reporting of such trials. The authors
suggested that evidence-ranking schemes need to be modified to take industry bias into
account.
When comparing two drugs, it could be that one drug had more therapeutic effects
than the other not because it was pharmacologically more potent but due to formulation
properties that adversely affected the distribution (dispersion) process of the seemingly
inferior drug. Therefore, a basic understanding of pharmacokinetics is also necessary
when reviewing medical literature. Researchers could use a comparator drug product with
formulation problems in order to favour the drug under investigation.
Many researchers opt for prevalence studies. Let us see what Ding and Gebel (2012)
had to say: “… evidence has come from cross-sectional studies, which cannot provide
strong support for causality.” … “More longitudinal studies are encouraged because they
account for temporal order.”
Uncontrolled experimental studies, case reports and case series are generally faster
and more convenient to perform than prevalence studies. They usually have in common a
before-after or repeated measures approach with no controls and as expected, rank poorly
in the hierarchy. There are some journals in which the authors’ guidelines specifically
state that studies with no controls would be rejected upon submission.
Controls are especially important when rigorously testing the effectiveness of drugs,
vaccines and other interventions. However, there can be circumstances when
uncontrolled approaches are justified as with the pre-test post-test single group study
regarding the effectiveness of a Zumba program on body composition (Micallef, 2014a).
In such behavioural science research involving subjects under free-living conditions,
blinding and placebo controls cannot be performed and it is practically impossible to
isolate program and control groups from each other in order to prevent social interaction
threats, such as compensatory rivalry and resentful demoralisation, from occurring.
Moreover, the researcher could never know what the subjects were doing in their private
lives.
Case reports and case series too should not be negatively labelled as long as they are
performed for the preliminary evaluation of novel therapies or when a controlled trial is
neither logistically feasible nor ethically justifiable.
Professional reviewers often use templates with validated checklists when critically
appraising studies. Each appraisal tool is usually specific to a particular study type and
although the questions vary for different studies, each reviewer normally has two main
questions to investigate:
• What did the authors actually find?
• Should their findings be trusted?
Reference was earlier made to the CASP which contains questions for assessing reviews
(see ‘even a review of reviews deserves critique’). Some scales have also been developed
to gauge the quality of studies such as the Jadad Scale which is used to assess the quality
of RCTs (Royal Australasian College of Surgeons, n.d.).

8 Targets for assessing evidence of effectiveness

When assessing the strength of evidence of effectiveness, critical appraisal should target
mostly the quality of study design and execution. Reliability, which is the extent to which
the study results can be replicated to obtain a constant result, and validity, are important
factors to be considered when assessing the design quality (Stewart, 2010). We have
already seen that a badly conducted (poor quality) RCT can become listed below a cohort
study.
Validity, when applied to research tools, refers to how accurately they actually
measure what they are required to measure. If a questionnaire is expected to explore the
pharmacists’ views in dispensing antibiotics without prescriptions and only consists of
questions that examine their knowledge of pharmacology on antibiotics, it would not be
the right tool for the research’s aim. Internal validity is relevant when the resultant
differences are due only to the hypothesised effect whereas external validity refers to how
generalisable the results are to their target population (Stewart, 2010). The reader is
advised to acquaint him/herself well about the various threats to validity, types of bias
and confounding that could occur in research.
The study execution refers to factors related to the actual outcome measurements
including adequate frequency and duration of the intervention, instrumentation, data
analysis and interpretation of the results. Even objective measurements, of energy
balance for example, need to be critically assessed. Although 37 obesity researchers and
experts reported that objective measurements should be used in preference to
self-reported measurements, which they rightly criticised as too inaccurate (Dhurandhar
et al., 2015), it could be that instruments for energy cost studies, such as heart rate
telemetry and open-circuit spirometry, would encumber the subjects' physical activity
movements, thus lowering their energy expenditures (Micallef, 2014a).
One should also be careful not to condemn a new intervention which lacks sufficient
evidence of effectiveness. As Crawford et al. (2002) noted, the absence of evidence
should not be mistaken for the absence of effect.
A single study could represent reasonable evidence, but its strength of evidence
remains limited. A large number of studies constitute a stronger body of evidence
because several replications reduce the likelihood that the results of individual studies
could be caused by chance or due to bias.
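The arithmetic behind this point can be sketched briefly. Assuming each study tests a true null hypothesis at the conventional 0.05 significance level and the studies are independent (both are idealising assumptions, not claims about any study cited here), the chance of a spurious finding surviving shrinks rapidly with replication:

```python
# Probability that a purely chance (false-positive) finding survives k
# independent replications, when each study tests a true null hypothesis
# at significance level alpha.
def chance_finding_survives(alpha, k):
    return alpha ** k

print(f"one study:     {chance_finding_survives(0.05, 1):.4f}")   # 0.0500
print(f"three studies: {chance_finding_survives(0.05, 3):.6f}")   # 0.000125
```

Three concordant studies leave only about one chance in 8,000 that the result is a fluke, which is why a body of replicated evidence is so much stronger than a single positive study.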

9 Statistical issues including data presentation

A good understanding of statistical terms such as, probability or significance value (P),
confidence interval (CI), effect size (ES), t-test, analysis of variance (ANOVA), analysis
of covariance (ANCOVA), multiple linear regression, and epidemiological associations
like relative risk (RR), absolute risk (AR) and odds ratio (OR), is essential when
evaluating the study's execution. Type 1 and type 2 errors should be recognised when
interpreting the P-value, which should be considered alongside the CI around the
measure. Analyses involving categorical variables often include chi-squared (χ2) tests
and logistic regression.
A critical evaluation of statistical findings may question the level of significance
chosen and why CI and ES have been excluded. Stewart (2010) advised that for critical
studies, such as treatment trials, statistical significance is best set at P < 0.01 instead of
< 0.05. However, it is important to know that statistically significant results are not
necessarily clinically (practically) significant. A reduction in the mean diastolic blood
pressure of a group of adults, from 110 to 100 mmHg, may have a P-value of < 0.0005
but would still be above the healthy level. Another way of testing hypotheses is through
the CI. Moreover, when it comes to judging clinical significance for an intervention,
Sturmberg and Topolski (2014) recommended the calculation of the ES.
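The blood pressure example can be worked through numerically. The standard deviation below is an assumption chosen for illustration, not a figure from any cited study; the point is that a large, highly significant effect size can coexist with a clinically inadequate outcome:

```python
# Illustrative only: the SD is assumed. A fall in mean diastolic BP from
# 110 to 100 mmHg can be highly statistically significant yet leave the
# group hypertensive (healthy diastolic is usually taken as below ~90 mmHg).
mean_before, mean_after, sd = 110.0, 100.0, 8.0

cohens_d = (mean_before - mean_after) / sd  # one common effect size measure
print(f"Cohen's d = {cohens_d:.2f}")        # 1.25 -> a large effect
print(f"within healthy range: {mean_after < 90}")  # False
```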
Sturmberg and Topolski (2014) also warned against the misrepresentation of findings
through statistics. Manipulation of the denominator could help the overselling of
seemingly superior therapeutic products. The example given by the authors involved a
scenario in which the percentage of people dying from a condition initially appeared very
high (one person out of four) because the denominator included only the four people then
known to be affected. However, after eight more people with the condition were
identified (now 12 in all), with still only one death, the mortality rate appeared to fall
from 25% to 8.3%, when in reality nothing had been gained by identifying more affected
people because the mortality in the whole community was unchanged.
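The arithmetic of this fallacy is easy to verify: the numerator (one death) never changes, only the denominator does.

```python
# The denominator fallacy: one death among four known cases looks like 25%
# mortality; finding eight more (non-fatal) cases makes the same single
# death look like 8.3%, although nothing about the disease has changed.
def mortality_rate(deaths, known_cases):
    return deaths / known_cases

before = mortality_rate(1, 4)
after = mortality_rate(1, 12)
print(f"{before:.1%} -> {after:.1%}")  # 25.0% -> 8.3%
```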
Among other statistical fallacies that Sturmberg and Topolski (2014) continued to
highlight were two that cannot be ignored. Firstly, the randomisation process
underpinning the RCT aims to stratify subjects by a set of pre-defined characteristics and
assumes that people are predictable mechanistic entities when in reality the human
body behaves in complex adaptive ways, so that its response to challenges is
non-deterministic. Then, there was the issue of relative versus absolute statistics.
Researchers may present results that are most likely to impress by reporting a reduction
in RR rather than using the true or AR. Differences between the intervention and control
arms of a study can be magnified if the relative difference between the two groups, rather
than the more meaningful absolute difference, is reported.
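A hypothetical worked example (the risk figures are invented, not taken from any cited trial) shows how the relative difference can flatter an intervention:

```python
# Hypothetical trial: event risk falls from 2% (control) to 1% (treatment).
control_risk, treated_risk = 0.02, 0.01

rrr = (control_risk - treated_risk) / control_risk  # relative risk reduction
arr = control_risk - treated_risk                   # absolute risk reduction

print(f"relative risk reduction: {rrr:.0%}")   # 50% -- sounds impressive
print(f"absolute risk reduction: {arr:.0%}")   # 1%  -- the meaningful figure
print(f"number needed to treat:  {1 / arr:.0f}")  # 100
```

The same trial can be reported as "halving the risk" or as "preventing one event per 100 patients treated"; the critical reader should always look for the absolute figure.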
Students and novice researchers are inclined to leave the statistical calculations in
the hands of a statistician and then end up obliged to include him/her as part of the
study's authorship during the publication process. One can see statisticians featuring as
co-authors in a wide range of studies that cover completely different topics. To openly
involve a statistician as part of a research team is acceptable, but to then claim that your
dissertation is all your own work is cheating.
A student with poor knowledge of statistics is easily noticed as early as the
literature review (that is, before the data analysis stage): in trying to critically evaluate
the data gathered from various studies, he/she simply states whether sample sizes were
sufficiently large and quotes the statistical results as presented in the literature, without
at least harmonising the data into a standardised form for the sake of clear comparison.
Judging the sample size, and presenting all drug therapeutic levels from the various
studies in one commonly applied unit of concentration in table form without going into
further statistical evaluation, is the least one could do, and yet some students fail even
this elementary task.
Finally, this section would be incomplete without a brief discussion on how data can
be presented. Tables and graphs accompanied by appropriate and concise text (captions),
and numbered accordingly, should serve as visual aids for the reader to rapidly
understand what you are trying to convey in the text. They can also work against you,
being entities that easily catch the critical eye of the reader. Lengthy and complicated tables are
not recommended and ideally should be split into smaller ones whereas full tabulated
data may be included in appendices (Stewart and Sampson, 2012).
Figures usually take the form of various graphs like histograms, scatter-plots and pie
charts. Three-dimensional and other special effects can detract from easy and accurate
understanding (Stewart, 2010). Colour should only be used if essential. Healthcare
research papers, especially case reports and case series, also use photographs as figures.
In any case, high-resolution figures should be produced.

10 Qualitative research and mixed methods approach

Although several researchers and doctors have traditionally been reluctant to go beyond
quantitative methods involving statistical figures, a good qualitative study can still
address a clinical problem. This can be achieved by using a clearly formulated question,
using more than one research method (triangulation), and having the data independently
coded and analysed by more than one researcher as a 'quality control' to confirm that
they assigned the same interpretations (Greenhalgh and Taylor, 1997). For example, "the
focus groups' data of people with diabetes were cross-checked with the information
gathered through the postal survey and the hospital records." Quasi-statistical procedures
should also be used to validate the findings (Polit and Hungler, 1998).
Respondent validation, or member checking, secures construct validity. For example,
to ensure that the researcher fully understood the views and perceptions that emerged
following a focus group theme discussion, a verification process was applied whereby the
accuracy of the findings was checked with the participants.
The context of the phenomenon under investigation should be adequately described.
Furthermore, the report should give the reader a clear picture of the social world of the
people under study (Polit and Hungler, 1998).
As a general rule, the interpretative researcher should apply reflexivity so that readers
can make greater sense of the analysis presented. Maintaining a sceptical approach to
the evidence acquired is important: for example, were you told what you wanted to hear
(Carter and Henderson, 2005)? Unconscious nodding by the researcher and regular
'yeah' and 'right' replies could be interpreted as agreeing with the subject and thus act
as an element of bias or false information (Gratton and Jones, 2010). According to
Willig (2013), personal reflexivity is when the researcher reflects upon the ways in which
his/her own values, experiences, interests and beliefs could have shaped the research and
how the research may have affected him/her, whereas in epistemological reflexivity
the researcher should think about the implications of his/her assumptions by engaging
with questions such as:
• How has the research question defined and limited what can be found?
• How could the research have been investigated differently?
Further food for thought: what if the researcher, acting as an 'observation instrument',
became better over time at making the observations, resulting in an instrumentation
threat? Was the empirical evidence obtained through observations verified with other
observers? This is important because there is a tendency to believe what you see.
Whether the study deliberately recruited a reasonable number of individuals who truly
fit the bill, whether data collection continued until saturation occurred (that is, when
new information no longer provided further insight), and whether the data were analysed
through a systematic process (for example, content analysis) are other factors to be
considered.
Furthermore, if grounded theory was used for the construction of a new theory, was it
used appropriately?
In either case, whether it is content analysis or grounded theory, Merriam (2009)
emphasised that qualitative data collection and analysis should be undertaken
concurrently. Otherwise, it is not only overwhelming (imagine that all data collection is
done and you are trying to deal with a pile of interview transcripts and field notes
from your on-site observations plus a box-file full of relevant documents and literature),
but it also jeopardises the potential for richer data and more valuable findings.
As with quantitative research, apart from replication, are the findings of qualitative
research transferable to other clinical settings? One of the commonest criticisms is when
they pertain only to the limited setting in which they were obtained (Greenhalgh and
Taylor, 1997).
There is also the issue of mixed methods research. Although quantitative methods
alone can be insufficient for the evaluation of certain interventions, mixed methods may,
on the other hand, produce contradictory results. Nevertheless, pluralistic evaluation
normally accumulates evidence from a variety of different sources. In any case, whether
a study is primarily quantitative or qualitative, the question of whether it could have
been strengthened by mixed methods often crops up.

11 Construct validity

Normally, construct validity is applied to social sciences where subjectivity is involved
but, according to Trochim and Donnelly (2008), it is not limited to psychological
measures. They showed that construct validity also applies to the intervention itself. For
example, was the program a true weight-loss program, or did the results only reflect a
peculiar version of the program that was held in a single place at a particular period?

12 Originality versus repeatability: issues of practicality and creativity

There is no doubt that a researcher who is creative in his/her work earns high respect.
There are journals that instruct authors to specifically include a subheading saying what
the study has added to the existing knowledge on the subject.
Although the strength of quantitative research lies in its reliability, admittedly,
nobody likes to read a repetition of the same steps taken by previous researchers.
However, there are instances where repeatability also earns credit. For example, if a
second study derived the same results as the first but conducted the research either after
a relevant campaign or through a different methodology, then in both scenarios there
would be a degree of originality: either by establishing whether the campaign was
effective, or by verifying the original results through a different pathway. Strictly
speaking, it would be wrong to say that a study was replicated if it did not follow
exactly in the footsteps of its predecessor.
The issue of repeatability could be taken one step forward. A repeated study could be
scientifically sound (well designed with sufficiently robust methodology) and
academically justified (replicated preliminary data which was then no longer
inconclusive) but could lack national and even global interest. For example, suppose a
country was experiencing an influx of immigrants suspected of carrying, for the first
time, a contagious disease for which no cure existed. Preliminary screening tests
revealed that 20% were infected with a lethal pathogen. Further thorough clinical
investigations confirmed this figure. Both findings were published and acknowledged by
the scientific research community.
However, as the authorities were unprepared to deal with a sudden outbreak of such
magnitude, the affected country was in dire need of effective solutions, and public health
experts would have done a more useful job evaluating possible interventions to control
the disease than repeating practically the same prevalence studies ad nauseam.
Irrespective of whether the findings can be generalised to the whole population, one
could therefore criticise a study as having little or no practical value to the host country,
especially if it was conducted in a state-owned university or financed by the nation or
other sources whose funds could have been used for more fruitful research that would
benefit society. On the other hand, whereas researchers should avoid unnecessary
replications, they should also not leap several steps ahead when the foundation is
insecure (Polit and Hungler, 1998).
Even when it comes to research originality, a novel model created just for the sake of
being different from conventional therapy, without having value (importance or
usefulness), does not pertain to creativity (DeBono, 2006). An unusual model for people
with diabetes based on community development can be put into practice (Micallef,
2014b) and therefore earns credit for its creativity, but someone who proposes triangular
room doors instead of rectangular ones would not be credited with creative thinking
unless the proposal can be shown to possess value.

13 Was the right target population selected?

A study was conducted to gather as much valuable data as possible on the symptoms
(if any) of cervical carcinoma. Irrespective of whether the researchers were looking for
survey respondents or clinical subjects, it would be of little scientific or medical value to
select all age groups of women, apart from being considered unethical conduct. The
population of interest should be women aged 25–69 years who are at risk of this type of
cancer (Bonita et al., 2006).
Researchers know that university students can be relatively easy subjects for
research, in the sense that they mostly comply with research instructions and are
unlikely to drop out of the program. Hence, it is common to see studies whose
recruitment criteria call for young and apparently healthy volunteers. Such studies may
provide little advancement for clinical treatments.
Unless reasonably justified, gender inequality could be another ground for criticism.
A national study on sexual health aspects encountered by adults could be incomplete if it
only randomly selected clear-cut genders (males and females) without employing
stratified sampling for gender-variant (transgender) people. On the other hand, a study
on reaction time in a representative sample of schoolboys excluded girls from
participating. However, in the limitations section the authors admitted that a single-sex
study had to be conducted due to the religious culture of the country, so it was
acceptable in that sense, although obviously one cannot infer that their results apply to
all schoolchildren.

14 Ethical issues, risk assessments and sample size justification

When assessing for any breach of ethical standards, you should first check whether the
study was ethically approved at both institutional and national levels according to the
Helsinki Declaration of 1975 and its subsequent revisions. Then, after thoroughly
reading the paper, express your moral views without hesitation; these should not be
limited to the usual written informed consent of the volunteers and the preservation of
confidentiality. For example, if any risks to subjects were foreseeable, such as cardiac
events during vigorous exercise in adults, certain control measures would be expected to
address them. Such precautions could include the presentation of medical clearance
certificates and age-capping recruitment measures. A risk assessment resource with
tables, such as the one provided by Staffordshire University (1998), can provide useful
guidance here.
In the previous section, we saw that researchers who tend, without justification, to
grab hold of whatever category of human subjects they can easily lay their hands upon
for their research, or who intentionally leave out specific subgroups, could be seen as
performing scientifically and morally wrong procedures.
It is also unethical to undertake a study with an unnecessarily large sample of
subjects: it could be a waste of time, money and human resources. On the other hand,
whereas large differences can be detected in small samples, small differences can only
be identified in large samples (Sturmberg and Topolski, 2014). Authors should justify
the sample size that allowed them to gain reliable insights, through a priori calculations
whenever possible (Stewart, 2010). They should also take into account any expected
undesirable outcomes such as dropouts and poor response rates.
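As a rough sketch of what such an a priori calculation looks like, the snippet below uses the common normal-approximation formula for comparing two group means, n per group = 2((z_α + z_β)/d)², where d is the standardised effect size (Cohen's d). The effect sizes, the α of 0.05, the 80% power and the 20% dropout rate are illustrative assumptions, not figures from this paper:

```python
# A minimal a priori sample-size sketch for a two-sided, two-sample
# comparison of means, using only the Python standard library.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per arm: n = 2 * ((z_alpha + z_beta) / d) ** 2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) needs far more subjects than a large one (d = 0.8):
print(sample_size_per_group(0.5))  # 63 per group
print(sample_size_per_group(0.8))  # 25 per group

# Inflating the medium-effect figure for an anticipated 20% dropout rate:
print(ceil(sample_size_per_group(0.5) / (1 - 0.20)))  # 79 per group recruited
```

Note how the dropout adjustment is applied on top of the calculated figure, in line with the advice above that expected undesirable outcomes should be taken into account.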
The randomisation of patients suffering from serious illnesses into intervention and
control groups is also subject to ethical controversy, such as when control subjects are
deprived of treatment or when one group receives a treatment that has already been
shown to be inferior to that of the other group. One method used by some pharmaceutical
companies to get the results they want from clinical trials is to compare the drugs under
study with treatments known to be inferior (Smith, 2005). The vested interests in
industry-funded trials have been discussed by Every-Palmer and Howick (2014) in
'evidence-based healthcare: challenging study design rankings'.
A word of advice may be useful here. Although both are subject to scrutiny, do not
confuse randomisation (random assignment) with random selection (random sampling,
also called probability sampling), which aims to make a sample more representative of
the population (generalisability).
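A toy sketch (with a hypothetical population) may help keep the two procedures apart: random selection draws the sample from the population, supporting generalisability, while random assignment splits that sample into study arms, supporting internal validity:

```python
# Illustrative only: hypothetical population and arbitrary sample sizes.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = [f"person_{i}" for i in range(1000)]

# 1) Random SELECTION (probability sampling): every member of the
#    population has an equal chance of entering the sample.
sample = random.sample(population, 40)

# 2) Random ASSIGNMENT (randomisation): the sampled subjects are
#    shuffled and split into two equal study arms.
random.shuffle(sample)
intervention_arm = sample[:20]
control_arm = sample[20:]

print(len(intervention_arm), len(control_arm))  # 20 20
assert not set(intervention_arm) & set(control_arm)  # arms do not overlap
```

A study can do either step without the other, which is exactly why the two terms should not be mixed up when scrutinising a paper.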
Animals too have rights. Some journals request authors who experimented with
animals to declare that they have respected EU Directive 2010/63/EU.
The issue of ethics can be further stretched to include critical evaluation of authorship
rights. A researcher may start getting numerous publications as ‘honorary author’ because
he/she is head of a branch or a respected scholar.
15 Quality and quantity of references

The reference list should, if possible, also be analysed. A primary study is generally
expected to contain around 15 to 30 references, whereas a secondary study can be
allowed more than 50. Check whether the references are appropriate to the text. Were
current references used? Distinguish between secondary (high-value) research, such as
systematic reviews, and secondary (non-recommended) referencing, where authors rely
on somebody else's version of a given study.
An organisation funding research may hide a study with negative outcomes, and
some journals also prefer to publish studies that demonstrate positive findings. Stewart
(2010) described these potentially dangerous practices as publication bias. Under
'evidence-based healthcare: challenging study design rankings', selective publication was
shown to have thwarted the potential of evidence-based medicine for improving
healthcare. It is for these reasons that dissertations and other unpublished works should
not be ignored during literature searching, as such 'grey literature' could still be useful.
Comprehensive searching should ideally also cover studies in languages other than
English.
Personal communications usually have little bearing. They should only be mentioned
in the text.

16 Critique your own study

This article started with the need to acknowledge the strengths and weaknesses of your
study findings. It is imperative to critique your own study before you openly criticise
other people's work. It is therefore advisable to criticise your own study without
concealment rather than allow others to do it for you, especially if the reader is your
examiner or a reviewer deciding whether to accept your article for publication. When
you highlight and discuss your own limitations and avoid jumping to grandiose
conclusions, you are not being naïve; on the contrary, you are showing that you are a
mature, honest researcher who deserves due recognition for hard work even with no
positive results to reveal.

17 Extraneous factors

One should also try to analyse factors that are unrelated to the study per se. For example,
was there any conflict of interest? Drug companies often sponsor researchers to evaluate
their medicinal products, and ruling out competing interests could therefore be hard. It
has been briefly shown that a number of industry-funded studies can be associated with
certain flaws (see 'evidence-based healthcare: challenging study design rankings'). As
Every-Palmer and Howick (2014) suggested, more investment in independent research is
required. In addition to financial gain, the welfare of patients or the validity of research
may be influenced by other secondary interests such as personal rivalry.
Try to delve into the journal's history, editorial board and instructions for authors.
How long has the journal been established and what is its acceptance rate (if available)?
Some journals boast of a rejection rate of 90%. If metrics like the impact factor and
h-index are available, take note of them.
Was the article or book peer-reviewed through a double-blind reviewing process?
Apart from journal articles, the paper could be a chapter in an edited book or it could be a
whole dissertation published in the form of a textbook. For blind reviewing, any form of
personal identification including acknowledgements and conflicts of interest that could
somehow affect the reviewers’ judgements should be submitted separately and not in the
same file containing the manuscript. Furthermore, the two reviewers have to be chosen
independently so as not to influence each other. The term ‘double-blind’ is also used in
connection with RCTs when neither the subjects nor those who administer the treatment
know who is in the experimental or control group.
If it is an open-access article (which carried a publication fee), could it be that the
lead or corresponding author was asked to decide whether to publish the paper via the
open-access route or through the restricted procedure before the editor-in-chief evaluated
its suitability for the journal? Ideally, authors should be able to choose the open-access
option only after acceptance for publication, to ensure the decision has no influence on
the acceptance process.
Was the journal's editorial office (especially the editor-in-chief and associate editors)
related to the main author's academic institution? Do not let your evaluation be
influenced by the authors' academic profiles and affiliations. Rightly so, several journals
do not publish the authors' qualifications, and in most papers the corresponding author
also has to quantify what each author contributed to the study. We saw in 'ethical issues,
risk assessments and sample size justification' that some authors are added simply
because they are important people.

18 The advantages of critique: a four-fold function

It could be that external validity is not an issue, as when dealing with laboratory-based
experiments. Moreover, in view of not exceeding the word count, you would probably be
selective and focus only on analysing the threats to internal validity and the quality of
execution in the mini literature review (introduction) and discussion sections of your
paper. So, in practice you may directly utilise only a small fraction of critical appraisal.
This, however, does not mean that you need not inform yourself about the full spectrum
of critical analysis.
The benefits of critiquing are four-fold. First, at student level, it gives you the power
to express critical judgements over the works of other authors in your literature review
section or chapter. This judgemental power is also utilised if you are reviewing a paper
for journal publication, a research proposal, or an assignment (be it a small essay or a
dissertation). Secondly, it helps you acknowledge your own study limitations in the
discussion section or chapter before others highlight them for you. Thirdly, as you
further master critical thinking, it automatically helps you perfect your whole work by
attending carefully to every detail, since it is understood that you would not want others
to criticise it. Under 'critical writing as a skill' it was explained how the critical eye
becomes cultivated for excellence the more you practise critical analysis on other
authors' works. Finally, when a researcher or healthcare professional is confident in
evaluating the literature, including his/her own work, the findings can be implemented
in the best possible way for the benefit of the patient.
19 Ask-yourself-questions

The following questions can help you aim for perfection in your work. They could assist
in improving your work (whenever possible), in finding and accepting your limitations,
in avoiding negative criticism as much as possible, and in increasing the chances of
higher academic marks or of publication acceptance. They can also help in reviewing or
performing critical analysis on someone else's work.
1 Is the title of the paper truly scientific and concise?
2 Does the abstract clearly summarise the main work and highlight the key findings to
encourage the reader to read the whole assignment or paper?
3 If it is a review paper, was it systematically performed by attempting to cover all
studies, published and unpublished, according to an established system that would
enable other persons to follow the same process and reach similar conclusions?
4 If you are dealing with a meta-analysis, does it systematically pool the results of two
or more clinical trials to obtain an overall answer to a specific question?
5 If the research is qualitative, did you follow accepted qualitative design and reporting
parameters?
6 Was critical analysis liberally applied to the literature review section or chapter?
7 Did the research question(s) and aim(s) arise naturally from the evidence presented
in the introduction or literature review?
8 Have you selected the most appropriate design?
9 Is the methodology sufficiently robust with adequate control measures as much as
possible?
10 Was the sample selected from the appropriate population and sufficiently large to
show any hypothesised changes and, where generalisability is sought, was it
randomly selected?
11 Did you perform the right statistical test(s)?
12 Have you fully adhered to ethical standards?
13 Were all the study aims or objectives assessed?
14 Have you double-checked all the calculations including data in tables and graphs and
seen that readers can quickly grasp the important characteristics of the data?
15 Does the discussion reflect all the results including negative (undesirable) findings
and relate them to your own ideas and possibly, to the findings of other researchers?
16 Did you cover the strengths and weaknesses of your study and suggest what might be
done in more ideal settings?
17 Does the conclusion provide effective closure for the paper by indicating the possible
future implications of the study and by preferably leaving the reader satisfied that
everything was scientifically explained?
18 Are there sufficient, current and quality references in the reference list?
19 Have you checked that the references match your in-text citations and that they all
conform to the latest edition of referencing style used or as specifically demanded by
the journal?
20 Overall, is the report written in an objective, unambiguous style, with correct
grammar, tentative language and precise statements, and logically presented through
a coordinated flow of information?

20 Conclusions and clarifications

It is hoped that the reader has realised that published literature is not infallible. As
Greenhalgh (1997a) admitted, some published papers cannot be used to inform practice
and belong in the bin. We saw that 'even a review of reviews deserves critique' and,
under 'targets for assessing evidence of effectiveness', that a report co-signed by several
experts (Dhurandhar et al., 2015) was not immune to critical analysis. Perhaps the flaws
related to industry-funded research are the most likely to be remembered but, as
Every-Palmer and Howick (2014) admitted, all humans have biases and it would be
naïve to think that publicly-funded research is free from bias. In spite of all this, this
essay in no way undermines the standard of most published papers.
The healthcare researcher should have at least one paper in a scientific or academic
journal. Needless to say, publishing a single-author paper and a multi-author paper as
lead author certainly adds more credit to your profile. Having an article accepted for
publication after being sieved through a rigorous critiquing process is probably more
prestigious than several descriptive and unchallenged works. Succinctness is essential in
academic writing, and indeed presenting a paper that does not exceed the stipulated
word-count limit can already be a challenge in itself. Even the academic quality of
publications in conference proceedings is usually not as high as that of peer-reviewed
papers. Although a journal paper may not be perfect, the peer review system ensures a
degree of quality control.
Researchers should adopt a somewhat sceptical attitude even to their own studies, and
this goes beyond the positivist's approach of deductive reasoning during application of
null hypotheses in quantitative research. We have seen that a sceptical approach is also
essential during the reflexivity stage of qualitative research. The researcher should
always be critical of his/her own research techniques and those of others in the search
for scientific perfection and the truth. So, in today's culture, do not take it so badly if
they label you a pessimist or associate you with Saint Thomas. You may still recall that
there is a tendency to believe what you see (see 'qualitative research and mixed methods
approach'). For doubting Thomas, it was not enough to receive the news of the risen
Jesus from his trusted friends, the disciples, or to see with his own eyes; he also wanted
to touch. Thomas' finger can be regarded as a rudimentary scientific instrument
(Dixon, 2013).
Although it is undisputed that research inferences in evidence-based healthcare are
normally carried out through objective measurements, the use of instruments,
sophisticated as can be, is still prone to instrumentation threats and measurement errors.
We have seen that instruments such as heart rate monitors can also limit the subjects’
movements.
In this account, the author tried to convey important ways of carrying out critical
analysis for successful research with appropriate cautions when necessary. The examples
given were only used to illustrate the text and as there are practically unlimited
possibilities of critique, the reader is advised that this account is by no means an
exhaustive checklist for critical analysis.
Admittedly, the article is sometimes controversial in nature. This stems from the fact
that the author attempted to explore most of the spectrum of critical analysis, going
beyond what is normally covered under the framework of critical appraisal. Discussing
critical analysis is in itself a hot topic, because all humans have a tendency to err and
researchers are no exception. So, it is understood that certain parts of the article may not
be at all pleasing to the reader if they remind him/her of something! Polit and Hungler
(1998) remarked that evaluating whether the most appropriate data collection procedure
was used can involve a degree of subjectivity. They added that issues concerning the
appropriateness of various research strategies can be topics about which even experts
disagree.
As can be seen, the author’s main objective was not to go into detail on what we
already know from standard textbooks on critical appraisal but to help the reader focus on
other aspects which are often not considered for critical analysis. However, the paper still
encourages the use of standard appraising techniques for evaluating the methodological
quality of literature. Therefore, overall it should prove to be a useful judgemental tool for
healthcare (and to a lesser extent, behavioural research) students, professionals,
researchers and academic staff in general.
Here are some further clarifications:
• Although the words ‘analysis’ and ‘appraisal’ can be used interchangeably, in most
of the text the term ‘critical analysis’ was used in preference to ‘critical appraisal’,
because unlike the latter, the former covers every aspect of a paper for its good
qualities and flaws – starting from the title and finishing off with the reference list.
• Being a commentary article does not mean that this paper was exempted from blind
peer-reviewing.
• As expected from an article of this type, its style is colloquial at times. Indeed, one
noticeable difference is that the use of personal terms was permitted.

Conflicts of interest
The author has no competing interests to declare.

Acknowledgements
I am indebted to my ex-tutors at Staffordshire University, namely Prof. Antony Stewart
and Mrs. June Sampson. From day one of my master's course in physical activity and
public health, they instilled in me the good and useful habit of applying critical
analysis in practically every assignment. Further acknowledgements go to the
Kunsill Malti għall-iSport (Malta Sports Council, KMS) and the Ministry of Health for
allowing me sufficient time to do the necessary research and preparation of this paper.
The technical support of Mr. William Galea, a KMS Executive Officer, is also
appreciated.

References
Alimonti, J., Leung, A., Jones, S., Gren, J., Qiu, X., Fernando, L., Balcewich, B., Wong, G.,
Ströher, U., Grolla, A., Strong, J. and Kobinger, G. (2014) 'Evaluation of transmission
risks associated with in vivo replication of several high containment pathogens in a
biosafety level 4 laboratory', Scientific Reports, Vol. 4, Article No. 5824.
Bonita, R., Beaglehole, R. and Kjellström, T. (2006) Basic Epidemiology, 2nd ed., World Health
Organization, Geneva.
Cardiff University (2013) Critical Appraisal of Healthcare Literature [online]
http://www.cf.ac.uk/insrv/resources/guides/inf083.pdf (accessed 5 July 2015).
Carter, S. and Henderson, L. (2005) ‘Approaches to qualitative data collection in social science’, in
Bowling, A. and Ebrahim, S. (Eds.): Handbook of Health Research Methods: Investigation,
Measurement and Analysis, pp.215–229, Open University Press, Berkshire.
Crawford, M.J., Rutter, D., Manley, C., Weaver, T., Bhui, K., Fulop, N. and Tyrer P. (2002)
‘Systematic review of involving patients in the planning and development of healthcare’,
British Medical Journal, Vol. 325, No. 7375, pp.1263–1265.
Crombie, I.K. (1996) The Pocket Guide to Critical Appraisal, BMJ Publishing Group, London.
DeBono, E. (2006) Expert on Creative Thinking [online]
https://www.youtube.com/watch?v=UjSjZOjNIJg (accessed 5 July 2015).
Dhurandhar, N.V., Schoeller, D., Brown, A.W., Heymsfield, S.B., Thomas, D., Sørensen, T.I.,
Speakman, J.R., Jeansonne, M., Allison, D.B. and Energy Balance Measurement Working
Group (2015) ‘Energy balance measurement: when something is not better than nothing’,
International Journal of Obesity, Vol. 39, No. 7, pp.1109–1113.
Ding, D. and Gebel, K. (2012) ‘Built environment, physical activity, and obesity: what have we
learned from reviewing the literature?’, Health and Place, Vol. 18, No. 1, pp.100–105.
Dixon, T. (2013) Doubting Thomas: A Patron Saint for Scientists? [online]
http://blog.oup.com/2013/05/doubting-thomas-dawkins-dixon/ (accessed 5 July 2015).
Every-Palmer, S. and Howick, J. (2014) ‘How evidence-based medicine is failing due to biased
trials and selective publication’, Journal of Evaluation in Clinical Practice, Vol. 20, No. 6,
pp.908–914.
Gosall, N.K. and Gosall, G.S. (2012) The Doctor’s Guide to Critical Appraisal, 3rd ed., Pastest
Ltd., Cheshire.
Gratton, C. and Jones, I. (2010) Research Methods for Sport Studies, 2nd ed., Routledge, Oxford.
Greenhalgh, T. (1997a) ‘How to read a paper: getting your bearings (deciding what the paper is
about)’, British Medical Journal, Vol. 315, No. 7102, pp.243–246.
Greenhalgh, T. (1997b) ‘How to read a paper: assessing the methodological quality of published
papers’, British Medical Journal, Vol. 315, No. 7103, pp.305–308.
Greenhalgh, T. (2014) How to Read a Paper: The Basics of Evidence-based Medicine, 5th ed.,
Wiley Blackwell, Chichester.
Greenhalgh, T. and Taylor, R. (1997) ‘How to read a paper: papers that go beyond numbers
(qualitative research)’, British Medical Journal, Vol. 315, No. 7110, pp.740–743.
Institute for Writing and Rhetoric (2014) Revision: Cultivating a Critical Eye [online]
https://writing-speech.dartmouth.edu/learning/materials/materials-first-year-writers/revision-
cultivating-critical-eye (accessed 5 July 2015).
McMaster University (2008) Critical Appraisal [online]
http://fhswedge.csu.mcmaster.ca/cepftp/qasite/CriticalAppraisal.html (accessed 5 July 2015).
Merriam, S.B. (2009) Qualitative Research: A Guide to Design and Implementation, Jossey-Bass,
San Francisco, CA.
Micallef, C. (2014a) ‘The effectiveness of an eight-week Zumba programme for weight reduction
in a group of Maltese overweight and obese women’, Sport Sciences for Health, Vol. 10, No.
3, pp.211–217.
Micallef, C. (2014b) ‘Community development as a possible approach for the management of
diabetes mellitus focusing on physical activity lifestyle changes: a model proposed for Maltese
people with diabetes’, International Journal of Community Development, Vol. 2, No. 2,
pp.30–40.
Polit, D.F. and Hungler, B.P. (1998) Nursing Research: Principles and Methods, 6th ed.,
Lippincott, Philadelphia, PA.
Royal Australasian College of Surgeons (n.d.) Jadad Score [online]
http://www.anzjsurg.com/view/0/JadadScore.html (accessed 5 July 2015).
Smith, R. (2005) Medical Journals are an Extension of the Marketing Arm
of Pharmaceutical Companies [online]
http://journals.plos.org/plosmedicine/article?id=10.1371%2Fjournal.pmed.0020138
(accessed 5 July 2015).
Staffordshire University (1998) Risk Assessments (General) Policy and Guidance [online]
http://www.staffs.ac.uk/images/risk_assess_policy_tcm68-15625.pdf (accessed 5 July 2015).
Stewart, A. (2010) Basic Statistics and Epidemiology: A Practical Guide, 3rd ed., Radcliffe
Publishing, Oxford.
Stewart, A. and Sampson, J. (2012) Dissertation Handbook, Staffordshire University,
Stoke-on-Trent.
Straus, S.E., Richardson, W.S., Glasziou, P. and Haynes, R.B. (2011) Evidence-based Medicine:
How to Practice and Teach It, 4th ed., Churchill Livingstone, London.
Sturmberg, J. and Topolski, S. (2014) ‘For every complex problem, there is an answer that is clear,
simple and wrong’, Journal of Evaluation in Clinical Practice, Vol. 20, No. 6, pp.1017–1025.
Trochim, W.M.K. and Donnelly, J.P. (2008) The Research Methods Knowledge Base, 3rd ed.,
Atomic Dog, Mason, OH.
University of South Alabama (n.d.) How do Epidemiologists Determine Causality? [online]
http://www.southalabama.edu/coe/bset/johnson/bonus/Ch11/Causality%20criteria.pdf
(accessed 5 July 2015).
University of South Australia (2014) Critical Appraisal Tools [online]
http://www.unisa.edu.au/research/sansom-institute-for-health-research/research-at-the-
sansom/research-concentrations/allied-health-evidence/resources/cat/ (accessed 5 July 2015).
Willig, C. (2013) Introducing Qualitative Research in Psychology, 3rd ed., Open University Press,
Berkshire.
World Health Organization (2007) The Challenge of Obesity in the WHO European Region and the
Strategies for Response, WHO Regional Office for Europe, Copenhagen.
