
CHAPTER 1: INTRODUCTION

ROLE OF THE CLINICIAN

Role of clinician conducting assessments:


-Answer specific questions and make clear, reasonable recommendations to help improve functioning. The goal is not merely to describe a person, but rather to develop relevant answers to specific questions and present recommendations.
-Be an expert in human behaviour, deal with complex processes, and understand test scores in the context of a person's life (so not the purely objective psychometric approach!).

PATTERNS OF TEST USAGE IN CLINICAL ASSESSMENT

The time spent doing assessments has decreased over the years, probably due to the widening role of psychologists (no longer only assessing), criticism of the reliability & validity of many assessment devices, and the growth of other activities beyond the administration and interpretation of traditional tests (e.g. interviewing, observing). There has also been a decline in projective techniques (e.g. the Rorschach).

The first clinical interviews were unstructured (free association etc.); criticism of these led to objective tests, and later to structured interviewing. Another trend was neuropsychological assessment, with 2 traditions:
(1) pathognomonic sign approach  interpret behaviours as indicative of organic impairment, and base the interview design/tests on a flexible method of testing possible hypotheses for different types of impairment, and (2) psychometric approach  a more quantitative approach that relies on critical cut-off scores to distinguish between normal persons and those with brain damage. In practice mostly a combination of both. Behaviour therapy followed as another trend. Currently, a psychologist doing assessment might use techniques such as interviewing, administering and interpreting traditional psychological tests, naturalistic observation, neuropsychological assessment, and behavioural assessment. Future: influence of technology.

EVALUATING PSYCHOLOGICAL TESTS

Questions to ask when evaluating psychological tests:

1. Theoretical orientation: research the construct that the test is supposed to measure.
1. Do you adequately understand the theoretical construct the test is supposed to measure?
2. Do the test items correspond to the theoretical description of the construct?
2. Practical considerations:
1. If reading is required by the examinee, does their ability match the level required by the
test?
2. How appropriate is the length of the test?
3. Standardization: adequacy of norms and of standardized administration.
1. Is the population to be tested similar to the population the test was standardized on?
2. Was the size of the standardization sample adequate?
3. Have specialized subgroup norms been established?
4. How adequately do the instructions permit standardized administration? (e.g. same rooms,
same amount of time, etc.)
4. Reliability: degree of stability, consistency, and predictability.
1. Are reliability estimates sufficiently high (generally around .90 for clinical decision making
and around .70 for research purposes)?
2. What implications do the relative stability of the trait, the method of estimating the
reliability, and the test format have on reliability?
5. Validity:
1. What criteria and procedures were used to validate the test?
2. Will the test produce accurate measurements in the context and for the purpose for which
you would like to use it?

RELIABILITY
 Reliability  the extent to which scores obtained by a person are/would be the same if the
person is re-examined by the same test on different occasions.
-Purpose: estimate the degree of test variance caused by error. Four methods for obtaining
reliability are (1) the extent to which the test produces consistent results upon retesting (test-
retest, time to time), (2) the relative accuracy of a test at a given time (alternate forms, form to
form), (3) the internal consistency of the items (split-half and coefficient alpha, item to item),
and (4) the degree of agreement between two examiners (interscorer, scorer to scorer).
Underlying reliability is:
 Error of measurement  an estimate of the range of possible random fluctuation that can be expected in an individual's score (e.g. misreading of items, change in mood). Error is always present in current methods of measuring psychological constructs. If there is a large degree of error, you can't place much confidence in the scores. Reducing measurement error gives you greater confidence that the difference between one score and another results from a true difference rather than from chance.

Test-retest reliability

Test-retest reliability  determined by administering the test & then repeating it on a second
occasion. The reliability coefficient is calculated by correlating the scores obtained; the degree of
correlation between the 2 scores indicates the extent to which the test scores can be generalized
from one situation to the next. If the correlation is high, the results are less likely to be caused by random error and more by actual differences in the trait being measured.
-Preferred only if the variable being measured is relatively stable (so not for anxiety for example).
-Consideration factors: practice effect  some tasks improve with practice; the interval between administrations; and life changes (e.g. intelligence is likely to be stable over a few months, but changes from high school to college).
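
As a minimal sketch of this calculation, the Python snippet below correlates two hypothetical sets of scores from the same examinees (all numbers invented for illustration):

```python
# A minimal sketch of test-retest reliability: the Pearson correlation
# between two administrations of the same test. Data are hypothetical.
import numpy as np

def test_retest_reliability(time1, time2):
    """Pearson correlation between scores from two administrations."""
    return np.corrcoef(time1, time2)[0, 1]

# Hypothetical scores for 8 examinees, tested twice a month apart.
time1 = np.array([102, 95, 110, 88, 120, 99, 105, 93])
time2 = np.array([104, 97, 108, 90, 118, 101, 103, 95])

print(f"Test-retest reliability: {test_retest_reliability(time1, time2):.2f}")
```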

Alternate forms

Alternate forms  measuring a trait several times in the same individual using parallel forms of the test; the different forms should produce similar results. The reliability coefficient is the degree of similarity between the scores.
-Correlations determined by tests given with a wide time interval show not only a measure of
relation between forms but also temporal stability!
-Less practice effect than with test-retest.
-Difficulty: are the forms actually equivalent to each other? Otherwise, you’re not measuring the
reliability of the test itself but actual differences in performance!
Internal consistency: split-half reliability and coefficient alpha

Measures of the internal consistency of the test items rather than the temporal stability of
different administrations. Best techniques for determining reliability for a trait with a high degree
of fluctuation.
-Split-half method  the test is split in half and the two halves are correlated. Often split into odd/even items, because splitting into first/second halves has cumulative problems (e.g. effects of warming up, fatigue, boredom).
-Coefficient alpha  correlates all items with each other to determine their consistency.
-Limitations: split-half gives fewer items on each half, which results in wider variability because
the individual responses cannot stabilize as easily around a mean.
-General principle: the more items, the higher the reliability, because additional items compensate for minor fluctuations in individual responses!
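
Both internal-consistency estimates can be sketched in a few lines of Python on a hypothetical examinees-by-items score matrix. Note: the Spearman-Brown correction applied to the split-half correlation (to estimate the reliability of the full-length test) is standard practice, though not mentioned above.

```python
# A minimal sketch of split-half reliability and coefficient alpha,
# computed on a hypothetical examinees-by-items score matrix.
import numpy as np

def split_half_reliability(items):
    """Correlate odd- vs even-item totals, then apply the Spearman-Brown
    correction to estimate the reliability of the full-length test."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown correction

def cronbach_alpha(items):
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 examinees x 6 items (1-5 Likert ratings).
items = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 1, 2],
    [4, 4, 4, 3, 4, 4],
])
print(f"Split-half (Spearman-Brown): {split_half_reliability(items):.2f}")
print(f"Coefficient alpha:           {cronbach_alpha(items):.2f}")
```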

Interscorer reliability

Interscorer reliability  obtain a series of responses from a single client and have these
responses scored by two different individuals; or have two different examiners test the same client
using the same test and then determine how close their scores or ratings of the person are.
The interscorer coefficient can be calculated using percentage agreement, a correlation, or coefficient kappa.
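
A minimal Python sketch of the two simplest of these indices, percentage agreement and Cohen's kappa, on hypothetical ratings from two examiners:

```python
# A minimal sketch of interscorer reliability: percentage agreement and
# Cohen's kappa for two raters scoring the same responses. Data are hypothetical.
from collections import Counter

def percent_agreement(rater1, rater2):
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

def cohens_kappa(rater1, rater2):
    """Kappa corrects raw agreement for agreement expected by chance."""
    n = len(rater1)
    p_o = percent_agreement(rater1, rater2)
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(rater1) | set(rater2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical categorical scores from two examiners for 10 responses.
rater1 = ["pass", "fail", "pass", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
rater2 = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "fail", "pass", "pass"]

print(f"Percentage agreement: {percent_agreement(rater1, rater2):.2f}")
print(f"Cohen's kappa:        {cohens_kappa(rater1, rater2):.2f}")
```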

Selecting forms of reliability

 The best form is dependent on the nature of the variable (e.g. stable or not) and the purposes
for which the test is used (e.g. measuring a state).
 Standard error of measurement (SEM)  the amount of error that can be expected in test scores, which consist of a true component and an error component (usually included in the test manual). The higher the reliability, the lower the error. The SEM is a standard deviation score: SEM = SD × √(1 − reliability).
 Confidence interval  the range within which a score is expected to fall, e.g. a SEM of 3 on an intelligence test would indicate that an individual's score has a 68% chance of being within 3 IQ points of the estimated true score.
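
A minimal Python sketch of this arithmetic, reproducing the SEM-of-3 example above (an IQ-style scale with SD = 15 and an assumed reliability of .96):

```python
# A minimal sketch of the SEM and the confidence interval around a score,
# using the standard formula SEM = SD * sqrt(1 - reliability).
import math

def standard_error_of_measurement(sd, reliability):
    return sd * math.sqrt(1 - reliability)

# IQ-style scale (SD = 15) with a hypothetical reliability of .96.
sem = standard_error_of_measurement(sd=15, reliability=0.96)  # = 3.0
score = 110
print(f"SEM = {sem:.1f}")
print(f"68% CI: {score - sem:.0f} to {score + sem:.0f}")                # +/- 1 SEM
print(f"95% CI: {score - 1.96 * sem:.0f} to {score + 1.96 * sem:.0f}")  # +/- 1.96 SEM
```
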
VALIDITY
 Validity  whether a test truly measures the trait it is supposed to measure.

Content validity

 Content validity  The extent to which a measurement measures all aspects of a construct
(e.g. the affective AND the behavioural dimension of depression). Often considered
subjective because of judgement by experts.
-Related: face validity  the degree to which a test seems like it is measuring what it is supposed to measure, as judged by the test users.

Criterion validity

 Criterion validity  the extent to which a measure is related to an outside measure, e.g.
correlate an intelligence test score with grade point average. Divided into:
 Concurrent validity  measurements taken at (approx.) the same time as the test, e.g.
intelligence test at the same time as academic achievement assessment.
 Predictive validity  outside measurements that were taken some time after the test scores
were derived, so predictive validity might be evaluated by correlating e.g. intelligence test
scores with measures of academic achievement a year after the initial testing.
 Which one to use depends on purpose of test: predictive validity for predicting some future
outcome (e.g. for screening individuals who might develop emotional disorders), concurrent
validity for assessment of client’s current state.
 Strength of criterion validity depends on the type of variable; e.g. intellectual tests give relatively higher validity coefficients than personality tests, because personality is influenced by a larger number of variables.
 Criterion contamination  where the criterion measure is biased because knowledge of the test results influences an individual's later performance.

Construct validity

 Construct validity  the extent to which a measurement measures a specific construct or trait. Involves three steps: (1) the test constructor must make a careful analysis of the trait, (2) the test designer must consider the ways in which the trait should relate to other variables, and (3) the test designer needs to test whether these hypothesized relations actually exist. Example: a test measuring dominance should have a high positive correlation with accepting leadership roles and a high negative correlation with submissiveness.
 No single best approach to determining construct validity exists. Examples are correlating a population's test scores with age for abilities that are expected to increase with age, measuring the effects of treatment interventions with pre- and post-test scores, or using factor analysis, etc.
 Construct validity is the strongest and most sophisticated approach to test validation!
 Sensitivity of an instrument  the percentage of true positives that the instrument identifies, e.g. a structured interview might be sensitive in that it accurately identifies 90% of people with schizophrenia.
 Specificity of an instrument  the percentage of true negatives, e.g. the instrument might not be very specific if 30% of individuals without schizophrenia are incorrectly classified as having schizophrenia (a true negative rate of 70%).
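
A minimal Python sketch of these two indices, with hypothetical counts matching the 90%/70% example above:

```python
# A minimal sketch of sensitivity and specificity from classification counts.
def sensitivity(true_pos, false_neg):
    """Proportion of actual cases the instrument correctly identifies."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of actual non-cases the instrument correctly rules out."""
    return true_neg / (true_neg + false_pos)

# Hypothetical: 100 people with schizophrenia, 100 without.
print(f"Sensitivity: {sensitivity(true_pos=90, false_neg=10):.0%}")  # 90%
print(f"Specificity: {specificity(true_neg=70, false_pos=30):.0%}")  # 70%
```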

VALIDITY IN CLINICAL PRACTICE

INCREMENTAL VALIDITY
Incremental validity (gradual, progressive)  the extent to which a measurement produces information beyond what is already known, i.e. whether it adds much to what can be obtained with simpler, already existing methods.
-A test shows incremental validity if it produces information additional to what is already known about a client/group from other tests.
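
One common way to quantify incremental validity is the gain in R² when the new test is added to a regression that already contains the existing measure; the sketch below illustrates this on simulated, entirely hypothetical data.

```python
# A minimal sketch of incremental validity as the increase in R-squared
# when a new test is added to an existing predictor. Data are simulated.
import numpy as np

def r_squared(X, y):
    """R-squared from ordinary least squares with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

rng = np.random.default_rng(0)
n = 100
existing = rng.normal(size=n)                    # existing, simpler measure
new_test = 0.5 * existing + rng.normal(size=n)   # candidate new test
outcome = existing + 0.8 * new_test + rng.normal(size=n)

r2_old = r_squared(existing.reshape(-1, 1), outcome)
r2_both = r_squared(np.column_stack([existing, new_test]), outcome)
print(f"R2, existing measure only: {r2_old:.2f}")
print(f"R2, after adding new test: {r2_both:.2f}")
print(f"Incremental validity (delta R2): {r2_both - r2_old:.2f}")
```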

CONCEPTUAL VALIDITY
Conceptual validity  a means of evaluating and integrating test data so that the clinician’s
conclusions make accurate statements about the examinee. Concerned with testing constructs
(like construct validity), but in this case the constructs relate to the individual rather than the test
itself. Hypotheses can be considered to represent valid constructs regarding a person if they are
confirmed by e.g. observation, test data, history, etc.
CLINICAL JUDGEMENT

Clinical judgement  a special instance of perception in which the clinician attempts to use
whatever sources are available to create accurate descriptions of the client. Sources include test
data, case history, medical records, personal journals, verbal & nonverbal observations, etc.

DATA GATHERING AND SYNTHESIS


Issues to consider with gathering data and synthesizing hypotheses:
-Without an optimum level of rapport, data obtained from a person may be less accurate.
-The interview often follows the client's responses, and these might be unrepresentative due to a temporary condition (e.g. a stressful day) or faking.
-The clinician might have a bias that alters the questions they ask (e.g. trying to confirm a hypothesis based on first impressions).

ACCURACY OF CLINICAL JUDGEMENTS


After data collection, clinicians must make final judgements regarding the client, and the relative accuracy of these judgements is crucial. They are subject to bias and error, such as not taking into account the rate at which a particular behaviour/trait occurs in the general population, confirmatory bias (not wanting to disconfirm an initial theory), hindsight bias (overestimating what they thought they knew before receiving the outcome knowledge: ''I would have known it all along''), overconfidence, etc.

8 recommendations to improve accuracy:


1. To avoid missing crucial info, use comprehensive, structured, or at least semi-structured
approaches to interviewing.
2. Don’t only consider the data that support your hypotheses, but also carefully consider or list
evidence that doesn’t support; reduces hindsight and confirmatory bias.
3. Diagnoses should be based on careful attention to the specific criteria in the DSM-5 or the
ICD-10 (reduces error caused by inferences biased by gender and ethnicity).
4. Avoid relying on memory and refer to careful notes as much as possible.
5. In making predictions, clinicians should attend to base rates as much as possible (see the sketch after this list).
6. Seek feedback when possible regarding the accuracy and usefulness of your judgements.
7. Learn as much as possible regarding the theoretical and empirical material relevant to the
person/group you’re assessing (helps develop strategies for obtaining info, allows for correct
estimates regarding judgement, etc.).
8. Be familiar with the literature on clinical judgement in order to continually update your
knowledge on past and emerging trends.
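
To illustrate recommendation 5: even a fairly accurate instrument produces mostly false positives when the base rate of the condition is low. A minimal Python sketch with hypothetical accuracy figures, applying Bayes' theorem:

```python
# A minimal, hypothetical illustration of why base rates matter: the
# probability of the disorder given a positive result, via Bayes' theorem.
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(condition | positive result)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Same hypothetical instrument (90% sensitive, 90% specific) at two base rates.
for base_rate in (0.50, 0.02):
    ppv = positive_predictive_value(0.90, 0.90, base_rate)
    print(f"Base rate {base_rate:.0%}: P(disorder | positive) = {ppv:.0%}")
```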

CLINICAL VERSUS ACTUARIAL PREDICTION


Although actuarial (statistical) approaches outperform clinical approaches, they are not always as useful, since clinical approaches allow for a deeper analysis of the client, including their unique situation, context, and the decisions facing them; i.e. actuarial approaches are too static and simplistic. Also, because humans are not stable, actuarial formulas may not always apply. Optimal would be a clinician who also utilizes formal prediction formulas.
PHASES IN CLINICAL ASSESSMENT

Although discussed separately, in practice the phases usually occur simultaneously and interact!

Hypothesis testing model for interpreting assessment data:

 Phase 1, evaluating the referral question: one of the most important general requirements is
that clinicians understand the vocabulary, conceptual model, dynamics, and expectations of
the referral setting in which they will be working. Further, clinicians must evaluate whether
the referral questions are appropriate for psychological assessment and whether they have a
level of competence necessary to conduct an assessment to answer the specific questions. So
clarify the referral question!
 Phase 3: regardless of theoretical orientation, the hypotheses must make sense within a
specific theoretical framework (e.g. low self-esteem may revolve around negative self-talk,
based on a cognitive behavioural perspective).
 Phase 8: recommendations cannot be vague or broad; e.g. do not simply recommend ''therapy'' to a client.
CHAPTER 2: CONTEXT OF CLINICAL
ASSESSMENT
TYPES OF REFERRAL SETTINGS
Referral requests often do not state a specific question that must be answered (e.g. ''could you evaluate Jimmy because he is having difficulties in school?'') or a decision that must be made, even though the referral source is often in exactly that position; e.g. a teacher may want to prove to parents that their child has a serious problem, or a school administrator may need testing to support a placement decision. Greater clarification is necessary to provide useful problem-solving info! The responsibility for exploring and clarifying the referral question lies with the clinician.
To help clarify the referral question, clinicians should be familiar with the types of environments
in which they will be working:

PSYCHIATRIC SETTING
Psychiatrists could have the role of administrator, therapist, or physician.

 Administrator on a ward (makes decisions about e.g. suicide risk, admission/discharge, suitability of medical procedures). Important to know what info the administrator is looking for, e.g. what method of therapy would be most effective.
 Therapist: usually questions about appropriateness of the client for therapy, which strategies
are most likely to be effective, etc.
 Physician: important to have effective communication & bridge the conceptual differences between physician and psychologist (e.g. one uses a medical model, the other speaks more in terms of difficulties in living with people and society).

GENERAL MEDICAL SETTING


To work adequately in this setting, psychologists must become familiar with medical descriptions. Note: physicians must take ultimate responsibility for their decisions, even when they ask for the help of a psychologist! Typical referral questions concern e.g. an underlying psychological disorder, the presence of possible neuropsychological disorders, whether a surgery could cause psychological distress, and early signs of a psychological disorder.

LEGAL CONTEXT
Psychologists might be called in at any stage of legal decision making. They must become familiar with specialized legal terms and be able to evaluate possible malingering and deception. The practice of forensic
psychology includes training/consultation with legal practitioners, evaluation of populations
likely to encounter the legal system, and the translation of relevant technical psychological
knowledge into usable information.

ACADEMIC/EDUCATIONAL CONTEXT
Assessing children who are having difficulty, e.g. in evaluating the nature and extent of a child’s
learning difficulties, measuring intellectual weaknesses AND strengths, assessing behavioural
difficulties, etc. Individual assessment conducted, but wider context very important! (e.g. child’s
dysfunction might be caused by marital problems).
PSYCHOLOGICAL CLINIC
In contrast to the medical, legal, and educational institutions where the psychologist serves as a
consultant, the psychologist working in a psychological clinic often is the decision maker. Clients are mostly self-referred, or children referred by their parents or by a GP.

ETHICAL PRACTICE OF ASSESSMENT

Ethical guidelines reflect values that professional psychology endorses (e.g. client safety,
confidentiality, fairness, etc.).

DEVELOPING A PROFESSIONAL RELATIONSHIP


Assessment should be conducted only in the context of a clearly defined professional relationship, where the nature/purpose/conditions of the relationship have been discussed and agreed on.
Usually clinician provides relevant info (e.g. type and length of assessment, details,
confidentiality, etc.) followed by the client’s signed consent. Quality of relationship can have
impact on both assessment results (e.g. children score higher on IQ when familiar with examiner)
and overall working relationship! Note: examiners should check themselves to assess whether
their relationship with the client is interfering with the objectivity and standardization of the test
administration and scoring.

ISSUES RELATED TO INFORMED CONSENT


Any consent involves a clear explanation of what procedures will occur, the relevance of the testing, and how the results will be used. Stress confidentiality and its possible limitations, and describe the nature & intent of the test in general terms (but be careful: foreknowledge could alter the test's validity, e.g. knowing that a test measures sociability could make clients answer in a particular way).
-There is public concern about invasion of privacy in assessment, because unforeseen events not covered in the informed-consent information may occur that reveal aspects of the client they would rather keep secret. This concern is partly based on misconceptions about the accuracy and scope of test usage, and on actual misuse of data.
-Issues with inviolacy, which involves the actual negative feelings created when clients are
confronted with the test or test situation (e.g. test with taboo topics).

LABELLING AND RESTRICTION OF FREEDOM


Negative labelling consequences: stigma, self-fulfilling prophecy, people don’t consider
themselves responsible for their behaviour because of an ‘’invading disorder’’, helpless role, etc.

INTERPRETATION AND USE OF TEST RESULTS


Not simply using norms and cut-off scores, but also taking into consideration unique characteristics of the person combined with relevant aspects of the test itself. Test norms etc. can become outdated, so if a clinician has not updated their test knowledge within the past 10 years, they are probably not practicing competently.

COMMUNICATING TEST RESULTS


Effective feedback involves understanding the needs and vocabulary of the referral source, the
client, and other persons who might be affected by the test results (e.g. parents). There should be
a clear explanation of the rationale for testing & nature of tests, e.g. ‘’your child is currently
functioning in the top 2% compared to her peers and is particularly good at organizing’’. Note:
providing feedback can be part of the intervention process itself, as it could have a symptom-
reducing effect.
MAINTENANCE OF TEST SECURITY AND RELEASE OF TEST DATA
Maintaining test security is an ethical obligation, but also a legal requirement related to trade secrets and agreements made with test publishers. Also, if tests were available to everyone, they would lose their validity. The security of assessment results is also important! Strictly, test results should stay between the client & the referral source, but in practice this is difficult.

 Test data  raw and scaled scores, such as subscale scores and test profiles.
 Test materials  manuals, instruments, protocols, and test questions or stimuli.
 Test materials turn into test data as soon as a psychologist places the client’s name on the
materials!

ASSESSING DIVERSE GROUPS
