
 A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g., whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (the decision taken after computing all attributes). The paths from root to leaf represent classification rules.
 In decision analysis, a decision tree and the closely
related influence diagram are used as a visual and
analytical decision support tool, where the expected
values (or expected utility) of competing alternatives are
calculated.
 A decision tree consists of three types of nodes:[1]
 Decision nodes – typically represented by squares
 Chance nodes – typically represented by circles
 End nodes – typically represented by triangles
 A decision tree provides a framework to
display graphically primary variables,
including treatment options, outcomes
associated with those treatment options, and
probabilities of the outcomes. The researcher
can then algebraically reduce all these factors
into a single value, allowing for comparison.
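The three node types and the algebraic reduction to a single value can be sketched in code. This is a minimal illustration; the class and function names below are assumptions, not from the source.

```python
# Sketch of the three decision-tree node types: squares (decisions),
# circles (chance), and triangles (end nodes), reduced to a single value.
from dataclasses import dataclass

@dataclass
class EndNode:        # triangle: terminal outcome with a cost
    cost: float

@dataclass
class ChanceNode:     # circle: probabilistic branching
    branches: list    # (probability, child) pairs

@dataclass
class DecisionNode:   # square: choice among alternatives
    options: dict     # alternative name -> child node

def reduce_to_value(node):
    """Algebraically reduce the tree to one expected cost,
    choosing the cheaper alternative at each decision node."""
    if isinstance(node, EndNode):
        return node.cost
    if isinstance(node, ChanceNode):
        return sum(p * reduce_to_value(c) for p, c in node.branches)
    return min(reduce_to_value(c) for c in node.options.values())
```

For example, a decision between a chance branch (50/50 between costs of 10 and 20) and a certain cost of 18 reduces to the expected value 15.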
 The use of decision analysis can assist in conducting
various economic evaluations, including CEA. Although not
necessary for all pharmacoeconomic evaluations, decision
analysis and decision trees can provide a solid backbone
or platform for the decision at hand. Using a decision tree,
treatment alternatives, outcomes, and probabilities can be
presented graphically and can be reduced algebraically to
a single value for comparison (i.e., cost-effectiveness
ratio).

 When comparing antiemetic agents for the development of a policy for CIE prevention, CEA can be employed. Many of
these agents differ with respect to effectiveness, safety,
and cost. By performing a thorough CEA, these variables
can be reduced to a single number (cost-effectiveness
ratio), which will allow for a meaningful comparison. The
treatment alternative with a better cost-effectiveness ratio
than the others (i.e., lower cost per unit of outcome) would
be selected and promoted for use.
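As a toy illustration of how cost and effectiveness reduce to a single comparable number, consider the sketch below; all figures are hypothetical, not from the source.

```python
# Cost-effectiveness ratio: cost per unit of outcome (e.g., dollars per
# emesis-free patient). Lower is better. Numbers here are hypothetical.
def cer(total_cost, successes):
    """Cost per successful outcome."""
    return total_cost / successes

drug_a = cer(total_cost=38500.0, successes=93)   # 100 patients, 93 successes
drug_b = cer(total_cost=52000.0, successes=97)   # 100 patients, 97 successes

# The alternative with the lower cost per success would be promoted for use.
best = "A" if drug_a < drug_b else "B"
```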
 Below is an example of a decision tree illustrating how the probabilities of various outcomes can be organized.
 To calculate the ACER for drug A using “averaging out
and folding back,” these steps are followed:
 Multiply the cost of path 1 by the probability of no ADE ($250 × 0.89). Repeat for path 2 ($400 × 0.11).
 Add these two numbers and multiply by the probability of success ($266.50 × 0.93 ≈ $247.85).
 Repeat the two preceding steps for paths 3 and 4, and then add the resultant values ($247.85 + $50.50 = $298.35).
 Add the cost of the drug to this value ($298.35 + $60 = $358.35), and divide by the probability of success (93%, or 0.93); thus $358.35/0.93 ≈ $385.
 Repeat this process for drug B using paths 5 through 8.
 Example of a pharmacoeconomic decision tree comparing
two drugs. Option B is a drug that is more specific for the
target receptor in the body, is more effective, and
produces fewer adverse effects than does option A.
However, because drug B is more expensive than drug A,
the cost of the added benefits must be analyzed using
pharmacoeconomic techniques. This figure was completed
using the safety and efficacy values for drugs A and B from
the table above. Values in color are calculated numbers, only
included to illustrate the process of “averaging out and
folding back.” (ACER, average cost-effectiveness ratio;
ADE, adverse drug event; P, probability [a decimal fraction
between 0 and 1 indicating the likelihood of a particular
event occurring in a given period].)
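The "averaging out and folding back" arithmetic for drug A in the steps above can be replicated directly. The path costs and probabilities are as given there; the $50.50 failure-branch value for paths 3 and 4 is taken as stated in the text.

```python
# Fold back the drug A branch of the decision tree to an ACER.
drug_cost = 60.0
p_success = 0.93

# Success branch (paths 1 and 2): average out over the ADE chance node.
cost_success = 250 * 0.89 + 400 * 0.11        # = 266.50
weighted_success = cost_success * p_success   # ≈ 247.85

# Failure branch (paths 3 and 4), as computed in the text.
weighted_failure = 50.50

# Add the drug cost, then divide by the probability of success.
expected_cost = weighted_success + weighted_failure + drug_cost
acer = expected_cost / p_success              # cost per successful outcome
print(round(acer))                            # prints 385
```

The same procedure, applied to paths 5 through 8, yields the ACER for drug B.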
 One big advantage of the decision tree model is
its transparent nature. Unlike other decision-
making models, the decision tree makes explicit
all possible alternatives and traces each
alternative to its conclusion in a single view,
allowing for easy comparison among the various
alternatives. The use of separate nodes to denote
user-defined decisions, uncertainties, and end of
process lends further clarity and transparency to
the decision-making process.
 A major advantage of decision tree analysis is its ability to assign specific values to the problem, the decisions, and the outcomes of each decision. This reduces ambiguity in decision-making. Every possible scenario arising from a decision is represented by a clear fork and node, so all possible solutions can be seen clearly in a single view.
 Incorporating monetary values into decision trees helps make explicit the costs and benefits of different alternative courses of action.
 A decision tree is a strong predictive model in that it allows a comprehensive analysis of the consequences of each possible decision: what the decision leads to, whether it ends in uncertainty or a definite conclusion, and whether it leads to new issues for which the process needs repeating.
 A decision tree also allows for partitioning data at a much deeper level than is easily achieved with other classifiers such as logistic regression or support vector machines.
 Decision trees also score in ease of use. The decision tree
provides a graphical illustration of the problem and
various alternatives in a simple, easy-to-understand format that requires little additional explanation.
 Decision trees break data down into easy-to-understand illustrations, based on rules readily understood by humans and expressible in SQL. Decision trees also allow for classification of data without heavy computation, can handle both continuous and categorical variables, and provide a clear indication of the most important fields for prediction or classification, features not matched by comparable models such as support vector machines or logistic regression.
 The decisions contained in a decision tree can be replicated easily with simple arithmetic.
 Unlike other decision-making tools that require
comprehensive quantitative data, decision trees remain
flexible to handle items with a mixture of real-valued and
categorical features, and items with some missing
features. Once constructed, they classify new items
quickly.
 Another advantage of decision tree analysis is that it focuses on the relationships among various events and thereby replicates the natural course of events; as such, it remains robust with little scope for error, provided the input data are correct.
 This ability of the decision tree to follow the natural course of events allows it to be incorporated into a variety of applications, such as influence diagrams. Decision trees also combine well with other decision-making techniques such as PERT charts and linear distributions.
 Decision trees are strong predictive models. They are used for quantitative analysis of business problems and to validate the results of statistical tests. They naturally support classification problems with more than two classes and, with modification, handle regression problems.
 Sophisticated decision tree models implemented
using custom software applications can use historic
data to apply a statistical analysis and make
predictions regarding the probability of events.
 Decision trees provide a framework to quantify the
values and probability of each possible outcome of a
decision, allowing decision makers to make educated
choices among the various alternatives.
 They are unstable, meaning that a small change in
the data can lead to a large change in the structure of
the optimal decision tree.
 They are often relatively inaccurate. Many other
predictors perform better with similar data. This can
be remedied by replacing a single decision tree with
a random forest of decision trees, but a random
forest is not as easy to interpret as a single decision
tree.
 For data including categorical variables with different numbers of levels, information gain in decision trees is biased in favor of attributes with more levels.[7]
 Calculations can get very complex, particularly if
many values are uncertain and/or if many outcomes
are linked.
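The information-gain bias toward many-valued attributes noted above can be demonstrated on a toy dataset; all names and data below are illustrative, not from the source.

```python
# A unique-ID-like attribute splits the data into singletons and appears
# maximally informative, even though it generalizes to nothing.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    """Class entropy minus the weighted entropy of each attribute level."""
    n = len(labels)
    gain = entropy(labels)
    for level in set(attribute):
        subset = [y for a, y in zip(attribute, labels) if a == level]
        gain -= len(subset) / n * entropy(subset)
    return gain

labels = ["yes", "no", "yes", "no", "yes", "no"]
binary = ["a", "a", "a", "b", "b", "b"]    # two levels
unique = ["1", "2", "3", "4", "5", "6"]    # one level per row

# The unique attribute achieves the maximum possible gain (the full
# class entropy), far exceeding the two-level attribute.
gain_binary = information_gain(binary, labels)
gain_unique = information_gain(unique, labels)
```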
 Sensitivity analysis attempts to measure how sensitive the state variables of greatest interest in the model are to changes in parameters, forcing functions, or submodels.
 Sensitivity analysis is performed using the following formula: S = (dx/x)/(dp/p), where S = sensitivity, x = state variable, p = parameter, and dx and dp are the changes in the state variable and the parameter (or forcing function), respectively, at a ±10% perturbation on the temporal scale.
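The formula S = (dx/x)/(dp/p) can be sketched as a ±10% perturbation test. The model function below is a hypothetical stand-in, not one from the source.

```python
# Relative sensitivity of a state variable x to a parameter p:
# perturb p by a fixed fraction and compare the relative change in x.
def model(p):
    """Toy state-variable response to parameter p (hypothetical)."""
    return 3.0 * p ** 2

def sensitivity(model, p, rel_step=0.10):
    x = model(p)                     # baseline state variable
    dp = rel_step * p                # +10% perturbation of the parameter
    dx = model(p + dp) - x           # resulting change in the state variable
    return (dx / x) / (dp / p)       # S = (dx/x) / (dp/p)

# For x proportional to p**2, the sensitivity is about 2
# (exactly 2.1 at a finite +10% step).
S = sensitivity(model, p=2.0)
```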

 Parameters that are almost impossible to determine in the field are first calibrated using a range of values (minimum to maximum) from the literature; the appropriate value of each such parameter for this estuary is then determined by the best fit obtained during the model run, using a standard calibration procedure.
 Using Sensitivity Analysis for decision making
 One of the key applications of sensitivity analysis is in the use of models by managers and decision-makers. The full content of a decision model can be exploited only through repeated application of sensitivity analysis, which helps decision analysts understand the uncertainties, pros and cons, limitations, and scope of the model.

Most, if not all, decisions are made under uncertainty, and the parameters of a decision model are approximations. One approach to reaching a conclusion is to replace all the uncertain parameters with expected values and then carry out a sensitivity analysis. It is reassuring for a decision maker to have some indication of how sensitive the choices are to changes in one or more inputs.
 The key application of sensitivity analysis is to indicate the sensitivity of the simulation to uncertainties in the input values of the model.
 They help in decision making
 Sensitivity analysis is a method for predicting the
outcome of a decision if a situation turns out to be
different compared to the key predictions.
 It helps in assessing the riskiness of a strategy.
 Helps in identifying how dependent the output is on a particular input value, and whether that dependency in turn helps in assessing the associated risk.
 Helps in making informed and appropriate decisions
 Aids searching for errors in the model
