How should MCDA practice respond to behavioural research findings?

Theodor J Stewart
University of Cape Town
South Africa

The decision sciences in general, and MCDA in particular, have wrestled agonizingly over discrepancies between the decision support or aiding models being used for analysis, and the results from behavioural decision science (for example that of Kahneman and Tversky).  Well, some at least have agonized ... there are those who don't seem to care much about empirical behavioural results.  I recall one well-known figure stating during a conference debate that he had little patience with concern over behavioural axioms, preferring to get to the mathematics where one can prove theorems!  Nevertheless, I am sure that the readership of this newsletter is as concerned as I am about these issues, and for this reason I thought I would take this opportunity to put forward some personal views on the debate.

What is the purpose of a model designed for decision aid or support?  It surely is not to describe how people make decisions unaided by MCDA.  If we simply try to mimic what decision makers are going to do without our aid, then we are not adding much value to the process.  On the other hand, it would be arrogant to suggest that we can direct decision makers to what they ought to do, as no model which we develop will ever capture the full richness of human decision making values, preferences and goals.  Thus the role of MCDA (and I am aware that I am repeating what many others have said before me) is to support the process of learning and discovery by which decision makers arrive at a satisfactory solution to their decision problem.

Two immediate questions face the decision analyst or decision aiding scientist when confronted with the discovery that decision makers act in a manner which is systematically at variance with one's favoured decision models.  These are:

1. Should the observed behaviour be treated as a bias or error on the part of decision makers, which the decision aid should help them to recognize and to overcome?

2. Or should it be treated as evidence that the model itself is deficient, calling for more complicated models which better describe actual behaviour?

What, then, is the real value of behavioural decision research to the practice of MCDA?  As I have argued, the right response may often not be to develop more complicated models.  My view is that the most vital contribution from behavioural research relates to the manner in which we seek inputs from decision makers as part of the process of constructing our preference model.  The realization that the perceived acceptability of specific tradeoffs can be strongly influenced by the framing of the problem has serious implications for the manner in which value measurement techniques are applied in practice.  Whether particular tradeoffs are perceived as losses or as foregone gains appears to depend on how the problem is framed, and the research on framing effects suggests that this perception is likely to have a substantial influence on the form of the value function model derived.  Simulation studies which I have conducted have revealed how sensitive value function models can be to the functional shape of the partial value functions, and it is precisely these functional shapes which may be influenced by perceptions of what are gains and what are losses.  I strongly suspect that similar framing issues might also influence perceptions of appropriate veto thresholds in outranking methods, or aspiration levels in goal programming.  Certainly, similar potential biases may well creep into all methodologies of MCDA.  In particular, all methods make use of some form of direct or indirect weighting of the criteria, and such weights are undoubtedly susceptible not only to framing, but also to anchoring and availability biases.
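
To make this sensitivity concrete, consider the following minimal sketch (my own illustration, with invented numbers, not taken from any published study).  An additive value model V(a) = w_1 v_1(a) + ... + w_m v_m(a) is scored twice over the same performance table: once with linear partial value functions, as under a pure "gains" framing, and once with a prospect-theory-shaped partial value function around a reference point, as under a "loss" framing.  The weights, scores, reference point and curvature parameters are all assumptions made purely for the illustration.

```python
import numpy as np

# Sketch: one additive value model, two framings of the same scores.
# All numbers are invented for illustration.

rng = np.random.default_rng(seed=1)

scores = rng.uniform(size=(5, 3))    # 5 alternatives on 3 criteria, scaled to [0, 1]
weights = np.array([0.5, 0.3, 0.2])  # criterion weights (assumed)

def v_gain(x):
    # Partial value function under a "gains" framing: linear in the score.
    return x

def v_loss_framed(x, ref=0.5, alpha=0.88, lam=2.25):
    # Prospect-theory-like shape: concave above a reference point,
    # steeper below it (loss aversion factor lam).  The parameter
    # values are those typically quoted by Tversky and Kahneman.
    gains = np.clip(x - ref, 0.0, None) ** alpha
    losses = lam * np.clip(ref - x, 0.0, None) ** alpha
    return gains - losses

def ranking(partial_value):
    V = (weights * partial_value(scores)).sum(axis=1)  # additive aggregation
    return np.argsort(-V)                              # best alternative first

print("ranking under gain framing:", ranking(v_gain))
print("ranking under loss framing:", ranking(v_loss_framed))
# Across many randomly drawn score tables the two rankings frequently
# differ, even though both partial value functions are increasing.
```

Because the aggregation is additive across criteria, a monotone but nonlinear change in the shape of each partial value function can reorder the aggregate values, so the recommended alternative may hinge on whether the elicitation questions were framed in terms of gains or of losses.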

The implication of the above is that perhaps the greatest challenge to research in MCDA is not so much to refine our decision models any further, but to gain greater understanding of how judgmental biases in user inputs affect the outputs and recommendations of the models, and how we can compensate for these (or at least ameliorate their effects).  Perhaps in this way we can also move closer to meta-MCDA, i.e. a philosophy of MCDA that integrates the various streams of thought which appear sometimes divergent, but which should more appropriately be seen as different responses to the search for balance between transparent and simple decision support on the one hand, and the infinitely rich complexity of real human judgments on the other.
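
As a small indication of what the first part of this research programme might look like in practice, here is another illustrative sketch (again with invented data, and an assumed bound of plus or minus 30% on elicitation bias in the weights): perturb the stated criterion weights within that bound, renormalize, and record how often the recommended alternative survives.

```python
import numpy as np

# Sketch: Monte Carlo robustness check on elicited criterion weights.
# All data are invented; the +/-30% bias bound is an assumption.

rng = np.random.default_rng(seed=2)

scores = rng.uniform(size=(5, 3))     # 5 alternatives on 3 criteria
stated_w = np.array([0.5, 0.3, 0.2])  # weights as elicited

def best(w):
    return int(np.argmax(scores @ w))  # additive value; index of best alternative

baseline = best(stated_w)

n, hits = 10_000, 0
for _ in range(n):
    w = stated_w * rng.uniform(0.7, 1.3, size=3)  # perturb each weight
    w /= w.sum()                                  # renormalize to sum to 1
    hits += best(w) == baseline

print(f"baseline choice retained in {hits / n:.0%} of perturbations")
```

A low retention percentage would flag a recommendation that rests heavily on weight judgments of exactly the bias-prone kind that behavioural research has documented.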