FORUM

Robustness analysis in practice
 

Jacques Pictet

Bureau AD, Lausanne, Switzerland

 

Introduction

When we try, in our practice, to explain robustness analysis (RA) to the actors, we often use the analogy of a parachutist in a survival test. Not knowing the situation she will face when landing, the parachutist tries to map the area during her descent. This allows her to observe her environment beyond what she will experience at the impact point. If she falls into water, it is vital to know whether she is in a small lake or river, near a shore or in the middle of an ocean. To achieve this, she can only use the limited resources she has: a limited capacity to observe distant objects and a limited time before she must concentrate on the landing.

We are usually required to perform multiple criteria decision analysis (MCDA) for problems with a discrete number of alternatives to be ranked, using either one of the Electre methods or some form of weighted sum. Very often, we face multiple experts and multiple decision-makers. We will therefore consider only these specific cases here, focusing on the practical problems that arise in such situations.

 

Basic patterns

The vast majority of the projects we deal with follows one of two patterns:

·       Planning: There is a planning issue to handle, with many actors – experts and decision-makers – following an ad hoc procedure. Usually, we propose reaching a consensus on the evaluations to consider, while allowing the decision-makers to provide their own weights. We often use Electre III for the aggregation. For an example, see (Bollinger, Pictet, 2003).

·       Public procurement: A public authority prepares a call for tenders and contracts us to help with the mathematical aspects. The existing practice relies on the weighted sum, and we have to deal with specific constraints: (1) the criteria and their importance have to be published with the call for tenders, thus (2) the definition of the weights requires the use of reference tenders, and (3) the representatives of the authority have to agree on a given set of weights, or at least on intervals, for legal reasons[1]. For details, see (Pictet, Bollinger, 2003).

 

These patterns correspond to different ways of working with groups (Belton, Pictet, 1997):

·       The first one usually uses shared evaluations and individual weights: the evaluations are provided by individual experts – and validated by the other actors – while the weights are provided by individual decision-makers. Sometimes, a clear segregation is maintained between these two groups. The robustness analysis can be designed freely, according to the specific situation at hand.

·       The second one uses shared evaluations and weights. Usually, experts and decision-makers work in close contact, or are even the same persons. To obtain shared information, the silent negotiation is sometimes used (Pictet, Bollinger, 2005). Robustness analysis is rather strictly limited by the legal constraints.

 

Group decision and robustness analysis

There is a need to clarify the relationship between group decision and robustness analysis. Both generate multiple results if individual evaluations or, more frequently, individual weights are used. This implies a need to synthesize these results, as we discuss below. But they stem from different origins. In a group decision, different weights represent the diversity of the value systems present in the group, and different evaluations represent the diversity of understanding of the performances. In robustness analysis, different weights and/or evaluations express some of the uncertainties and inaccuracies that are inherent to any modeling activity (Roy, 1989)[2].

So, these two notions should not be confused. The very fact that they can – and possibly have to – be handled with the same tools does not mean that they can be lumped together and treated indiscriminately. They belong to two different facets of decision aid: validation and legitimization (Landry et al., 1996). Robustness analysis is an important part of the validation of an MCDA model, as it tries to identify the extent of the results’ validity, following the experimental principle that “one measure is no measure”. Group decision plays a central role in the legitimization of an MCDA model, as it tries to integrate into the model some aspects that are in direct connection with the on-going social process (Pictet, 1996).

To respect these two facets, we tend to follow these lines: (1) provide decision-makers with individual results, including an RA of their weights, evaluations (individual or shared) and other parameters, and (2) provide them with some support to compare or aggregate individual results, as part of the final negotiation.

 

Practicalities of RA

A robustness analysis is made of two distinct phases. The first one is an “opening” phase, during which tests are performed, generating a certain number of results. The second one is a “closing” phase, during which some form of synthesis tries to capture the essence of the information obtained during the first phase.

 

Opening phase

In practice, there are two main procedures to perform the opening phase of RA. The first one is “manual”: it takes a given set of information as “central” and then moves away from it by mixing in more and more different sets of information, selected in decreasing order of relevance; it can be labeled a “star” (Maystre et al., 1994) or a “concentric circles” procedure. The second one is more “automatic” and aims to test combinations of information sets systematically, in a Monte Carlo fashion.
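
For the weighted-sum case, such an “automatic” opening phase can be sketched as follows. This is only a minimal illustration under our own assumptions – the data, the sampling intervals and names such as central_weights are invented – not a description of any particular software:

import random

# Hypothetical data: evaluations of three alternatives on three criteria
# (higher is better); names and figures are invented for illustration.
evaluations = {
    "A": [0.7, 0.5, 0.9],
    "B": [0.9, 0.4, 0.6],
    "C": [0.6, 0.8, 0.7],
}
central_weights = [0.5, 0.3, 0.2]  # the "central" information set
spread = 0.1                       # how far each weight may drift

n_trials = 10_000
first_counts = {a: 0 for a in evaluations}
for _ in range(n_trials):
    # Draw each weight in an interval around its central value, then normalize.
    w = [max(0.0, random.uniform(c - spread, c + spread)) for c in central_weights]
    total = sum(w)
    w = [x / total for x in w]
    # Weighted-sum score of each alternative under the sampled weights.
    scores = {a: sum(wi * gi for wi, gi in zip(w, g)) for a, g in evaluations.items()}
    first_counts[max(scores, key=scores.get)] += 1

for a in sorted(first_counts):
    print(f"{a}: ranked first in {100 * first_counts[a] / n_trials:.1f}% of trials")

The output gives, for each alternative, the share of sampled weight sets under which it ranks first – a crude but readable robustness indicator.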

So far, we have only practiced the “manual” procedure in planning cases, due to the lack of an “automatic” one for Electre methods[3]. But we are confident that, in the future, this feature will be integrated into any MCDA software. In public procurement cases, we implemented some form of “automatic” procedure, at least to test the impact of a weight variation for the most important criterion. This is easy, due to the linearity of the weighted sum, but other aspects need further consideration.
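
The linearity argument can be made explicit (the notation below is ours, introduced for illustration). Let g_j(a) be the evaluation of tender a on criterion j, let w_1 be the weight of the most important criterion, and suppose the other weights keep their central proportions w_j^0 when w_1 varies. The weighted-sum score is then affine in w_1:

\[
S(a; w_1) = w_1\, g_1(a) + (1 - w_1) \sum_{j \ge 2} \frac{w_j^0}{1 - w_1^0}\, g_j(a).
\]

Consequently, for any two tenders a and b, the difference S(a; w_1) − S(b; w_1) is also affine in w_1 and can change sign at most once, so the whole admissible interval for w_1 can be screened simply by computing the pairwise crossing points.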

 

Closing phase

Providing the decision-maker(s) with a wealth of results is usually seen in the literature as positive, but little is said about how to handle this wealth in order to make sense of it. So far, we have seen three procedures to perform this closing phase. The “ad hoc” one tries to present, in a more or less systematic way, what is robust – or less so – in the results. The “comparing” one puts them side by side – usually for the weighted sum – or overlays them – usually for outranking methods (Pictet et al., 1994) – to allow an overall visual analysis. The “aggregating” one intends to present one single result that summarizes all results.

This last procedure seems more convenient when using the weighted sum, but we are going to propose, in the near future, a solution for the outranking methods (Pictet, Bollinger, 2007). In our view, the various results should be weighted according to the credibility of the information underlying them (e.g. scenarios, parameters), but not according to the assumed importance of the decision-makers.
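
As a simple illustration of what such credibility weighting could look like (our notation, not the forthcoming proposal): if S_k(a) denotes the result obtained for alternative a under information set k, and c_k the credibility attached to that set, a summarizing score could take the form

\[
\bar{S}(a) = \sum_k c_k\, S_k(a), \qquad \sum_k c_k = 1, \quad c_k \ge 0,
\]

where the c_k reflect the plausibility of the underlying scenarios or parameter sets, not the assumed importance of any decision-maker.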

 

Conclusion 

Robustness analysis is an important issue, both for theory and practice. Further research is necessary to specify how to perform it in a sound way. One has to keep in mind that the gain it provides – in terms of a better understanding of the model outcomes – is counterbalanced by the complexity of handling it in a way that actually helps the decision-maker(s).

In the past, RA was problematic, due to the amount of work necessary to perform it. Nowadays, computers can handle it easily. The challenge thus lies more in its relevance for both the validation and the legitimization of the decision model.

 

References

Belton V., Pictet J., 1997, “A framework for group decision using a MCDA model: Sharing, aggregating or comparing?”, Journal of Decision Systems 6(3), pp. 283-303.

Bollinger D., Pictet J., 2003, “Potential use of e-democracy in MCDA processes. Analysis on the basis of a Swiss case”, Journal of Multi-Criteria Decision Analysis 12, pp. 65-76.

Landry M., Banville C., Oral M., 1996, “Model legitimization in operational research”, European Journal of Operational Research 92, pp. 443-457.

Maystre L. Y., Pictet J., Simos J., 1994, Méthodes multicritères Electre. Description, conseils pratiques et cas d’application à la gestion environnementale, Presses polytechniques et universitaires romandes, Lausanne.

Pictet J., 1996, Dépasser l’évaluation environnementale. Procédure d’étude et insertion dans la décision globale, Presses polytechniques et universitaires romandes, Lausanne.

Pictet J., Bollinger D., 2003, Adjuger un marché au mieux-disant. Analyse multicritère, pratique et droit des marchés publics, Presses polytechniques et universitaires romandes, Lausanne.

Pictet J., Bollinger D., 2005, “The silent negotiation or How to obtain collective information for group MCDA without excessive discussion”, Journal of Multi-Criteria Decision Analysis 13, pp. 199-211.

Pictet J., Bollinger D., 2007, “Partial aggregation of partial aggregation results”, (forthcoming).

Pictet J., Maystre L. Y., Simos J., 1994, “Surmesure. An instrument for presentation of results obtained by methods of the Electre and Prométhée families” in Applying multiple criteria aid for decision to environmental management, M. Paruccini (Ed.), Kluwer, Dordrecht, pp. 291-304.

Roy B., 1989, “Main sources of inaccurate determination, uncertainty and imprecision in decision models”, Mathematical and Computer Modelling 12(10/11), pp. 1245-1254.


 

[1]               For instance, European Union legislation requires that the weights, or at least “reasonable” weight intervals, be published.

[2]               We will not discuss here how some of these aspects can be integrated directly into the “basic” evaluations.

[3]               We have recently heard of such a procedure for Electre III, but it seems to rely on its median preorder, which we tend not to use.

 

EWG-MCDA Newsletter, Series 3, No.15, Spring 2007
