Multicriteria Decision Aid in Classification Problems

Constantin Zopounidis and Michael Doumpos

Technical University of Crete, Dept. of Production Engineering and Management, Financial Engineering Laboratory, University Campus, 73100 Chania, Greece

 

1.         Introduction

Classification problems refer to the assignment of a set of alternatives to predefined classes (groups, categories). Such problems arise in many application fields. For instance, in assessing credit card applications the loan officer must evaluate the characteristics of each applicant and decide whether an application should be accepted or rejected. Similar situations are very common in fields such as finance and economics, production management (fault diagnosis), medicine, customer satisfaction measurement, database management and retrieval, etc.

Addressing a classification problem requires the development of a classification model that aggregates the characteristics of the alternatives to provide recommendations on their assignment to the predefined classes. The significance of classification problems has motivated the development of a plethora of techniques for constructing classification models. Statistical techniques dominated the field for many years, but during the last two decades other approaches, mainly from the field of machine learning, have become popular.

The contributions of multicriteria decision aid (MCDA) are mainly focused on the study of multicriteria classification problems (MCPs). MCPs can be distinguished from traditional classification problems studied within the statistical and machine learning framework in two respects (Zopounidis and Doumpos, 2002). The first involves the nature of the characteristics describing the alternatives, which are assumed to have the form of decision criteria, providing not only a description of the alternatives but also some additional preferential information. The second involves the nature of the predefined classification, which is defined in ordinal rather than nominal terms. Classification models developed through statistical and machine learning techniques often fail to address these issues, focusing solely on the accuracy of the results obtained from the model.

The next two sections discuss some important issues in the use and implementation of MCDA classification methods, regarding the existing criteria aggregation forms as well as model development and evaluation.

 

2.         Criteria aggregation models

Within MCDA, several criteria aggregation forms have been proposed for developing decision models. These include relational forms, value functions, and rule-based models.

Relational models are based on the construction of an outranking relation that is used to compare the alternatives with some reference profiles characterizing each class. The reference profiles are either typical examples (alternatives) of each class or examples that define the upper/lower bounds of the classes. Some typical examples of this approach include methods such as ELECTRE TRI (Roy and Bouyssou, 1993), PROAFTN (Belacel, 2000), PAIRCLAS (Doumpos and Zopounidis, 2004), and PROMETHEE TRI (Figueira et al., 2004). The main advantage of this approach is that it enables the decision maker (DM) to take into account the non-compensatory character of the decision process and to identify alternatives with special characteristics through the incorporation of the incomparability relation in the analysis. On the other hand, the construction of the outranking relation requires the specification of a considerable amount of information, which is not always easy to obtain.
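
To make the assignment mechanics concrete, the following is a minimal sketch (in Python) of a pessimistic, profile-based assignment rule in the spirit of ELECTRE TRI. Only a weighted concordance test is used, discordance/veto effects are omitted, and the weights, profiles, and cutting level are illustrative assumptions rather than values taken from any of the methods cited above.

# Minimal sketch of a profile-based assignment rule in the spirit of
# ELECTRE TRI (pessimistic procedure). Only a weighted concordance test
# is used; discordance/veto effects are omitted, and the weights,
# profiles, and cutting level are purely illustrative.

def concordance(a, profile, weights):
    """Total weight of the criteria supporting 'a outranks profile'
    (all criteria assumed to be in increasing preference order)."""
    return sum(w for g_a, g_p, w in zip(a, profile, weights) if g_a >= g_p)

def assign(a, profiles, weights, cut_level=0.7):
    """Assign alternative 'a' to the highest class whose lower boundary
    profile it outranks; 'profiles' lists the lower boundary of each
    class, ordered from the best class to the worst."""
    for k, profile in enumerate(profiles):
        if concordance(a, profile, weights) >= cut_level:
            return k              # class index: 0 = best class
    return len(profiles)          # worst class: no profile is outranked

# Illustrative use: 3 criteria, 2 boundary profiles -> 3 ordered classes.
weights = [0.5, 0.3, 0.2]                    # normalized criteria weights
profiles = [[8, 7, 6], [5, 4, 3]]            # lower bounds of classes 0, 1
print(assign([9, 6, 7], profiles, weights))  # -> 0 (outranks top profile)
print(assign([4, 5, 2], profiles, weights))  # -> 2 (outranks no profile)

The optimistic variant of ELECTRE TRI proceeds in the opposite direction, from the worst profile upwards, and the two assignments may differ when incomparability is present.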

Value functions have also been quite popular as a criteria aggregation model in classification problems. This approach provides a straightforward methodology for performing the classification of the alternatives: each alternative is evaluated according to the constructed value function, and its global evaluation is compared to some value cut-off points in order to perform the assignment to one of the predefined classes. Due to their simplicity, linear or additive value functions are usually considered (Jacquet-Lagrèze, 1995; Zopounidis and Doumpos, 1999, 2000). These provide a simple evaluation mechanism that is generally easy to understand and implement. However, there has been criticism of the assumptions underlying the use of such simple models and of their ability to capture the interactions between the criteria.
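
As a brief illustration, in the additive case the classification mechanism can be written as follows (the notation is generic rather than tied to any particular method):

V(a) = \sum_{i=1}^{n} w_i \, v_i\bigl(g_i(a)\bigr), \qquad a \in C_k \iff t_k \le V(a) < t_{k-1},

where g_i(a) is the evaluation of alternative a on criterion i, v_i is the corresponding marginal value function, w_i is the trade-off weight of criterion i, the classes C_1, C_2, ..., C_q are ordered from best to worst, and t_1 > t_2 > ... > t_{q-1} are the value cut-off points (with t_0 = +\infty and t_q = -\infty by convention).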

Rule-based models provide a completely different point of view compared to the previous two approaches. Rule-based models are function-free and are usually expressed in symbolic forms, such as “if … then …” decision rules. Recently, a complete and well-axiomatized methodology, based on rough sets theory, has been proposed in this framework for constructing decision rule preference models from decision examples (Greco et al., 1999, 2001). Each “if … then …” decision rule consists of a condition part, specifying a partial profile on a subset of criteria to which an alternative is compared using the dominance relation, and a decision part, suggesting an assignment of the alternative to “at least” or “at most” a given class. The main advantage of rule-based models is their natural and easy interpretation. On the other hand, such models do not provide some form of performance index that would enable the DM to assess the relative performance of the alternatives. Such information is often needed as a complement to the classification of the alternatives for further decision support.
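
For instance, assuming criteria measured on a 1-10 scale (a purely hypothetical setting), an induced rule might read: “if g_1(a) >= 7 and g_3(a) >= 6, then a belongs to at least class C_2”, meaning that any alternative that matches or exceeds the partial profile (7 on criterion g_1, 6 on criterion g_3) is assigned to class C_2 or better.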

Clearly, there are several available specifications for the general form of a multicriteria classification model, as well as several variations of the general schemes described above. Each approach has advantages and disadvantages, and it would be impossible to provide a clear recommendation for the most appropriate form. The choice depends solely on the requirements of each decision situation and the nature of the classification problem under consideration.

 

3.         Model development and validation

The development and evaluation of a model are crucial steps in addressing a classification problem. Model development involves the specification of the parameters of the model, whereas model evaluation refers to the analysis of the characteristics of the final model regarding its interpretability and performance.

Within the traditional MCDA paradigm it is assumed that the model is developed through co-operation between the decision analyst and the DM. In this case the DM specifies all the preferential information that is required to structure and implement the model. For problems of limited size (a small number of alternatives and criteria), as well as for problems of a non-repetitive character, this can be a feasible process. In many cases, however, implementing such an approach is cumbersome with regard to the cognitive effort required of the DM and the time needed to elicit the preferential information.

Preference disaggregation techniques (Jacquet-Lagrèze and Siskos, 2001) have been successfully applied to address these issues in classification problems. Within this context the DM is asked to provide some representative decision examples (reference alternatives). These examples involve alternatives that are evaluated by the DM and classified into the predefined classes. Thus, each example and its classification provide a representation of the DM’s judgment policy and preferential system. Given that a sufficient number of examples is available, it is possible to perform a disaggregation analysis in order to identify the parameters of the model, such that the model’s results are as consistent as possible with the DM’s classification of the reference alternatives.
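
Schematically, if R = {a_1, ..., a_m} denotes the set of reference alternatives, C(a_j) the DM’s classification of alternative a_j, and \hat{C}_\theta(a_j) the classification produced by a model with parameters \theta, the disaggregation analysis solves

\min_{\theta} \; F\bigl( \hat{C}_\theta(a_1), \ldots, \hat{C}_\theta(a_m); \; C(a_1), \ldots, C(a_m) \bigr),

where F is a measure of the differences between the model’s assignments and the DM’s assignments. The choice of F and the way the optimization is carried out are precisely the two issues discussed next.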

In adopting this kind of approach there are two issues that should be carefully considered. The first involves the measures used to assess the consistency of the model’s results, whereas the second involves the way that the disaggregation is implemented to optimize consistency.

The consistency measure most widely used in this optimization process is the classification error rate, i.e., the proportion of the reference alternatives for which there is a disagreement between the model’s outputs and the DM’s classification. A number of alternative measures have also been proposed (e.g., the receiver operating characteristic curve); this remains an active topic in classification research (Schiavo and Hand, 2000).
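
In its simplest form, for a reference set of m alternatives the error rate can be written as

E = \frac{1}{m} \sum_{j=1}^{m} I\bigl[ \hat{C}(a_j) \neq C(a_j) \bigr],

where C(a_j) is the class assigned to alternative a_j by the DM, \hat{C}(a_j) is the class recommended by the model, and I[\cdot] is the indicator function, equal to 1 when its argument holds and 0 otherwise.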

Given a selected consistency measure, mathematical programming techniques (linear and non-linear) have become popular over the past few years as an efficient approach to model development. These involve the solution of appropriate optimization problems to identify the optimal parameters of the models, i.e., the parameters that maximize the selected consistency measure. Several linear and non-linear programming formulations have been proposed within this context to develop MCDA classification models expressed in relational or functional form (Dias et al., 2002; Mousseau and Slowinski, 1998; Zopounidis and Doumpos, 1999, 2000). Rule induction algorithms have also been proposed for rule-based models (Greco et al., 1999, 2001).
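
As an illustration of this kind of formulation, the following sketch fits a linear value function to a two-class set of reference examples by minimizing the total classification error with a linear program. The data, the separation tolerance delta, and the use of scipy’s solver are assumptions made purely for the example; the formulation is only in the spirit of the disaggregation programs cited above, not a reproduction of any particular one.

# Minimal preference disaggregation sketch: fit the weights w and the
# cut-off t of a linear value function V(a) = w . g(a) to reference
# alternatives pre-classified by the DM into two ordered classes.
import numpy as np
from scipy.optimize import linprog

# Hypothetical reference alternatives, criteria scaled to [0, 1].
X_good = np.array([[0.9, 0.8, 0.7], [0.8, 0.9, 0.6]])  # DM: class "good"
X_bad = np.array([[0.3, 0.4, 0.2], [0.5, 0.2, 0.3]])   # DM: class "bad"
n = X_good.shape[1]                  # number of criteria
m = len(X_good) + len(X_bad)         # number of reference alternatives
delta = 0.01                         # small separation tolerance

# Decision variables: [w_1..w_n, t, e_1..e_m]; minimize the total error.
c = np.concatenate([np.zeros(n + 1), np.ones(m)])

# "good" a_j: w.x_j >= t + delta - e_j  ->  -w.x_j + t - e_j <= -delta
# "bad"  a_j: w.x_j <= t - delta + e_j  ->   w.x_j - t - e_j <= -delta
A_ub, b_ub = [], []
for j, x in enumerate(X_good):
    row = np.concatenate([-x, [1.0], np.zeros(m)])
    row[n + 1 + j] = -1.0
    A_ub.append(row)
    b_ub.append(-delta)
for j, x in enumerate(X_bad):
    row = np.concatenate([x, [-1.0], np.zeros(m)])
    row[n + 1 + len(X_good) + j] = -1.0
    A_ub.append(row)
    b_ub.append(-delta)

# Weights sum to one; weights, cut-off, and errors are non-negative.
A_eq = [np.concatenate([np.ones(n), np.zeros(1 + m)])]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1 + m))
w, t = res.x[:n], res.x[n]
print("weights:", w.round(3), "cut-off:", round(t, 3))

A zero objective value indicates that the fitted function reproduces the DM’s classification exactly, whereas positive error variables identify the reference alternatives that remain inconsistently classified by the optimal model.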

Of course, it should be emphasized that the definition and optimization of a consistency measure for the development of multicriteria classification models in a preference disaggregation context is not a straightforward process. One should not consider the development of a multicriteria classification model as a simple process in which some input data are fed to an optimization procedure that returns the optimal model. Careful analysis of the estimated model’s parameters is required to ensure that they are in accordance with the DM’s preferential system. This is a crucial point, since it is often observed that a model can be highly consistent with the classification of the reference alternatives, yet its parameters are difficult to interpret from the DM’s point of view. A classification model that fails this kind of validation is highly likely to be useless in practice, either because the DM does not feel confident in the structure of the model or because the model’s results are incorrect. Additional validation and verification of the model’s performance is also often necessary, using new decision examples other than the reference alternatives used during model development.

A final important issue that needs to be stressed involves scalability and computational efficiency. As the volume of data increases, the tools and procedures used for developing classification models should be able to accommodate the need to handle large data sets in an efficient way. The significance of this issue is highlighted by the fact that the development of a classification model is performed through an iterative and interactive process. The implementation of such a process in real time for large data sets can only be achieved if the techniques used for model development are computationally efficient.

 

4.         Conclusions and perspectives

The research on classification problems has evolved rapidly over the past two decades. The MCDA paradigm has contributed positively to addressing classification problems with a multicriteria character. However, there are still several interesting topics that need further investigation. Up to now, most MCDA studies have focused on the development of new MCDA classification methods and new techniques for model development. Future research should consider issues such as the validation of the new methods and techniques that are developed, the analysis of their parameters, their extension to large data sets, the connections between MCDA research and other disciplines related to classification problems, as well as the analysis and reconsideration of the consistency measures used for the development of multicriteria classification models. Other issues include the robustness of the models to changes in the problem data or in the parameters of the methods, the modeling of classification problems in dynamic decision environments, as well as the development of methods to assess the information that each criterion provides in a classification context.

 

References

Belacel, N. (2000), “Multicriteria assignment method PROAFTN: Methodology and medical applications”, European Journal of Operational Research, 125, 175-183.

Dias, L., Mousseau, V., Figueira, J., Climaco, J. (2002), “An aggregation/ disaggregation approach to obtain robust conclusions with ELECTRE TRI”, European Journal of Operational Research, 138(2), 332-348.

Doumpos, M. and Zopounidis, C. (2004), “A multicriteria classification approach based on pairwise comparisons”, European Journal of Operational Research, 158, 378-389.

Figueira, J., De Smet, Y. and Brans, J.P. (2004), “MCDA methods for sorting and clustering problems: Promethee TRI and Promethee CLUSTER”, Université Libre de Bruxelles, Service de Mathématiques de la Gestion, Working Paper 2004/02 (http://www.ulb.ac.be/polytech/smg/indexpublications.htm).

Greco, S., Matarazzo, B. and Slowinski, R. (1999), “The use of rough sets and fuzzy sets in MCDM”, in: T. Gal, T. Hanne and T. Stewart (eds.), Advances in Multiple Criteria Decision Making, Kluwer Academic Publishers, Dordrecht, 14.1-14.59.

Greco, S., Matarazzo, B. and Slowinski, R. (2001), “Rough sets theory for multicriteria decision analysis”, European Journal of Operational Research, 129, 1-47.

Jacquet-Lagrèze, E. (1995), “An application of the UTA discriminant model for the evaluation of R & D projects”, in: P.M. Pardalos, Y. Siskos, C. Zopounidis (eds.), Advances in Multicriteria Analysis, Kluwer Academic Publishers, Dordrecht, 203-211.

Jacquet-Lagrèze, E. and Siskos, J. (2001), “Preference disaggregation: Twenty years of MCDA experience”, European Journal of Operational Research, 130, 233-245.

Mousseau, V. and Slowinski, R. (1998), “Inferring an ELECTRE-TRI model from assignment examples”, Journal of Global Optimization, 12(2), 157-174.

Roy, B. and Bouyssou, D. (1993), Aide Multicritère à la Décision: Méthodes et Cas, Economica, Paris.

Schiavo, R.A. and Hand, D.J. (2000), “Ten more years of error rate research”, International Statistical Review, 68(3), 295-310.

Zopounidis, C. and Doumpos, M. (1999), “A multicriteria decision aid methodology for sorting decision problems: The case of financial distress”, Computational Economics, 14(3), 197-218.

Zopounidis, C. and Doumpos, M. (2000), “Building additive utilities for multi-group hierarchical discrimination: The MHDIS method”, Optimization Methods and Software, 14(3), 219-240.

Zopounidis, C. and Doumpos, M. (2002), “Multicriteria classification and sorting methods: A literature review”, European Journal of Operational Research, 138(2), 229-246.


EWG-MCDA Newsletter, Series 3, No. 10, Fall 2004