Preference elicitation is a well-known bottleneck in decision analysis and decision-automation tasks. The best description of a user's preferences is a utility function; however, obtaining a good utility function from a lay user requires the assistance of an expert. For example, suppose that you want to help a customer choose an appropriate configuration for a PC, select the vacation that suits her best, or search a large online database with whose content she is not familiar. An expert decision analyst will not be around. This is why we have been developing the class of xCP-Networks, a family of qualitative and quantitative preference models based on intuitive statements that lay users find natural. The aim of our work is to provide a tool that lay users can use effectively without the mediation of an expert.
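The flavor of such models can be illustrated with a small CP-net-style sketch (a hypothetical toy example, not the xCP-Networks tool itself; the variables and values are invented). Each variable carries a conditional preference table, and an optimal outcome is found by a forward sweep that assigns each variable its most preferred value given the values already chosen for its parents:

```python
# Toy CP-net: each variable has parents and a conditional preference
# table (CPT) mapping parent assignments to values, best first.
cpnet = {
    "vacation_type": {"parents": [], "cpt": {(): ["beach", "city"]}},
    "season": {"parents": ["vacation_type"],
               "cpt": {("beach",): ["summer", "winter"],
                       ("city",): ["winter", "summer"]}},
}

def optimal_outcome(cpnet):
    """Forward sweep: visit variables in an order consistent with the
    parent relation, picking each variable's most preferred value
    given the values already chosen for its parents."""
    outcome = {}
    remaining = dict(cpnet)
    while remaining:
        for var, spec in list(remaining.items()):
            if all(p in outcome for p in spec["parents"]):
                key = tuple(outcome[p] for p in spec["parents"])
                outcome[var] = spec["cpt"][key][0]  # best value
                del remaining[var]
    return outcome

print(optimal_outcome(cpnet))
# -> {'vacation_type': 'beach', 'season': 'summer'}
```

The point of the sketch is that the user only supplies local, conditional statements ("at the beach, I prefer summer to winter"), never a full utility function.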
In the past few years, we have provided interesting algorithms, complexity results, and applications in the areas of content management and personalization, rich-media messaging, and airline reservations, and we are now working on a more sophisticated methodology for obtaining even better models. Our approach yields a new class of knowledge-based recommender systems.
Planning is concerned with the generation of plans, i.e., specifications of behavior that will lead a system to achieve certain goals. Thus, instead of telling the system what to do, you combine a model of the system's operations with a set of goals, and the planning algorithm generates an appropriate behavior. I have worked on various models in this area, including the classical one (where the world is deterministic) and the decision-theoretic one (where we use stochastic models and more complex goal descriptions). Currently, I am looking at the integration of planning and preference-based optimization, with the aim of providing algorithms that generate an optimal plan rather than just any plan.
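The classical setting can be made concrete with a minimal forward state-space planner over STRIPS-style actions (a sketch only; the domain, action names, and propositions are invented for illustration):

```python
from collections import deque

# Toy STRIPS-style actions: (name, preconditions, add list, delete list),
# all over sets of propositions. The blocks-world flavor is illustrative.
ACTIONS = [
    ("pick_up", {"hand_empty", "block_on_table"}, {"holding_block"},
     {"hand_empty", "block_on_table"}),
    ("put_down", {"holding_block"}, {"hand_empty", "block_on_table"},
     {"holding_block"}),
]

def plan(initial, goal, actions):
    """Breadth-first search over states: returns a shortest action
    sequence reaching a state that satisfies the goal, or None."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # goal satisfied
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                   # action applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"hand_empty", "block_on_table"}, {"holding_block"}, ACTIONS))
# -> ['pick_up']
```

Here the "model of the system's operations" is the action set and the goal is a set of propositions; the search, not the user, decides what to do.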
Reinforcement learning is concerned with the problem of learning to behave in an unknown environment based on the feedback we receive (e.g., the success or failure of our actions). This task becomes even more difficult in environments that contain other agents. R-max, an algorithm developed with Moshe Tennenholtz, is the first algorithm for learning in competitive multi-agent environments that has theoretical guarantees on both its effectiveness and its efficiency. We also have extensions of it for cooperative settings.
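The key idea behind R-max, optimism in the face of uncertainty, can be sketched in a simplified tabular, single-agent form (the actual algorithm is formulated for stochastic games; the constants and class interface below are illustrative assumptions): state-action pairs that have not been sampled enough are assumed to yield the maximum reward, which systematically drives the agent to explore them.

```python
R_MAX = 1.0           # assumed maximum one-step reward
KNOWN_THRESHOLD = 5   # visits before a state-action pair is "known"
GAMMA = 0.95          # discount factor

class RMaxAgent:
    def __init__(self, states, actions):
        self.states, self.actions = states, actions
        self.counts = {(s, a): 0 for s in states for a in actions}
        self.trans = {}     # (s, a) -> {next_state: count}
        self.rewards = {}   # (s, a) -> summed observed reward

    def update(self, s, a, r, s2):
        """Record a transition until the pair becomes known, then freeze."""
        if self.counts[(s, a)] < KNOWN_THRESHOLD:
            self.counts[(s, a)] += 1
            self.rewards[(s, a)] = self.rewards.get((s, a), 0.0) + r
            self.trans.setdefault((s, a), {})
            self.trans[(s, a)][s2] = self.trans[(s, a)].get(s2, 0) + 1

    def q_values(self, iters=100):
        """Value iteration on the optimistic model: known pairs use
        empirical estimates; unknown pairs are assumed to earn R_MAX
        forever, making them look maximally attractive."""
        v = {s: 0.0 for s in self.states}
        q = {}
        for _ in range(iters):
            for s in self.states:
                for a in self.actions:
                    n = self.counts[(s, a)]
                    if n < KNOWN_THRESHOLD:
                        q[(s, a)] = R_MAX / (1 - GAMMA)   # optimism
                    else:
                        r = self.rewards[(s, a)] / n
                        q[(s, a)] = r + GAMMA * sum(
                            c / n * v[s2]
                            for s2, c in self.trans[(s, a)].items())
            v = {s: max(q[(s, a)] for a in self.actions)
                 for s in self.states}
        return q

    def act(self, s):
        q = self.q_values()
        return max(self.actions, key=lambda a: q[(s, a)])
```

For example, an agent that has fully observed action "a" earning zero reward will still prefer the unexplored action "b", because the optimistic model credits it with the maximum achievable value.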
I am also interested in techniques for simplifying and solving propositional formulas, and have done work in the area of logics of knowledge and belief.