Charles University, Institute of Philosophy

SEGA

Abstracts for the Project Workshop in Bayreuth, March 30–31, 2017

Thomas Ågotnes: True Lies

The relationship between truth and aggregation (or other forms of communication) is fickle: the truth value of a formula may change as a result of being communicated. The best-known example is the Moore formula, a formula that becomes false when truthfully announced. In this talk I will introduce the "dual" notion of true lies: lies that become true when announced. In a logic of announcements in which the announcing agent is not modelled, a true lie is a formula that is false and that becomes true when announced. We investigate true lies and other types of interaction between announced formulas, their preconditions and their postconditions, in the setting of a logic of believed announcements, in which agents may have or acquire incorrect beliefs. Our results concern the satisfiability and validity of instantiations of these semantically defined categories, iterated announcements (including arbitrarily often iterated announcements), and syntactic characterizations. We close with results for iterated announcements in the logic of knowledge (instead of belief), and for lying as private announcements (instead of public announcements) to different agents. Detailed examples illustrate our lying concepts. The talk is joint work with Hans van Ditmarsch and Yanjing Wang.
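
For concreteness (this illustration is mine, not part of the abstract), the Moore formula for an agent a and an atomic proposition p can be written as

    \varphi \;:=\; p \wedge \neg K_a p

The formula can be true, yet after its truthful public announcement agent a knows p, so [!\varphi]\neg\varphi is valid: the announcement refutes what it announces. A true lie, in the sense of the abstract, is the mirror image: a formula \chi that is false but satisfies [!\chi]\chi in the logic of believed announcements.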

Zoé Christoff: Stability in Binary Opinion Diffusion

This is joint work with Davide Grossi (University of Liverpool).
We study the logical form of opinion diffusion stabilization in influence networks. First, we show how diffusion processes can be represented in binary judgment aggregation and can equivalently be studied through neighborhood structures. This allows us to obtain a general characterization of stabilization in terms of neighborhood properties.
We then show that this notion of stabilization can be captured by the modal mu-calculus interpreted on monotone neighborhoods. Finally, we illustrate the scope of this general logical form by making explicit what it corresponds to when restricted to well-known specific examples of diffusion models.
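
As a purely illustrative aside (the update rule and network encoding below are my assumptions, not the speakers'), a minimal simulation of one common binary diffusion rule, synchronous strict-majority adoption of the influencers' opinions, together with a stabilization check, could look as follows:

    # Minimal sketch: synchronous binary opinion diffusion on an influence network.
    # Assumptions (mine): strict-majority rule; an agent keeps its opinion on a tie
    # or when it has no influencers.

    def step(opinions, influencers):
        """One synchronous update round.

        opinions:    dict mapping agent -> 0 or 1
        influencers: dict mapping agent -> list of agents that influence it
        """
        new = {}
        for agent, value in opinions.items():
            inf = influencers.get(agent, [])
            if not inf:
                new[agent] = value
                continue
            ones = sum(opinions[i] for i in inf)
            if 2 * ones > len(inf):
                new[agent] = 1
            elif 2 * ones < len(inf):
                new[agent] = 0
            else:
                new[agent] = value  # tie: keep the current opinion
        return new

    def stabilizes(opinions, influencers, max_steps=100):
        """Iterate until a fixed point is reached or the step bound is exceeded."""
        for _ in range(max_steps):
            updated = step(opinions, influencers)
            if updated == opinions:
                return True, opinions
            opinions = updated
        return False, opinions

    # A 3-cycle of influence never stabilizes from this profile: the single deviant
    # opinion keeps rotating around the cycle.
    influencers = {"a": ["b"], "b": ["c"], "c": ["a"]}
    print(stabilizes({"a": 1, "b": 0, "c": 0}, influencers))

The non-stabilizing example shows the kind of network feature (here, an influence cycle) that a characterization of stabilization has to account for.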

Nina Gierasimczuk: Merging Towards the Truth: An Overview

Striving for true beliefs is one of the most important issues in the field of belief revision. Adjusting one's beliefs in the light of new information can be viewed as a learning process. The policies driving such changes vary, however, with respect to their truth-tracking capacities. Recent work combining dynamic logics of belief and belief change with formal learning theory sheds light on this issue [1,2,3]. On the other hand, the setting of dynamic epistemic logic allows formalising dynamic operations of belief merge [4]. In this talk I will explore the possibilities of applying the learning-theoretic setting to iterated belief merge operations. The overall goal is to see under what conditions belief merge operations can facilitate truth-tracking. This is joint work with Zoé Christoff.

1. Gierasimczuk; Bridging Learning Theory and Dynamic Epistemic Logic; Synthese 169, 2009.
2. Gierasimczuk; Knowing One’s Limits; PhD Thesis, ILLC UvA, 2010.
3. Baltag, Gierasimczuk and Smets; Belief Revision as a Truth-Tracking Process; in: TARK, 2011.
4. Baltag and Smets; Protocols for Belief Merge: Reaching Agreement via Communication; Logic Journal of the IGPL 21(3), 2013.

Umberto Grandi: Opinion diffusion as aggregation

Classical models of opinion diffusion on social networks are based on simple representations of opinions, such as 0-1 decisions or real numbers. Existing diffusion models are therefore not well suited to deal with complex or qualitative representations of individual opinions, such as preferences (e.g., linear or weak orders over a set of alternatives), qualitative beliefs, or binary views over interconnected issues. In this talk I will survey two models of opinion diffusion grounded in techniques from preference and judgment aggregation. Both models are discrete-time iterative processes, where at every step one or more individuals perform an opinion update by aggregating the opinions of their influencers as defined by the network. I will present a number of results on the termination of these iterative processes, depending on the topology of the network and on the aggregation procedure used, as well as on the properties of the opinion profiles at termination (consensus, "aligned" profiles, etc.).
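
As an illustrative sketch only (the specifics below are my assumptions, not the speaker's models), an aggregation-based update over several binary issues, with one agent updating per step by taking the issue-wise strict majority of its influencers' views, could be coded as follows; logical constraints between the issues are ignored here for simplicity:

    # Illustrative sketch: asynchronous multi-issue diffusion via issue-wise majority.
    # Assumptions (mine): fixed turn order; ties keep the agent's current view;
    # no integrity constraint linking the issues.

    def aggregate(views, old):
        """Issue-wise strict majority of the influencers' views; ties keep the old value."""
        result = []
        for k, current in enumerate(old):
            ones = sum(v[k] for v in views)
            if 2 * ones > len(views):
                result.append(1)
            elif 2 * ones < len(views):
                result.append(0)
            else:
                result.append(current)
        return tuple(result)

    def run(profile, influencers, rounds=50):
        """profile: agent -> tuple of 0/1 views; influencers: agent -> list of agents."""
        agents = sorted(profile)
        for _ in range(rounds):
            changed = False
            for a in agents:  # one agent updates at a time, in a fixed order
                inf = influencers.get(a, [])
                if not inf:
                    continue
                new = aggregate([profile[i] for i in inf], profile[a])
                if new != profile[a]:
                    profile[a] = new
                    changed = True
            if not changed:
                return profile  # termination: no agent wants to change any more
        return None  # no termination within the given bound

    profile = {"a": (1, 1, 0), "b": (0, 1, 1), "c": (1, 0, 1)}
    influencers = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
    print(run(profile, influencers))  # terminates in the consensus (1, 1, 1)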

Norbert Gratzl/Ondrej Majer/Olivier Roy: Distributed Beliefs for Non-Standard Reasoners

We study the theory of distributed beliefs for groups whose members might have inconsistent or incomplete attitudes. We first look at distributed belief in neighborhood semantics in general, and provide a sound, complete and cut-free labeled calculus for this modality. We then provide a foundation for this type of non-normal distributed belief in terms of the aggregation of non-standard probabilities.

Marcel Kiel: Judgment Aggregation and Minimal Change - A Model for Reaching a Consensus by Revising Beliefs

When a group of agents attempts to reach an agreement on certain issues, it is usually desirable that the resulting consensus be as close as possible to the original judgments of the individuals. However, when these judgments are logically connected to further beliefs, the notion of closeness should also take into account the extent to which the individuals would have to revise their entire belief sets to reach an agreement. In this work, we present a model for generating agreement with respect to a given agenda which allows individual epistemic entrenchment to influence the value of the consensus. While the postulates for the transformation function and its construction resemble those of AGM belief revision, the notion of an agenda is adapted from the theory of judgment aggregation. This allows our model to generalize both frameworks.
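
As background only (this is the familiar distance-based baseline from judgment aggregation, not the model proposed in the talk), closeness to the original judgments is often made precise via Hamming distance: given an agenda A and individual judgment sets J_1, ..., J_n, a distance-based consensus is a consistent judgment set minimizing the summed distance,

    J^{*} \in \operatorname*{arg\,min}_{J \text{ consistent}} \sum_{i=1}^{n} d_H(J, J_i),
    \qquad d_H(J, J_i) = |\{ \varphi \in A : J(\varphi) \neq J_i(\varphi) \}|.

As I read the abstract, the proposed model refines this picture by letting each individual's epistemic entrenchment determine how costly a particular change of judgment is for that individual.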

Dominik Klein: A metric-based approach to multi-agent belief revision

It has long been known that classic AGM belief revision is closely connected to a variety of distance-based approaches. While classic belief revision works on a propositional language, there have been some recent attempts to generalize the framework to languages with modal operators. Existing work in this area, however, has so far been limited to special cases. In this paper, we introduce a general metric-based belief revision method for modal languages.

This paper falls into two parts. In the first part, we introduce a new family of metrics on the set of Kripke models. More specifically, we give a recipe for defining metrics on the space of pointed Kripke models, tailored to the needs and interests of the user. We then show various formal results about the structure of the resulting topological space; for instance, the metric turns the space of pointed Kripke models into a Stone space. In the second half of the talk we show how the metrics defined naturally translate into belief revision functions. We explore several properties of these belief revision functions. In particular, we show that our approach generalizes a recent family of distance-based approaches to belief revision on modal languages (Caridroit et al., 2016).
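
One familiar instance of such a metric (an illustration of the general idea, not necessarily the family defined in the talk) measures how deep two pointed models agree modally. Over a finite vocabulary, let n((M,w),(N,v)) be the least modal depth at which some formula distinguishes (M,w) from (N,v), and set

    d((M,w),(N,v)) =
      \begin{cases}
        0 & \text{if no modal formula distinguishes } (M,w) \text{ from } (N,v),\\
        2^{-n((M,w),(N,v))} & \text{otherwise.}
      \end{cases}

This d is an ultrametric on pointed models up to modal equivalence, and the resulting totally disconnected topology is the kind of structure behind the Stone-space result mentioned above.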

Michal Peliš: Dynamic epistemic erotetic logic

In the book [Pelis, chapter 4] I suggested answer mining in a group of agents for distributed knowledge. However, this was done without an asking-and-answering 'mechanism'. The paper [2] now provides such a mechanism for the single-agent variant of epistemic erotetic logic (introduced in [1]). It would be desirable to develop a similar approach for the multi-agent version; here, we have just a sketch.

Adam Přenosil: Contradictory information as a basis for rational belief

In this talk, we shall try to provide a formal definition of what it means for a belief to be rational given a potentially contradictory body of information. In particular, we shall assume that the contradictions in our evidence are the result of random noise and try to reconstruct the most probable states of affairs given such distorted evidence. This yields a theory which, in a sense, forms the best consistent approximation to the inconsistent information available to us. The associated consequence relation turns out to be closely related to Priest's so-called minimally inconsistent Logic of Paradox. We also discuss potential applications of this approach to belief revision theory.
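
For background (this recap is mine, not part of the abstract): in the Logic of Paradox, formulas take values in {t, b, f} with t and b designated, and minimally inconsistent LP keeps only those models of the premises with as few "gluts" as possible. Writing M! for the set of atoms receiving the value b in a model M, the consequence relation can be sketched as

    \Gamma \models_{m} \varphi \iff \varphi \text{ is designated in every LP model } M \text{ of } \Gamma
    \text{ such that no LP model } N \text{ of } \Gamma \text{ has } N! \subsetneq M!.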

Vít Punčochář and Igor Sedlár: Substructural logics with distributed knowledge

In this paper we will extend the language of substructural logics with the modality of distributed knowledge. We will provide a general framework that allows us to add distributed knowledge to systems that extend the logic known as Full Lambek. We will also provide a semantics and a sound and complete deductive calculus for Full Lambek with distributed knowledge.

Igor Sedlár: Merging mutually inconsistent inputs

We provide a simple formalisation of merging mutually inconsistent inputs (beliefs, plans, etc.) in a group of agents. The formalisation modifies the standard modal approach based on intersections of accessibility relations. We add an explicit representation of agents' willingness to compromise when facing contradictory inputs from other agents in a group. Instead of a single accessibility relation for each agent, we use a finite number of relations forming a chain with respect to set inclusion.
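
For context, the standard intersection-based clause that the abstract refers to reads

    M, w \models D_G \varphi \iff M, v \models \varphi \text{ for all } v \in \bigcap_{a \in G} R_a(w).

On one natural reading of the proposal (my gloss; the details in the talk may differ), each agent a instead comes with a chain R_a^1 \subseteq R_a^2 \subseteq \dots \subseteq R_a^k, and the merged attitude at w is evaluated at the least level m for which \bigcap_{a \in G} R_a^m(w) is non-empty, so that greater willingness to compromise corresponds to moving further up the chain.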

Igor Sedlár: Pooling information in substructural epistemic logics

In my "Substructural epistemic logics" (J Appl Non-Class Log, 25, 2015) I formulate a framework that combines normal modal epistemic logic with substructural logics. The framework represents bodies of information available to agents explicitly (as states in relational models for substructural logics) and lifts the assumption that such bodies are closed under classical inference rules. In this talk I will explore some ways how pooling information available to agents in a group can be represented in the framework. Relations with similar frameworks presented at the workshop will be discussed as well.

Marija Slavkovik: Iterative judgment aggregation

Judgment aggregation problems form a class of collective decision-making problems represented in an abstract way, subsuming some well-known problems such as voting. A collective decision can be reached in many ways, but a direct one-step aggregation of individual decisions is arguably the most studied. Another way to reach collective decisions is by iterative consensus building: allowing each decision-maker to change their individual decision in response to the choices of the other agents until a consensus is reached. Iterative consensus building has so far only been studied for voting problems. Here we propose an iterative judgment aggregation algorithm, based on movements in an undirected graph, and we study for which instances it terminates with a consensus. We also compare the computational complexity of our iterative procedure with that of related judgment aggregation operators.
