Published on Mon Sep 09 2019

Formulating Manipulable Argumentation with Intra-/Inter-Agent Preferences

Ryuta Arisaka, Makoto Hagiwara, Takayuki Ito

We develop an argumentation-theoretic model for manipulable multi-agent argumentation, in which each agent may transmit deceptive information to others for tactical motives. We show how detected deception or honesty would alter an agent's perceived trustworthiness.

Abstract

From marketing to politics, exploitation of incomplete information through selective communication of arguments is ubiquitous. In this work, we focus on the development of an argumentation-theoretic model for manipulable multi-agent argumentation, where each agent may transmit deceptive information to others for tactical motives. In particular, we study the characterisation of epistemic states and their roles in deception/honesty detection and (mis)trust-building. To this end, we propose the use of intra-agent preferences to handle deception/honesty detection and inter-agent preferences to determine which agent(s) to believe in more. We show how deception/honesty in an agent's argumentation, if detected, would alter that agent's perceived trustworthiness, and how that may affect other agents' judgement as to which arguments should be acceptable.
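To make the abstract's moving parts concrete, here is a minimal Python sketch of one way intra- and inter-agent preferences could interact; it is an illustrative assumption on our part, not the paper's formalism. Trust scores stand in for inter-agent preferences, a self-contradiction check stands in for intra-agent deception detection, and all names and numbers are hypothetical.

```python
# Illustrative sketch only -- NOT the authors' formalism. Assumptions:
# each argument is tagged with the agent asserting it; an attack becomes
# a defeat for an observer only if the observer trusts the attacker's
# agent at least as much as the target's agent; catching an agent
# asserting mutually attacking arguments (a crude deception signal)
# discounts that agent's trust.

ARGS = {"a1": "A", "a2": "A", "b1": "B"}     # argument -> asserting agent
ATTACKS = {("a2", "a1"), ("b1", "a2")}       # directed attack relation


def self_contradicts(agent):
    """Deception heuristic: the agent attacks one of its own arguments."""
    return any(ARGS[x] == agent == ARGS[y] for x, y in ATTACKS)


def defeats(trust):
    """Keep only attacks whose source comes from an agent the observer
    trusts at least as much as the target's agent."""
    return {(x, y) for x, y in ATTACKS if trust[ARGS[x]] >= trust[ARGS[y]]}


def grounded(defeat):
    """Least fixpoint of Dung's characteristic function (grounded semantics)."""
    ext, changed = set(), True
    while changed:
        changed = False
        for a in ARGS:
            attackers = {x for x, y in defeat if y == a}
            if a not in ext and all(
                any((d, x) in defeat for d in ext) for x in attackers
            ):
                ext.add(a)
                changed = True
    return ext


trust = {"A": 1.0, "B": 0.8}                 # observer initially favours A
print(sorted(grounded(defeats(trust))))      # ['a2', 'b1']: b1's attack fails

if self_contradicts("A"):                    # deception detected: discount A
    trust["A"] *= 0.5
print(sorted(grounded(defeats(trust))))      # ['a1', 'b1']: a1 is reinstated
```

The toy run mirrors the abstract's final claim: once deception is detected, the sender's trust is discounted, the defeat relation changes, and an argument that previously lost (a1 here) can become acceptable again.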

Fri Jun 24 2016
Artificial Intelligence
Human-Agent Decision-making: Combining Theory and Practice
Extensive work has been conducted both in game theory and logic to model strategic interaction. We will focus on automated agents that need to interact with people in two negotiation settings: bargaining and deliberation. For bargaining we will study game-theory-based equilibrium agents, and for argumentation we will discuss…
Mon May 26 2014
Artificial Intelligence
Judgment Aggregation in Multi-Agent Argumentation
Given a set of conflicting arguments, there can exist multiple plausible opinions about which arguments should be accepted, rejected, or deemed undecided. We study the problem of how multiple such judgments can be aggregated.
Mon Nov 05 2018
Artificial Intelligence
Knowledge and Blameworthiness
In a game with imperfect information, for a coalition to be blameworthy it should have known that it had a strategy. The main technical result of the article is a sound and complete bimodal logic that describes the interplay between knowledge and blameworthiness.
Tue Nov 19 2013
Artificial Intelligence
Reasoning about the Impacts of Information Sharing
In this paper we describe a decision process framework allowing an agent to determine what information it should reveal to its neighbours. The decision process is based on the provider's subjective beliefs about others in the system, and therefore makes extensive use of the notion of trust.
Sun Aug 30 2020
Artificial Intelligence
Corruption and Audit in Strategic Argumentation
Strategic argumentation provides a simple model of disputation and negotiation among agents. Although agents might be expected to act in our best interests, there is little that enforces such behaviour. In this paper we identify corrupt behaviours that are not detected in that formulation.
Sun Apr 03 2016
Artificial Intelligence
Pareto Optimality and Strategy Proofness in Group Argument Evaluation (Extended Version)
An inconsistent knowledge base can be abstracted as a set of arguments and a defeat relation among them. Collective argument evaluation is the problem of aggregating the opinions of multiple agents. We highlight fundamental trade-offs between strategic manipulability and social optimality.