Distributed constraint optimization


Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue of constraint optimization. A DCOP is a problem in which a group of agents must choose, in a distributed fashion, values for a set of variables such that the cost of a set of constraints over the variables is either minimized or maximized.

Distributed constraint satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents). The constraints are defined over variables with predefined domains, and the different agents must assign those variables the same values.

Problems defined within this framework can be solved by any of the algorithms designed for it.

The framework was used under different names in the 1980s. The first known usage with the current name is in 1990.

Definitions

DCOP

A DCOP can be defined as a tuple ${\displaystyle \langle A,V,{\mathfrak {D}},f,\alpha ,\eta \rangle }$, where:

• ${\displaystyle A}$ is the set of agents;
• ${\displaystyle V}$ is the set of variables;
• ${\displaystyle {\mathfrak {D}}}$ is the set of domains, where each ${\displaystyle D_{i}\in {\mathfrak {D}}}$ is the finite set of values that variable ${\displaystyle v_{i}}$ may take;
• ${\displaystyle f}$ is the cost function, mapping assignments of values to variables to a cost;
• ${\displaystyle \alpha :V\to A}$ maps each variable to the agent responsible for assigning its value;
• ${\displaystyle \eta }$ is an operator, typically summation, that aggregates the individual costs given by ${\displaystyle f}$.

The objective of a DCOP is to have each agent assign values to its associated variables in order to either minimize or maximize ${\displaystyle \eta (f)}$ for a given assignment of the variables.

Context

A context is a variable assignment for a DCOP. It can be thought of as a function mapping the variables of the DCOP to their current values:

${\displaystyle t:V\to \left(\bigcup _{D\in {\mathfrak {D}}}D\right)\cup \{\emptyset \}.}$

Note that a context is essentially a partial solution and need not contain values for every variable in the problem; therefore, ${\displaystyle t(v_{i})\mapsto \emptyset }$ implies that the agent ${\displaystyle \alpha (v_{i})}$ has not yet assigned a value to variable ${\displaystyle v_{i}}$. Given this representation, the "domain" (i.e., the set of input values) of the function f can be thought of as the set of all possible contexts for the DCOP. Therefore, in the remainder of this article we may use the notion of a context (i.e., the ${\displaystyle t}$ function) as an input to the ${\displaystyle f}$ function.
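As an illustrative sketch (not part of the original formalism's presentation), a context can be modeled as a partial mapping from variables to values, with unassigned variables simply absent, and the aggregate cost evaluated over the constraints whose variables are all assigned. All names here are hypothetical, and summation is assumed for ${\displaystyle \eta }$:

```python
# Hypothetical sketch of a DCOP context and aggregate cost.
# A context t is a partial assignment: a dict from variable names to values;
# an absent key plays the role of t(v) = "empty" in the definition above.

def cost(context, constraints):
    """Sum the costs of all constraints whose variables are fully
    assigned in `context` (eta(f) is assumed to be summation)."""
    total = 0
    for variables, cost_fn in constraints:
        if all(v in context for v in variables):
            total += cost_fn(*(context[v] for v in variables))
    return total

# One binary constraint: cost 1 when v1 and v2 take equal values.
constraints = [(("v1", "v2"), lambda a, b: 1 if a == b else 0)]

print(cost({"v1": "red", "v2": "red"}, constraints))   # 1
print(cost({"v1": "red", "v2": "blue"}, constraints))  # 0
print(cost({"v1": "red"}, constraints))                # 0 (v2 unassigned)
```

Skipping constraints with unassigned variables mirrors the text's point that a context is a partial solution: the cost of a partial context only reflects the constraints it already determines.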

Example problems

Distributed graph coloring

The graph coloring problem is as follows: given a graph ${\displaystyle G=\langle N,E\rangle }$ and a set of colors ${\displaystyle C}$, assign each vertex, ${\displaystyle n\in N}$, a color, ${\displaystyle c\in C}$, such that the number of adjacent vertices with the same color is minimized.

As a DCOP, there is one agent per vertex that is assigned to decide the associated color. Each agent has a single variable whose associated domain is of cardinality ${\displaystyle |C|}$ (there is one domain value for each possible color). For each vertex ${\displaystyle n_{i}\in N}$, create a variable in the DCOP ${\displaystyle v_{i}\in V}$ with domain ${\displaystyle D_{i}=C}$. For each pair of adjacent vertices ${\displaystyle \langle n_{i},n_{j}\rangle \in E}$, create a constraint of cost 1 if both of the associated variables are assigned the same color:

${\displaystyle (\forall c\in C:f(\langle v_{i},c\rangle ,\langle v_{j},c\rangle )\mapsto 1).}$

The objective, then, is to minimize ${\displaystyle \eta (f)}$.
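The encoding above can be sketched in a few lines; this is a hypothetical illustration (all names invented) that evaluates the total constraint cost of a full coloring, with summation assumed for ${\displaystyle \eta }$:

```python
# Hypothetical sketch: evaluating the DCOP cost of a graph coloring.
# Each agent owns one vertex's variable; a full coloring maps each
# vertex to a color, and each monochromatic edge contributes cost 1.

def coloring_cost(edges, coloring):
    """Total cost: 1 per edge whose endpoints share a color."""
    return sum(1 for (i, j) in edges if coloring[i] == coloring[j])

# Triangle with only two colors: at least one edge must conflict.
edges = [("n1", "n2"), ("n2", "n3"), ("n1", "n3")]
coloring = {"n1": "red", "n2": "blue", "n3": "red"}
print(coloring_cost(edges, coloring))  # 1 (edge n1-n3 conflicts)
```

The triangle example shows why the problem is phrased as minimization rather than satisfaction: with two colors, a zero-cost coloring of a triangle does not exist, but a cost-1 coloring does.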

Distributed multiple knapsack problem

The distributed variant of the multiple knapsack problem is as follows: given a set of items of varying volume and a set of knapsacks of varying capacity, assign each item to a knapsack such that the total overflow is minimized. Let ${\displaystyle I}$ be the set of items, ${\displaystyle K}$ the set of knapsacks, ${\displaystyle s:I\to \mathbb {N} }$ a function mapping items to their volumes, and ${\displaystyle c:K\to \mathbb {N} }$ a function mapping knapsacks to their capacities.

To encode this problem as a DCOP, create one variable ${\displaystyle v_{i}\in V}$ with associated domain ${\displaystyle D_{i}=K}$ for each item ${\displaystyle i\in I}$. Then, for every possible context ${\displaystyle t}$:

${\displaystyle f(t)\mapsto \sum _{k\in K}{\begin{cases}0&r(t,k)\leq c(k),\\r(t,k)-c(k)&{\text{otherwise}},\end{cases}}}$

where ${\displaystyle r(t,k)}$ is the total volume of the items assigned to knapsack ${\displaystyle k}$:

${\displaystyle r(t,k)=\sum _{v_{i}\in t^{-1}(k)}s(i).}$
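This cost function can be sketched directly; the following is a hypothetical illustration (all names invented) that computes ${\displaystyle f(t)}$ for a full assignment of items to knapsacks:

```python
# Hypothetical sketch of the knapsack-DCOP cost f(t): each variable
# assigns its item to a knapsack, and f sums the per-knapsack overflow.

def overflow_cost(assignment, size, capacity):
    """assignment: item -> knapsack; size: item -> volume;
    capacity: knapsack -> capacity. Returns the total overflow."""
    load = {k: 0 for k in capacity}            # r(t, k) for each knapsack k
    for item, knapsack in assignment.items():
        load[knapsack] += size[item]
    return sum(max(0, load[k] - capacity[k]) for k in capacity)

size = {"i1": 4, "i2": 3, "i3": 5}
capacity = {"k1": 6, "k2": 5}
print(overflow_cost({"i1": "k1", "i2": "k1", "i3": "k2"},
                    size, capacity))  # 1 (k1 holds volume 7, capacity 6)
```

The `load` dictionary plays the role of ${\displaystyle r(t,k)}$, and the final sum matches the case split in the definition of ${\displaystyle f}$: a knapsack within capacity contributes 0, one over capacity contributes its excess.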

Algorithms

DCOP algorithms can be classified according to the search strategy (best-first search or depth-first branch-and-bound search), the synchronization among agents (synchronous or asynchronous), the communication among agents (point-to-point with neighbors in the constraint graph or broadcast) and the main communication topology (chain or tree).[3] ADOPT, for example, uses best-first search, asynchronous synchronization, point-to-point communication between neighboring agents in the constraint graph and a constraint tree as main communication topology.
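To illustrate one axis of this classification, here is a hypothetical, deliberately *centralized* sketch of depth-first branch-and-bound search, the strategy that algorithms such as NCBB distribute across agents; it is not any particular algorithm from the literature, and all names are invented:

```python
# Hypothetical, centralized depth-first branch-and-bound sketch,
# shown only to illustrate the search strategy; real DCOP algorithms
# distribute this search among agents via message passing.

def branch_and_bound(variables, domains, cost_fn):
    """Minimize cost_fn over full assignments. Assumes cost_fn is
    monotone on partial assignments (e.g. a sum of non-negative
    constraint costs), so a partial cost is a valid lower bound."""
    best = [float("inf"), None]  # [best cost found, best assignment]

    def search(i, partial):
        if cost_fn(partial) >= best[0]:   # bound: prune this branch
            return
        if i == len(variables):           # full assignment reached
            best[0], best[1] = cost_fn(partial), dict(partial)
            return
        v = variables[i]
        for value in domains[v]:          # branch on each domain value
            partial[v] = value
            search(i + 1, partial)
            del partial[v]

    search(0, {})
    return best[0], best[1]

# Usage: 2-coloring a path x-y-z, cost 1 per monochromatic edge.
edges = [("x", "y"), ("y", "z")]
def conflicts(partial):
    return sum(1 for (a, b) in edges
               if a in partial and b in partial and partial[a] == partial[b])

best_cost, best_assignment = branch_and_bound(
    ["x", "y", "z"], {"x": [0, 1], "y": [0, 1], "z": [0, 1]}, conflicts)
print(best_cost)  # 0 (an alternating coloring has no conflicts)
```

The pruning step is why memory stays polynomial in depth-first branch-and-bound variants, at the price of an exponential number of value explorations in the worst case; best-first strategies like ADOPT's trade this differently.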

For each algorithm, the year introduced, memory complexity, number of messages, correctness/completeness status, and known implementations are listed where available:

• CFL (Co-ordination-Free Learning,[11] 2013): linear memory; no messages; correctness proven.
• NCBB (No-Commitment Branch and Bound,[4] 2006): polynomial memory (or any-space[5]); exponential number of messages; correctness proven. Reference implementation: not publicly released.
• DPOP (Distributed Pseudotree Optimization Procedure,[6] 2005): exponential memory; linear number of messages; correctness proven. Reference implementation: FRODO (GNU Affero GPL).
• OptAPO (Asynchronous Partial Overlay,[7] 2004): polynomial memory; exponential number of messages; correctness proven, but the proof of completeness has been challenged.[8] Implementation: DCOPolis (GNU LGPL), in development.
• Adopt (Asynchronous Backtracking,[9] 2003): polynomial memory (or any-space[10]); exponential number of messages; correctness proven. Reference implementation: Adopt.
• Secure Multiparty Computation for Solving DisCSPs (MPC-DisCSP1 to MPC-DisCSP4, 2003): secure if at least half of the participants are trustworthy.
• Secure Computation with Semi-Trusted Servers (2002): security increases with the number of trustworthy servers.
• ABTR (Asynchronous Backtracking with Reordering, 2001): reordering in ABT with bounded nogoods.
• DMAC (Maintaining Asynchronously Consistencies, 2001): the fastest algorithm.
• AAS (Asynchronous Aggregation Search, 2000): aggregation of values in ABT.
• DFC (Distributed Forward Chaining, 2000): message volume low, comparable to ABT.
• DBA (Distributed Breakout Algorithm, 1995): incomplete but fast. Implementation: FRODO version 1.
• AWC (Asynchronous Weak-Commitment, 1994): reordering; fast; complete only with exponential space.
• ABT (Asynchronous Backtracking, 1992): static ordering; complete.

Hybrids of these DCOP algorithms also exist. BnB-Adopt,[3] for example, changes the search strategy of Adopt from best-first search to depth-first branch-and-bound search.

Notes and references

1. [citation]
2. [citation]
3. [citation]
4. [citation]
5. [citation]
6. [citation]
7. The originally published version of Adopt was uninformed. It was later extended to be informed, that is, to use estimates of the solution costs to focus its search and run faster; this extension of Adopt is typically used as the reference implementation of Adopt.
8. [citation]
9. [citation]

Books and surveys


• Yokoo, M., and Hirayama, K. (2000). Algorithms for distributed constraint satisfaction: A review. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 185–207). A survey.