In theoretical computer science, multiparty communication complexity is the study of communication complexity in the setting where there are more than 2 players.
In the traditional two-party communication game, introduced by Yao, two players, ''P''<sub>1</sub> and ''P''<sub>2</sub>, attempt to compute a Boolean function

:<math>f(x_1, x_2)\colon \{0,1\}^n \times \{0,1\}^n \to \{0,1\}.</math>

Player ''P''<sub>1</sub> knows the value of ''x''<sub>2</sub>, ''P''<sub>2</sub> knows the value of ''x''<sub>1</sub>, but ''P''<sub>''i''</sub> does not know the value of ''x''<sub>''i''</sub>, for ''i'' = 1, 2. In other words, the players know the other's variables, but not their own. The minimum number of bits that must be communicated by the players to compute ''f'' is the communication complexity of ''f'', denoted by ''κ''(''f'').
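As a minimal illustration (a sketch of ours for the ''n'' = 1 case; the helper name is invented here), consider <math>f(x_1, x_2) = x_1 \oplus x_2</math>: each player sees only the other's bit, and two written bits suffice for both players to learn ''f''.

<syntaxhighlight lang="python">
def two_party_xor(x1: int, x2: int) -> int:
    """Sketch of the two-party game for f(x1, x2) = x1 XOR x2 (n = 1).

    Model convention: P1 sees x2 (not x1) and P2 sees x1 (not x2).
    P1 writes the bit it sees on the blackboard; P2 combines that bit
    with the one it sees and writes f.  Both players then know f, so
    this protocol communicates 2 bits, giving kappa(f) <= 2.
    """
    blackboard = []
    blackboard.append(x2)                   # P1 announces the bit it sees
    blackboard.append(blackboard[-1] ^ x1)  # P2 announces f(x1, x2)
    return blackboard[-1]

print(two_party_xor(1, 0))  # prints 1
</syntaxhighlight>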
The multiparty communication game, defined in 1983 by Chandra, Furst and Lipton, is a powerful generalization of the two-party case: here the players know all the others' inputs, except their own. Because of this property, this model is sometimes called the "number on the forehead" model: if the players were seated around a round table, each wearing their own input on the forehead, then every player would see all the others' inputs, except their own.
The formal definition is as follows: ''k'' players, ''P''<sub>1</sub>, ''P''<sub>2</sub>, ..., ''P''<sub>''k''</sub>, intend to compute a Boolean function

:<math>f(x_1, x_2, \ldots, x_n)\colon \{0,1\}^n \to \{0,1\}.</math>

On the set ''S'' = {''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x''<sub>''n''</sub>} of variables there is a fixed partition ''A'' into ''k'' classes ''A''<sub>1</sub>, ''A''<sub>2</sub>, ..., ''A''<sub>''k''</sub>, and player ''P''<sub>''i''</sub> knows every variable, ''except'' those in ''A''<sub>''i''</sub>, for ''i'' = 1, 2, ..., ''k''. The players have unlimited computational power, and they communicate with the help of a blackboard, viewed by all players.
The aim is to compute ''f''(''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>), such that at the end of the computation every player knows this value. The cost of the computation is the number of bits written onto the blackboard for the given input ''x'' = (''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>) and partition ''A'' = (''A''<sub>1</sub>, ..., ''A''<sub>''k''</sub>). The cost of a multiparty protocol is the maximum number of bits communicated for any ''x'' from the set {0,1}<sup>''n''</sup> and the given partition ''A''. The ''k''-party communication complexity, <math>C^A_k(f)</math>, of a function ''f'', with respect to partition ''A'', is the minimum of the costs of those ''k''-party protocols which compute ''f''. The ''k''-party symmetric communication complexity of ''f'' is defined as

:<math>C_k(f) = \max_A C^A_k(f),</math>

where the maximum is taken over all ''k''-partitions of the set ''S''.
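For example, for the parity function <math>f(x_1, \ldots, x_n) = x_1 \oplus x_2 \oplus \cdots \oplus x_n</math> and any partition ''A'', player ''P''<sub>2</sub> sees every variable of ''A''<sub>1</sub> and can write their parity <math>\bigoplus_{x_j \in A_1} x_j</math> on the blackboard with one bit; player ''P''<sub>1</sub> sees all remaining variables and can then write the value of ''f'' with a second bit. Hence <math>C_k(f) \le 2</math> for parity, independently of ''n'', ''k'', and the partition.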
Upper and lower bounds
For a general upper bound both for two and more players, suppose that ''A''<sub>1</sub> is one of the smallest classes of the partition ''A''<sub>1</sub>, ''A''<sub>2</sub>, ..., ''A''<sub>''k''</sub>. Then ''P''<sub>1</sub> can compute any Boolean function of ''S'' with |''A''<sub>1</sub>| + 1 bits of communication: ''P''<sub>2</sub> writes down the |''A''<sub>1</sub>| bits of ''A''<sub>1</sub> on the blackboard, ''P''<sub>1</sub> reads it, and computes and announces the value ''f''(''x''). Since the smallest class of a partition into ''k'' classes has at most ⌊''n''/''k''⌋ elements, the following can be written:

:<math>C_k(f) \le \left\lfloor \frac{n}{k} \right\rfloor + 1.</math>
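The trivial protocol behind this bound can be simulated directly; the following Python sketch (the function names and setup are illustrative, not from the article) counts the bits written on the blackboard.

<syntaxhighlight lang="python">
from typing import Callable, List, Set

def trivial_protocol(x: List[int],
                     partition: List[Set[int]],
                     f: Callable[[List[int]], int]) -> int:
    """Simulate the |A_1| + 1 bit protocol; returns the bits written.

    P2 sees every variable outside A_2, in particular all of A_1, so it
    writes the bits of a smallest class A_1 on the blackboard.  P1 sees
    everything except A_1, so it then knows the whole input and
    announces f(x).
    """
    a1 = min(partition, key=len)     # a smallest class of the partition
    blackboard: List[int] = []
    for i in sorted(a1):             # P2 broadcasts the bits of A_1
        blackboard.append(x[i])
    blackboard.append(f(x))          # P1 announces the function value
    return len(blackboard)           # |A_1| + 1 <= floor(n/k) + 1

# Example: k = 3 players, n = 6 variables, f = parity.
x = [1, 0, 1, 1, 0, 1]
partition = [{0, 1}, {2, 3}, {4, 5}]
print(trivial_protocol(x, partition, lambda v: sum(v) % 2))  # prints 3
</syntaxhighlight>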
The Generalized Inner Product function (GIP) is defined as follows: Let ''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x''<sub>''k''</sub> be ''n''-bit vectors, and let ''x'' be the ''n'' × ''k'' matrix with these ''k'' vectors as its columns. Then GIP(''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x''<sub>''k''</sub>) is the number of the all-1 rows of matrix ''x'', taken modulo 2. In other words, if the vectors ''x''<sub>1</sub>, ..., ''x''<sub>''k''</sub> correspond to the characteristic vectors of ''k'' subsets of an ''n''-element base set, then GIP corresponds to the parity of the intersection of these ''k'' subsets.
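Equivalently, <math>\mathrm{GIP}(x_1, \ldots, x_k) = \bigoplus_{i=1}^{n} \bigwedge_{j=1}^{k} x_{j,i}</math>, where <math>x_{j,i}</math> denotes the ''i''-th bit of ''x''<sub>''j''</sub>. A direct Python sketch of this definition (an illustration of ours):

<syntaxhighlight lang="python">
from typing import Sequence

def gip(vectors: Sequence[Sequence[int]]) -> int:
    """GIP: the number of all-1 rows of the n x k matrix whose columns
    are the given k vectors, taken modulo 2."""
    n = len(vectors[0])
    assert all(len(v) == n for v in vectors), "all vectors must be n-bit"
    all_one_rows = sum(1 for i in range(n) if all(v[i] == 1 for v in vectors))
    return all_one_rows % 2

# k = 3 subsets of a 4-element base set, as characteristic vectors:
x1 = [1, 1, 0, 1]
x2 = [1, 0, 1, 1]
x3 = [1, 1, 1, 0]
print(gip([x1, x2, x3]))  # intersection = {position 0}, so parity = 1
</syntaxhighlight>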
It was shown by Babai, Nisan and Szegedy that

:<math>C_k(\mathrm{GIP}) \geq c\,\frac{n}{4^k}</math>

with a constant ''c'' > 0.
An upper bound on the multiparty communication complexity of GIP shows that

:<math>C_k(\mathrm{GIP}) \leq c\,\frac{kn}{2^k}</math>

with a constant ''c'' > 0.
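For orientation (an observation added here): the lower bound <math>cn/4^k</math> exceeds 1 only while <math>4^k < cn</math>, that is, while <math>k < \tfrac{1}{2}\log_2 (cn)</math>. So GIP requires substantial communication when the number of players is small relative to <math>\log_2 n</math>, while for larger ''k'' the upper bound shows that GIP becomes cheap.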
For a general Boolean function ''f'', one can bound the multiparty communication complexity of ''f'' by using its ''L''<sub>1</sub> norm as follows:

:<math>C_k(f) = O\!\left(\frac{kn\,L_1(f)}{2^k}\right).</math>
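Here <math>L_1(f)</math> denotes, under the usual convention, the spectral norm of ''f'': if <math>f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\,\chi_S(x)</math> is the Fourier expansion of ''f'', then <math>L_1(f) = \sum_{S \subseteq [n]} |\hat{f}(S)|</math>. For example, the parity function has <math>L_1(f) = 1</math>, consistent with its very low multiparty communication complexity noted above.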
Multiparty communication complexity and pseudorandom generators
A construction of a pseudorandom number generator was based on the BNS lower bound for the GIP function.