Entropy of composite systems

Some of the properties of the von Neumann entropy for composite systems are similar to those of Shannon entropy, while others are quite different. We discuss a few here.

1. Concavity: S(ρ) is a concave function. That is, for a convex combination of states $\rho = c_1\rho_A + c_2\rho_B$ (with $c_1, c_2 \ge 0$ and $c_1 + c_2 = 1$), the resulting entropy is at least the weighted sum of the individual entropies:
$$S(\rho) \ge c_1 S(\rho_A) + c_2 S(\rho_B). \tag{11.35}$$

The physical interpretation is that when two systems are mixed, the result is more uniform than either of the individual systems. To prove this, you need to remember that the logarithm is not a linear function: it is concave (look at the graph of Figure 11.2), which in turn means that the function $x \log x$ is convex. In the basis $\{|i\rangle\}$ in which $\rho$ is diagonal, $\rho_i = \langle i|\rho|i\rangle$. Let us introduce the notation $\rho^A_i = \langle i|\rho_A|i\rangle$, etc., so that $\rho_i = c_1 \rho^A_i + c_2 \rho^B_i$.

Proof. By the convexity of $x \log x$,
$$\rho_i \log \rho_i \le c_1\, \rho^A_i \log \rho^A_i + c_2\, \rho^B_i \log \rho^B_i$$
$$\implies S(\rho) = -\sum_i \rho_i \log \rho_i \ge -\sum_i \left( c_1\, \rho^A_i \log \rho^A_i + c_2\, \rho^B_i \log \rho^B_i \right) \ge c_1 S(\rho_A) + c_2 S(\rho_B),$$
where the last step uses the fact that the entropy of the diagonal elements of a density matrix in any orthonormal basis is at least its von Neumann entropy.
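Concavity is easy to check numerically. The sketch below (the helper names `vn_entropy` and `random_state` are my own, assuming NumPy is available) mixes two random qubit states and verifies Eq. (11.35):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S(rho) = -Tr[rho log2 rho], via eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                      # convention: 0 log 0 = 0
    return float(-np.sum(w * np.log2(w)))

def random_state(n, rng):
    """Random n x n density matrix: G G^dagger, normalized to unit trace."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(42)
rho_a, rho_b = random_state(2, rng), random_state(2, rng)
c1, c2 = 0.3, 0.7                          # convex weights: c1 + c2 = 1
lhs = vn_entropy(c1 * rho_a + c2 * rho_b)  # S(c1 rho_A + c2 rho_B)
rhs = c1 * vn_entropy(rho_a) + c2 * vn_entropy(rho_b)
assert lhs >= rhs - 1e-9                   # Eq. (11.35)
```

Repeating this over many random draws and weights gives a quick sanity check of the inequality.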

2. Quantum relative entropy. Suppose that $\{|i\rangle\}$ and $\{|m\rangle\}$ are two orthonormal bases for the Hilbert space of the system. For density operators
$$\rho = \sum_i p_i\, |i\rangle\langle i|; \qquad \sigma = \sum_m q_m\, |m\rangle\langle m|,$$
we can define the relative entropy as
$$S(\rho \,\|\, \sigma) = \mathrm{Tr}\big[\rho(\log\rho - \log\sigma)\big]. \tag{11.36}$$
In evaluating this quantity, we find that it is always non-negative, a result sometimes known as Klein's inequality.

Proof.
$$S(\rho \,\|\, \sigma) = \sum_i \langle i|\rho(\log\rho - \log\sigma)|i\rangle = \sum_i \big[ p_i \log p_i - p_i \langle i|\log\sigma|i\rangle \big]. \tag{11.37}$$
Here,
$$\langle i|\log\sigma|i\rangle = \langle i|\Big(\sum_m \log q_m\, |m\rangle\langle m|\Big)|i\rangle = \sum_m \log q_m\, P_{im}, \tag{11.38}$$
where
$$P_{im} \equiv \langle i|m\rangle\langle m|i\rangle = |\langle i|m\rangle|^2 \tag{11.39}$$
$$\ge 0; \qquad \sum_i P_{im} = 1 = \sum_m P_{im}. \tag{11.40}$$
(Such a matrix is called doubly stochastic.)

So,
$$S(\rho \,\|\, \sigma) = \sum_i p_i \Big[ \log p_i - \sum_m P_{im} \log q_m \Big] \tag{11.41}$$
$$= \sum_{i,m} p_i P_{im} \log \frac{p_i}{q_m} \qquad \Big(\text{since } \sum_m P_{im} = 1\Big)$$
$$\ge \sum_{i,m} p_i P_{im} \Big( 1 - \frac{q_m}{p_i} \Big) \qquad \Big(\text{since } \log x \ge 1 - \frac{1}{x}\Big)$$
$$= 0 \qquad (\text{using Eq. } (11.40))$$
$$\implies S(\rho \,\|\, \sigma) \ge 0. \tag{11.42}$$
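Klein's inequality can likewise be probed numerically. The sketch below (helper names are my own, assuming NumPy) builds two full-rank random density matrices, evaluates Eq. (11.36) via eigendecomposition, and checks non-negativity, with equality when $\rho = \sigma$:

```python
import numpy as np

def random_state(n, rng):
    """Random density matrix G G^dagger / Tr[...]; full rank almost surely."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def herm_log2(m):
    """Matrix log2 of a positive-definite Hermitian matrix."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.log2(w)) @ v.conj().T

def rel_entropy(rho, sigma):
    """S(rho || sigma) = Tr[rho (log rho - log sigma)], Eq. (11.36)."""
    return float(np.trace(rho @ (herm_log2(rho) - herm_log2(sigma))).real)

rng = np.random.default_rng(7)
rho, sigma = random_state(3, rng), random_state(3, rng)
assert rel_entropy(rho, sigma) >= -1e-9      # Klein's inequality, Eq. (11.42)
assert abs(rel_entropy(rho, rho)) < 1e-9     # vanishes when rho == sigma
```

Using full-rank states sidesteps the $\log 0$ divergence; when $\sigma$ has a zero eigenvalue outside the support of $\rho$, the relative entropy is formally infinite.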

3. Subadditivity. Given two systems A and B with joint state $\rho_{AB}$, and reduced density matrices $\rho_A$ and $\rho_B$, the joint entropy defined simply as
$$S(\rho_{AB}) \equiv -\mathrm{Tr}\,\rho_{AB} \log \rho_{AB} \tag{11.43}$$
satisfies
$$S(\rho_{AB}) \le S(\rho_A) + S(\rho_B), \tag{11.44}$$
with equality only when the two systems are uncorrelated, i.e., $\rho_{AB} = \rho_A \otimes \rho_B$. Thus correlations, including entanglement, reduce the entropy, i.e., increase the information, of the system.


Proof. The proof follows as an application of Klein's inequality for $\rho = \rho_{AB}$ and $\sigma = \rho_A \otimes \rho_B$. Suppose $\{|i\rangle\}$ and $\{|m\rangle\}$ are orthonormal bases for the Hilbert spaces of A and B, respectively. From Klein's inequality, $S(\rho_{AB} \,\|\, \rho_A \otimes \rho_B) \ge 0$, so
$$S(\rho_{AB}) \le -\mathrm{Tr}\,\rho_{AB} \log(\rho_A \otimes \rho_B) = -\mathrm{Tr}\,\rho_{AB} (\log\rho_A \otimes I) - \mathrm{Tr}\,\rho_{AB} (I \otimes \log\rho_B).$$
Now the first term in this is
$$-\sum_{i,m} \langle i, m|\rho_{AB} (\log\rho_A \otimes I)|i, m\rangle = -\mathrm{Tr}_A\, \rho_A \log \rho_A = S(\rho_A),$$
and similarly for the other term. So we have
$$S(\rho_{AB}) \le S(\rho_A) + S(\rho_B). \tag{11.45}$$

There is another result, the triangle inequality, also known as the Araki–Lieb inequality, that can be proved similarly:
$$S(\rho_{AB}) \ge |S(\rho_A) - S(\rho_B)|. \tag{11.46}$$
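Both inequalities are easy to check on a concrete state. The sketch below (the `vn_entropy` and `partial_trace` helpers are my own, assuming NumPy) uses the classically correlated two-qubit state $\rho_{AB} = \tfrac{1}{2}(|00\rangle\langle 00| + |11\rangle\langle 11|)$, for which $S(\rho_{AB}) = 1$ bit while $S(\rho_A) + S(\rho_B) = 2$ bits:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def partial_trace(rho_ab, keep):
    """Reduced state of a two-qubit rho_ab; keep=0 -> rho_A, keep=1 -> rho_B."""
    r = rho_ab.reshape(2, 2, 2, 2)            # indices (i, m, i', m')
    if keep == 0:
        return np.trace(r, axis1=1, axis2=3)  # sum over B indices
    return np.trace(r, axis1=0, axis2=2)      # sum over A indices

# Classically correlated state (|00><00| + |11><11|)/2 in the product basis
rho_ab = np.diag([0.5, 0.0, 0.0, 0.5])
s_ab = vn_entropy(rho_ab)                     # 1 bit
s_a = vn_entropy(partial_trace(rho_ab, 0))    # 1 bit
s_b = vn_entropy(partial_trace(rho_ab, 1))    # 1 bit
assert s_ab <= s_a + s_b + 1e-9               # subadditivity, Eq. (11.44)
assert s_ab >= abs(s_a - s_b) - 1e-9          # Araki-Lieb, Eq. (11.46)
```

Here subadditivity is strict (the systems are correlated), while the Araki–Lieb bound is saturated since $S(\rho_A) = S(\rho_B)$.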

4. Conditional entropy.
$$S(A|B) \equiv S(\rho_{AB}) - S(\rho_B). \tag{11.47}$$
While the Shannon conditional entropy can never be negative, the von Neumann conditional entropy can be, for systems that are entangled [16]. Negative conditional entropy can in fact be proved to be a (sufficient) criterion for entanglement.
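For example, for the Bell state $|\Phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$ the joint state is pure, so $S(\rho_{AB}) = 0$, while the reduced state is maximally mixed with $S(\rho_B) = 1$ bit, giving $S(A|B) = -1$. A minimal numerical sketch (helper name `vn_entropy` is my own, assuming NumPy):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ab = np.outer(psi, psi)                   # pure joint state, S = 0
# Trace out A: sum over the first-subsystem indices of the (2,2,2,2) tensor
rho_b = np.trace(rho_ab.reshape(2, 2, 2, 2), axis1=0, axis2=2)
s_cond = vn_entropy(rho_ab) - vn_entropy(rho_b)  # S(A|B), Eq. (11.47)
assert s_cond < 0                                # negative => entangled
```

For any product state $\rho_A \otimes \rho_B$, by contrast, $S(A|B) = S(\rho_A) \ge 0$.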

There are many more inequalities and properties of the von Neumann

entropy that can be proved, for which we refer you to Nielsen and Chuang

[50], the book by Ohya and Petz [51] and the review article by Wehrl [71].

