Resources and Computational Complexity

We now come to the issues that make computer scientists look to the quantum paradigm for answering some of their questions on efficient algorithms. Efficiency is quantified in terms of how the resources used for the computation scale with the number of input bits n. Typically, polynomial scaling is termed “efficient” while exponential scaling is not. The resources are typically time, space, and energy, though the last is less a theoretical concern than a matter of physical implementation.

When analyzing the efficiency of an algorithm, it is desirable to factor out dependencies on the kind of computer the algorithm may be implemented on. The resulting features are then intrinsic to the mathematical problem itself, and are defined in terms of how they scale as a function of the input size, rather than in absolute terms. These scaling behaviors are termed complexity.

Time complexity is the most commonly considered aspect of the efficiency of algorithms, and can be quantified by the number of elementary steps, such as the addition of two numbers, in the execution of an algorithm. Space complexity can be quantified by the amount of memory to be allocated to the execution of the algorithm. While the actual cost of solving a problem depends on the particular algorithm used and also on the size of the input, we generalize the concept by considering the asymptotic behavior, as the input size becomes very large.

Computational complexity is often quantified in three different ways. The way in which a resource requirement scales with n is expressed, for large n, as follows:

1. O(g(n)) (big oh): specifies that the function g(n) is an upper bound on the growth of a resource;

2. Ω(g(n)) (big omega): specifies that the function g(n) is a lower bound on the growth of a resource;

3. Θ(g(n)) (big theta): the strongest condition, in which a given resource scales as both O(g(n)) and Ω(g(n)) with the same function g(n).

The first type, O, which gives an upper bound, is the most commonly used.
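As a quick numerical illustration (a sketch, with an arbitrarily chosen cost function), a cost such as f(n) = 3n^2 + 10n is Θ(n^2): the ratio f(n)/n^2 stays between fixed positive constants for large n.

```python
# Sketch: f(n) = 3n^2 + 10n is Theta(n^2), since f(n)/n^2 stays
# bounded between positive constants (here, between 3 and 4).
def f(n):
    return 3 * n**2 + 10 * n

ratios = [f(n) / n**2 for n in (10, 100, 1000, 10000)]
print(ratios)  # the ratio tends to 3 from above
```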

Example 6.4.1. Let’s look at the time complexity of simple arithmetic operations.

• Addition of two n-bit integers takes n bit-wise steps (including carries) and has complexity Θ(n).

• Multiplication of two n-bit integers by the usual brute-force (schoolbook) method takes up to n − 1 additions of shifted n-bit numbers, each costing O(n) steps. Thus the complexity is O(n^2).

• Matrix multiplication of two n × n matrices takes n multiplications and n − 1 additions for each of the n^2 entries, and therefore is of complexity O(n × n^2) = O(n^3). If the matrices are not square, but m × n and n × l, then the complexity is O(mnl).
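The quadratic scaling of schoolbook multiplication can be made concrete by counting bit-level steps. The sketch below uses a deliberately crude cost model (charging roughly one step per bit position handled in each addition); the function name and the accounting are illustrative assumptions, not a standard routine:

```python
# Schoolbook multiplication of two n-bit integers: for each set bit of b,
# add a shifted copy of a. Each such addition costs O(n) bit steps, and
# there are up to n of them, giving O(n^2) overall.
def schoolbook_multiply(a: int, b: int, n: int):
    result, steps = 0, 0
    for i in range(n):              # one pass per bit of b
        if (b >> i) & 1:
            result += a << i        # an O(n)-bit shifted addition
            steps += n + i          # crude bit-step count for this addition
    return result, steps

prod, steps = schoolbook_multiply(0b1111, 0b1011, 4)
assert prod == 15 * 11
```

Doubling n roughly quadruples the worst-case step count, as expected for O(n^2).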

Box 6.2: Complexity Classes

There exists a plethora of complexity classes in this vast and deep subject, and we list some of the more important ones here. These complexity classes assume a Turing model for the computer.

• P (Polynomial time): This class contains problems that are solvable in polynomial time, that is, of O(n^k) for some constant k, on a deterministic Turing machine.

• NP (Non-deterministic polynomial time): this is the class of decision problems (with only “yes” or “no” answers) that can be solved in polynomial time on a non-deterministic Turing machine; equivalently, a proposed solution can be verified in polynomial time on a deterministic machine. Whether every NP problem can also be solved in deterministic polynomial time (the P versus NP question) remains unsolved. Examples include integer factorization and the discrete logarithm.

• coNP consists of decision problems whose complement is in NP.

• NP-complete (NPC) is the class of the hardest problems in NP: those to which every problem in NP can be reduced in polynomial time. (Problems at least this hard, but possibly outside NP, are called NP-hard.) Examples are the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem.
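The gap between finding and verifying a solution, which is central to the definition of NP, can be illustrated with subset sum, a close relative of the knapsack problem mentioned above. A sketch (the instance and function names are illustrative):

```python
from itertools import combinations

def verify(weights, certificate, target):
    # The NP "certificate" check: polynomial time in the input size.
    return sum(weights[i] for i in certificate) == target

def brute_force(weights, target):
    # Naive search over all 2^n subsets: exponential time.
    n = len(weights)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) == target:
                return subset
    return None

weights = [3, 34, 4, 12, 5, 2]
certificate = brute_force(weights, 9)   # finds indices (2, 4): 4 + 5 == 9
assert certificate is not None and verify(weights, certificate, 9)
```

Checking a proposed subset is fast even when the search space is astronomically large; that asymmetry is exactly what the class NP captures.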

In some situations, especially in the quantum algorithms we are going to study in this book, we talk of the “query complexity” of an algorithm. Here, the algorithm is reduced to a series of queries made to a function evaluator regarded as a black box, whose internal workings are unknown to us. The query complexity is the number of times the black box has to be queried to get to the solution. Of course, if the black box is replaced by a “white box”, that is, the details of the circuit used to implement the function, then we can relate the query complexity to the actual computational complexity of the entire process.
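The counting can be sketched by wrapping a black-box Boolean function in a query counter (the class and names below are illustrative assumptions). Classically, finding the single marked input among N requires O(N) queries in the worst case; a quantum search such as Grover’s algorithm needs only O(√N):

```python
# Count queries made to a black-box (oracle) Boolean function.
class Oracle:
    def __init__(self, marked):
        self._marked = marked   # hidden inside the black box
        self.queries = 0

    def __call__(self, x):
        self.queries += 1
        return x == self._marked

def exhaustive_search(oracle, n_inputs):
    # Classical worst case: up to n_inputs queries to find the marked item.
    for x in range(n_inputs):
        if oracle(x):
            return x
    return None

oracle = Oracle(marked=13)
assert exhaustive_search(oracle, 16) == 13
print(oracle.queries)  # 14 queries: inputs 0 through 13
```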

