
Complexity Theory – Calculates Complexity of Problem


Complexity theory is a central topic in theoretical computer science. It is closely connected to computability theory and uses models of computation, such as Turing machines, to measure the resources a computation requires.

Complexity theory helps computer scientists relate problems to one another and group them into complexity classes. Sometimes, if one problem can be solved, it opens a way to solve other problems in its complexity class. Complexity theory helps determine the difficulty of a problem, often measured by how much time and space (memory) it takes to solve a particular problem. For example, some problems can be solved in a polynomial amount of time and others take an exponential amount of time, with respect to the input size.
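The gap between polynomial and exponential running times can be seen numerically. The sketch below (illustrative functions, not taken from any particular algorithm) compares n² steps against 2ⁿ steps for a few input sizes:

```python
# Compare a polynomial step count (n^2) with an exponential one (2^n)
# as the input size n grows. Both functions are illustrative.

def poly_steps(n):
    return n ** 2       # polynomial in the input size

def exp_steps(n):
    return 2 ** n       # exponential in the input size

for n in [10, 20, 40]:
    print(n, poly_steps(n), exp_steps(n))
```

Even at n = 40 the exponential count already dwarfs the polynomial one, which is why polynomial-time solvability is the usual dividing line between "tractable" and "intractable" problems.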

Growth Rates of Functions

The complexity of computational problems can be discussed by choosing a specific abstract machine as a model of computation and considering how much time and/or space machines of that type require for the solutions. In order to compare two problems, it is necessary to look at instances of different sizes. Using the criterion of running time, for example, the most common approach is to compare the growth rates of the two running times, each viewed as a function of the instance size.

Definition: Notation for comparing growth rates

Suppose ƒ1, ƒ2: Ν → Ν are partial functions, and each is defined at all but a finite number of points. We write

ƒ1(n)=Ο(ƒ2(n))

or simply ƒ1=Ο(ƒ2), if there are constants C and n0 so that for every n ≥ n0, ƒ1(n) and ƒ2(n) are defined and ƒ1(n) ≤ Cƒ2(n). We write
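As a concrete check of this definition, the hypothetical helper below tests whether specific constants C and n0 witness ƒ1 = Ο(ƒ2) over a finite range. This is only a numerical sanity check, not a proof, since the definition quantifies over all n ≥ n0:

```python
def witnesses_big_oh(f1, f2, C, n0, n_max=1000):
    """Check f1(n) <= C * f2(n) for every n0 <= n <= n_max.

    A finite spot check only: true big-oh requires the bound
    for ALL n >= n0, which a loop cannot verify.
    """
    return all(f1(n) <= C * f2(n) for n in range(n0, n_max + 1))

# Example: 3n^2 + 5 = O(n^2), witnessed by C = 4 and n0 = 3,
# since 3n^2 + 5 <= 4n^2 exactly when n^2 >= 5.
print(witnesses_big_oh(lambda n: 3*n*n + 5, lambda n: n*n, C=4, n0=3))
```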

ƒ1(n)=Θ(ƒ2(n))

to mean that ƒ1=Ο(ƒ2) and ƒ2=Ο(ƒ1). Finally,

ƒ1(n)=ο(ƒ2(n))

or ƒ1=ο(ƒ2), means that for every positive constant C, there is a constant n0 so that for every n ≥ n0, ƒ1(n) ≤ Cƒ2(n).
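The little-oh condition says that no matter how small a positive C we pick, the bound ƒ1(n) ≤ Cƒ2(n) eventually holds. The sketch below illustrates this for the hypothetical pair ƒ1(n) = n and ƒ2(n) = n², where the ratio n/n² = 1/n eventually drops below any given C:

```python
# Little-oh illustration: for f1(n) = n and f2(n) = n^2 the ratio
# f1(n)/f2(n) = 1/n falls below any positive constant C once n is
# large enough. Functions chosen for illustration only.

def smallest_n0(C):
    """Smallest n with n / n**2 <= C, found by scanning (sketch only)."""
    n = 1
    while n / n**2 > C:
        n += 1
    return n

for C in [0.5, 0.1, 0.01]:
    print(C, smallest_n0(C))  # n0 grows as C shrinks, but always exists
```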

The statements ƒ1=Ο(ƒ2), ƒ1=Θ(ƒ2) and ƒ1=ο(ƒ2) are read “ƒ1 is big-oh of ƒ2”, “ƒ1 is big-theta of ƒ2” and “ƒ1 is little-oh of ƒ2”, respectively.

All these statements can be rephrased in terms of the ratio ƒ1(n)/ƒ2(n), provided that ƒ2(n) is eventually greater than 0. Saying that ƒ1=ο(ƒ2) means that the limit of this ratio as n approaches infinity is 0; the statement ƒ1=Ο(ƒ2) means only that the ratio is bounded. If ƒ1=Θ(ƒ2), and both functions are eventually nonzero, then both the ratios ƒ1(n)/ƒ2(n) and ƒ2(n)/ƒ1(n) are bounded, which is the same as saying that the ratio ƒ1(n)/ƒ2(n) must stay between two fixed positive values (or is “approximately constant”).
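The big-theta case of this ratio characterization can be checked numerically. For the hypothetical pair ƒ1(n) = 2n² + n and ƒ2(n) = n², the ratio is 2 + 1/n, which stays between the fixed positive values 2 and 3:

```python
# Theta illustration: f1(n) = 2n^2 + n and f2(n) = n^2 give a ratio
# of 2 + 1/n, which stays between 2 and 3 for all n >= 1 -- so each
# function is big-oh of the other. Functions are illustrative.

ratios = [(2 * n * n + n) / (n * n) for n in range(1, 101)]
print(min(ratios), max(ratios))  # bounded between fixed positive values
```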

If the statement ƒ1=Ο(ƒ2) fails, we write ƒ1 ≠ Ο(ƒ2), and similarly for the other two. Saying that ƒ1 ≠ Ο(ƒ2) means that it is impossible to find a constant C so that ƒ1(n) ≤ Cƒ2(n) for all sufficiently large n; in other words, the ratio ƒ1(n)/ƒ2(n) is unbounded. Note that although the ratio ƒ1(n)/ƒ2(n) may not be large for all large values of n, it must be large for infinitely many values of n.
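This last point, that ƒ1 ≠ Ο(ƒ2) only forces the ratio to be large infinitely often rather than everywhere, can be illustrated with an oscillating function (a hypothetical example constructed for this purpose):

```python
# f1 is n^2 at even n but 0 at odd n; f2(n) = n. The ratio f1/f2
# equals n at even n, so it is unbounded and f1 != O(f2) -- yet the
# ratio is zero at every odd n, so it is not large for ALL large n.

def f1(n):
    return n * n if n % 2 == 0 else 0

def f2(n):
    return n

even_ratios = [f1(n) / f2(n) for n in range(2, 21, 2)]  # 2, 4, ..., 20
odd_ratios = [f1(n) / f2(n) for n in range(1, 21, 2)]   # all zero
print(even_ratios[-1], odd_ratios[-1])
```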

Both in theory and in practice, then, complexity theory helps computer scientists determine the limits of what computers can and cannot do.
