Using the RAM model of computation, we can count how many steps our algorithm takes on any given input instance by executing it. However, to understand how good or bad an algorithm is in general, we must know how it performs over all possible instances.
To understand the notions of best, worst, and average-case complexity, think about running an algorithm over all possible instances of data that can be fed to it. For the problem of sorting, the set of possible instances consists of all possible arrangements of \(n\) keys, over all possible values of \(n\). We can represent each input instance as a point on a graph as shown below:
Here the x-axis represents the size of the input problem (for sorting, the number of items to sort), and the y-axis denotes the number of steps taken by the algorithm on that instance. These points naturally align themselves in columns because only integers represent possible input sizes. We can define three interesting functions over this plot of points:
1. The Worst-Case Complexity
The worst-case complexity of an algorithm is the function defined by the maximum number of steps taken on any instance of size \(n\). It represents the curve passing through the highest point in each column.
2. The Best-Case Complexity
The best-case complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size \(n\). It represents the curve passing through the lowest point in each column.
3. The Average-Case Complexity
The average-case complexity of the algorithm is the function defined by the average number of steps over all instances of size \(n\).
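These three functions can be made concrete by brute force on small inputs. The sketch below (an illustration, not part of the original text) counts the comparisons insertion sort makes on every one of the \(n!\) arrangements of \(n\) keys, then reports the minimum (best case), maximum (worst case), and mean (average case) for that column of the plot:

```python
from itertools import permutations

def insertion_sort_steps(arr):
    """Count the comparisons insertion sort makes on a copy of arr."""
    a = list(arr)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            steps += 1              # one key comparison
            if a[j] > key:
                a[j + 1] = a[j]     # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return steps

def case_complexities(n):
    """Best, worst, and average comparison counts over all n! instances."""
    counts = [insertion_sort_steps(p) for p in permutations(range(n))]
    return min(counts), max(counts), sum(counts) / len(counts)

print(case_complexities(4))   # best = 3, worst = 6, average in between
```

Enumerating all \(n!\) instances is only feasible for tiny \(n\), which is exactly why we reason about these functions analytically rather than by measurement.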
The worst-case complexity proves to be the most useful of these three measures in practice. Each of these three complexities defines a numerical function representing time versus problem size. These functions are as well defined as any other numerical function, be it \(y = x^2 - 2x + 1\) or the price of Google stock as a function of time. But time complexities are such complicated functions that we must simplify them to work with them. For this, we need asymptotic notation.
Why We Need Asymptotic Notation
The best, worst, and average-case time complexities for any given algorithm are numerical functions over the size of possible problem instances. However, it is very difficult to work precisely with these functions, because they tend to:
Have too many bumps:
An algorithm such as binary search typically runs a bit faster for arrays of size exactly \(n = 2^k - 1\) (where \(k\) is an integer), because the array partitions work out nicely. This detail is not particularly significant, but it warns us that the exact time complexity function for any algorithm is liable to be very complicated, with little up-and-down bumps, as shown in the figure below.
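These bumps are easy to observe directly. The sketch below (an illustration added here, with a standard iterative binary search) counts loop iterations when searching a sorted array of size \(n\) for a value larger than every element, which drives the search all the way to the right, one worst-case pattern for this implementation:

```python
def binary_search_steps(n, target):
    """Count loop iterations of binary search over the array [0, 1, ..., n-1]."""
    lo, hi, steps = 0, n - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid == target:
            break
        elif mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

# Searching for target = n (absent, larger than all elements):
# the step count jumps from 3 to 4 between n = 7 and n = 8,
# and again from 4 to 5 between n = 15 and n = 16.
for n in [6, 7, 8, 9, 15, 16]:
    print(n, binary_search_steps(n, n))
```

The step count stays flat up to each \(n = 2^k - 1\) and then ticks up by one, producing exactly the kind of staircase bumps the text describes.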
Require too much detail to specify precisely:
Counting the exact number of RAM instructions executed in the worst case requires the algorithm to be specified to the detail of a complete computer program. Further, the precise answer depends upon uninteresting coding details. Performing a precise worst-case analysis like
$$T(n) = 12754n^2 + 4353n + 834 \log_2 n + 13546$$
would clearly be very difficult work, yet provide us little extra information beyond the observation that the time grows quadratically with \(n\).
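We can see why the lower-order terms are uninteresting by evaluating this hypothetical exact formula and dividing by \(n^2\) (a quick sketch, using the coefficients from the expression above purely for illustration):

```python
import math

def T(n):
    """The hypothetical exact worst-case count from the text."""
    return 12754 * n**2 + 4353 * n + 834 * math.log2(n) + 13546

# The ratio T(n) / n^2 settles toward the leading coefficient 12754:
# the linear, logarithmic, and constant terms fade into irrelevance.
for n in [10, 100, 10_000, 1_000_000]:
    print(n, T(n) / n**2)
```

Already at modest \(n\) the quadratic term dominates completely, which is all the asymptotic statement "the time grows as \(n^2\)" claims.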
It proves to be much easier to talk in terms of simple upper and lower bounds on the time complexity function using asymptotic notation. Asymptotic notation simplifies our analysis by ignoring levels of detail that do not impact our comparison of algorithms.
Asymptotic notation ignores the difference between multiplicative constants. The functions \(f(n) = 2n\) and \(g(n) = n\) are identical in asymptotic analysis. This makes sense given our application. Suppose a given algorithm written in (say) C ran twice as fast as the same algorithm written in Java. This multiplicative factor of two tells us nothing about the algorithm itself, since both programs implement exactly the same algorithm. We ignore such constants when comparing two algorithms.
The formal definitions associated with the asymptotic notations are as follows:
Big Oh (\(\mathcal O\)) Notation:
\(f(n) = \mathcal O(g(n))\) means \(c \cdot g(n)\) is an upper bound on \(f(n)\). Thus there exists some constant \(c\) such that \(f(n)\) is always \(\le c \cdot g(n)\) for large enough \(n\) (i.e., \(n \ge n_0\) for some constant \(n_0\)).
Big Omega (\(\Omega\)) Notation:
\(f(n) = \Omega(g(n))\) means \(c \cdot g(n)\) is a lower bound on \(f(n)\). Thus there exists some constant \(c\) such that \(f(n)\) is always \(\ge c \cdot g(n)\) for large enough \(n\) (i.e., \(n \ge n_0\) for some constant \(n_0\)).
Big Theta (\(\Theta\)) Notation:
\(f(n) = \Theta(g(n))\) means \(c_1 \cdot g(n)\) is an upper bound on \(f(n)\) and \(c_2 \cdot g(n)\) is a lower bound on \(f(n)\) for all \(n \ge n_0\). Thus there exist constants \(c_1\) and \(c_2\) such that \(f(n) \le c_1 \cdot g(n)\) and \(f(n) \ge c_2 \cdot g(n)\). This means that \(g(n)\) provides a nice, tight bound on \(f(n)\).
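The definitions can be checked numerically for particular witness constants. The sketch below (an illustrative example with \(f(n) = 3n^2 + 10n\) and \(g(n) = n^2\), chosen here for demonstration) verifies the Big Oh and Big Omega inequalities over a finite range; a finite check does not prove the bound for all \(n\), but it shows how the constants \(c\) and \(n_0\) act as witnesses:

```python
def f(n):
    return 3 * n**2 + 10 * n

def g(n):
    return n**2

# Big Oh: f(n) <= c * g(n) for all n >= n0, with witnesses c = 4, n0 = 10
# (3n^2 + 10n <= 4n^2 exactly when 10n <= n^2, i.e., n >= 10).
assert all(f(n) <= 4 * g(n) for n in range(10, 1000))

# Big Omega: f(n) >= c * g(n) for all n >= n0, with witnesses c = 3, n0 = 1.
assert all(f(n) >= 3 * g(n) for n in range(1, 1000))

# Both hold, so f(n) = Theta(n^2) with c1 = 4 and c2 = 3.
print("f(n) = Theta(n^2) witnessed by c1 = 4, c2 = 3, n0 = 10")
```

Note that the witnesses are not unique: any \(c \ge 4\) works for the upper bound, and smaller \(c\) can work if we allow a larger \(n_0\).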
These definitions are illustrated in the figure below:
Each of these definitions assumes a constant \(n_0\) beyond which it is always satisfied. We are not concerned with small values of \(n\) (anything to the left of \(n_0\)). After all, we don't really care whether one sorting algorithm sorts six items faster than another; we seek which algorithm proves faster when sorting \(10,000\) or \(1,000,000\) items. Asymptotic notation enables us to ignore details and focus on the big picture.
This article is contributed by Ram Kripal.