Asymptotic Analysis ( O , Ω , Θ , o )
ASYMPTOTIC ANALYSIS:

To derive expressions for the best, average, and worst cases, we need to identify upper and lower bounds on an algorithm's cost, and to express those bounds we need some notation. Asymptotic notation is that tool: it measures the growth rate of functions, which for program design usually means the way in which the time or space cost of a program scales with the size of the input.

For all three cases — worst, best, and average — the idea is the same: for a given function f(n) we try to find another function g(n) that approximates f(n) at higher values of n. In other words, g(n) is a curve that approximates f(n) as n grows large.

In mathematics, such a curve is called an asymptotic curve. Since g(n) is an asymptotic curve for f(n), algorithm analysis is called asymptotic analysis.
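A small numeric sketch (the functions here are my own illustrative choices, not from the text) shows this approximation in action: for f(n) = 3n² + 2n + 5 and the simpler curve g(n) = n², the ratio f(n)/g(n) settles toward the constant 3 as n grows, so n² captures f's growth rate.

```python
# Illustrative functions (chosen for this sketch):
# f(n) = 3n^2 + 2n + 5 and its asymptotic curve g(n) = n^2.

def f(n):
    return 3 * n**2 + 2 * n + 5

def g(n):
    return n**2

# As n grows, f(n)/g(n) approaches the constant 3,
# showing g(n) approximates f(n) at higher values of n.
for n in (10, 100, 1000, 10000):
    print(n, f(n) / g(n))
```

The lower-order terms 2n + 5 matter at small n (the ratio is 3.25 at n = 10) but become negligible as n grows, which is exactly why asymptotic analysis ignores them.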


ALGORITHMS EVALUATION CRITERIA:

The complexity of an algorithm M is the function f(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size n of the input data. Often, the storage space required by an algorithm is simply a multiple of the data size n, so complexity here refers to the running time of the algorithm.

The function f(n), which gives the running time of an algorithm, depends not only on the size n of the input data but also on the particular data items. The complexity function f(n) is considered in three standard cases:

1. Best Case: This is the complexity of solving the problem for the best input of size n; that is, the minimum possible value of f(n).

2. Average Case: This is the complexity of solving the problem on an average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n.

3. Worst Case: This is the complexity of solving the problem for the worst input of size n; that is, the maximum value of f(n) over all possible inputs of size n.
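A concrete way to see the three cases (the function name and data below are my own example) is linear search, where f(n) counts the comparisons performed: the best input finds the target in the first position, while the worst input places it last or omits it entirely.

```python
def linear_search_comparisons(items, target):
    """Return the number of comparisons linear search performs."""
    for i, item in enumerate(items):
        if item == target:
            return i + 1          # found after i + 1 comparisons
    return len(items)             # target absent: all n comparisons

data = [7, 3, 9, 1, 5]
print(linear_search_comparisons(data, 7))   # best case: 1 comparison
print(linear_search_comparisons(data, 5))   # worst case (found last): 5
print(linear_search_comparisons(data, 42))  # worst case (absent): 5
```

Under the uniform distribution mentioned above, the average case for a present target is (n + 1)/2 comparisons, sitting between the best case of 1 and the worst case of n.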


Rate of Growth (In Short):

1. Big–Oh (O): f(n) = O(g(n)) (pronounced "big oh" or "order of") says that the growth rate of f(n) is less than or equal to that of g(n). Gives an upper bound.



2. Big–Omega (Ω): f(n) = Ω(g(n)) (pronounced "omega") says that the growth rate of f(n) is greater than or equal to that of g(n). Gives a lower bound.



3. Big–Theta (Θ): f(n) = Θ(g(n)) (pronounced "theta") says that the growth rate of f(n) equals that of g(n) [f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n))]. Gives an asymptotically tight bound.



4. Little–Oh (o): f(n) = o(g(n)) (pronounced "little oh") says that the growth rate of f(n) is strictly less than that of g(n) [f(n) = O(g(n)) but f(n) ≠ Θ(g(n))]. Gives a strict (non-tight) upper bound.
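The four notations can be explored numerically through the limit intuition behind them (the helper name and sample functions below are my own): watching the ratio f(n)/g(n) as n grows suggests which relation holds — a ratio shrinking toward 0 suggests f(n) = o(g(n)), a ratio settling at a positive constant suggests f(n) = Θ(g(n)), and a ratio growing without bound suggests f(n) = Ω(g(n)) but not O(g(n)).

```python
def ratio(f, g, n):
    """Compare growth rates by evaluating f(n)/g(n)."""
    return f(n) / g(n)

f1 = lambda n: 5 * n + 3        # linear
f2 = lambda n: 2 * n**2         # quadratic
g  = lambda n: n**2             # candidate bound

for n in (10, 1000, 100000):
    print(n, ratio(f1, g, n), ratio(f2, g, n))

# f1/g shrinks toward 0   -> 5n + 3 = o(n^2), hence also O(n^2)
# f2/g stays fixed at 2   -> 2n^2  = Θ(n^2): both O(n^2) and Ω(n^2)
```

This is only a heuristic check at finite n, not a proof — the formal definitions quantify over all n beyond some threshold n₀ — but it makes the distinction between a loose upper bound (o) and a tight bound (Θ) tangible.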
