---
category: Algorithms & Data Structures
contributors:
- ["Jake Prather", "http://github.com/JakeHP"]
---
# Asymptotic Notations
## What are they?
Asymptotic Notations are languages that allow us to analyze an algorithm's running time by
identifying its behavior as the input size for the algorithm increases. This is also known as
an algorithm's growth rate. Does the algorithm suddenly become incredibly slow when the input
size grows? Does it mostly maintain its quick run time as the input size increases?
Asymptotic Notation gives us the ability to answer these questions.
## Are there alternatives to answering these questions?
One way would be to count the number of primitive operations at different input sizes.
Though this is a valid solution, the amount of work this takes for even simple algorithms
does not justify its use.
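
As a rough illustration, here is a sketch of what operation counting looks like (in Python; `linear_search_ops` is a hypothetical helper written just for this example):

```
# Count the "primitive operations" a linear search performs.
def linear_search_ops(items, target):
    ops = 0
    for item in items:
        ops += 1            # one comparison per element examined
        if item == target:
            return ops
    return ops

# Worst case (target absent): the count grows linearly with input size.
for n in [10, 100, 1000]:
    print(n, linear_search_ops(list(range(n)), -1))
```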
Another way is to physically measure the amount of time an algorithm takes to complete
given different input sizes. However, the accuracy and relativity (times obtained would
only be relative to the machine they were computed on) of this method are bound to
environmental variables such as computer hardware specifications, processing power, etc.
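
As an illustration, a minimal timing sketch (in Python, using the standard `timeit` and `random` modules; sorting is just a stand-in for whatever algorithm you are measuring) might look like this:

```
import random
import timeit

# Time a sort at several input sizes. The absolute numbers are only
# meaningful on the machine that produced them, which is exactly the
# weakness of this approach.
for n in [1000, 10000, 100000]:
    data = [random.random() for _ in range(n)]
    seconds = timeit.timeit(lambda: sorted(data), number=10)
    print(n, seconds)
```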
## Types of Asymptotic Notation
In the first section of this doc we described how an Asymptotic Notation identifies the
behavior of an algorithm as the input size changes. Let us imagine an algorithm as a function
f, n as the input size, and f(n) being the running time. So for a given algorithm f, with input
size n, you get some resultant run time f(n). This results in a graph where the Y axis is the
runtime, the X axis is the input size, and the plot points are the running times at each
input size.
You can label a function, or algorithm, with an Asymptotic Notation in many different ways.
For example, you can describe an algorithm by its best case, worst case, or average case.
The most common is to analyze an algorithm by its worst case. This is because if you determine an
algorithm's run time, or time complexity, by its best case, what if that best case is only obtained
at a low, unrealistic input size? It is equivalent to holding a 5 meter sprinting race.
That isn't the best measurement.
### Types of functions, limits, and simplification
```
Logarithmic Function - log n
Linear Function - an + b
Quadratic Function - an^2 + bn + c
Polynomial Function - a*n^z + . . . + a*n^2 + a*n^1 + a*n^0, where z is some constant
Exponential Function - a^n, where a is some constant
```
These are some basic function growth classifications used in various notations. The list starts with the slowest
growing function (logarithmic) and ends with the fastest growing (exponential). Notice that as 'n', or the input,
increases in each of those functions, the result grows much more quickly in the quadratic, polynomial, and
exponential functions than in the logarithmic and linear ones.
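
To see this concretely, here is a small sketch (in Python) that evaluates each growth class at a few input sizes, with all coefficients set to 1 for simplicity:

```
import math

# Tabulate each growth class at a few input sizes to compare how
# quickly the values climb.
print(f"{'n':>4} {'log n':>8} {'n':>6} {'n^2':>8} {'2^n':>12}")
for n in [10, 20, 30]:
    print(f"{n:>4} {math.log2(n):>8.2f} {n:>6} {n**2:>8} {2**n:>12}")
```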
One extremely important note is that, for the notations about to be discussed, you should do your best to use the simplest terms.
This means disregarding constants and lower-order terms, because as the input size (or n in our f(n)
example) increases to infinity (mathematical limits), the lower-order terms and constants are of little
to no importance. That being said, if you have constants that are 2^9001, or some other ridiculously
large amount, realize that simplifying will skew your notation's accuracy.
Since we want the simplest form, let's modify our table a bit...
```
Logarithmic - log n
Linear - n
Quadratic - n^2
Polynomial - n^z, where z is some constant
Exponential - a^n, where a is some constant
```
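
As a sanity check on why dropping constants and lower-order terms is safe, here is a small sketch (in Python) showing that 3n^2 + 5n + 100 is eventually dominated by its n^2 term:

```
# As n grows, the ratio of the full expression to its dominant term
# approaches the leading coefficient (3), so the lower-order term and
# the constant stop mattering.
def f(n):
    return 3 * n**2 + 5 * n + 100

for n in [10, 100, 1000, 10000]:
    print(n, f(n) / n**2)
```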
### Big-Oh
Big-Oh, commonly written as O, is an Asymptotic Notation for the worst case, or ceiling of growth,
for a given function. Say f(n) is your algorithm's runtime, and g(n) is an arbitrary time complexity
you are trying to relate to your algorithm. f(n) is O(g(n)) if, for some real constant c (c > 0)
and some input size n0, f(n) <= c g(n) for every input size n (n > n0).
*Example 1*
```
f(n) = 3 log n + 100
g(n) = log n
```
Is f(n) O(g(n))?
Is 3 log n + 100 O(log n)?
Let's look at the definition of Big-Oh.

3 log n + 100 <= c * log n

Is there some constant c that satisfies this for every n past some n0?

3 log n + 100 <= 150 * log n, for all n > 2 (the bound fails at n = 1, where log n = 0)

Yes! The definition of Big-Oh has been met, therefore f(n) is O(g(n)).
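
A quick numeric spot check of that bound (a Python sketch, using base-2 logarithms; a check like this is not a proof, just a sanity test):

```
import math

# Spot-check that 3 log n + 100 <= 150 log n for a range of n > 2.
f = lambda n: 3 * math.log2(n) + 100
g = lambda n: math.log2(n)

assert all(f(n) <= 150 * g(n) for n in range(3, 100000))
print("The bound holds for every tested n.")
```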
*Example 2*
```
f(n) = 3*n^2
g(n) = n
```
Is f(n) O(g(n))?
Is 3*n^2 O(n)?
Let's look at the definition of Big-Oh.

3*n^2 <= c * n

Is there some constant c that satisfies this for every n past some n0?

No, there isn't. Dividing both sides by n gives 3*n <= c, and no fixed constant c can stay above
3*n as n grows without bound. f(n) is NOT O(g(n)).
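
The same kind of spot check (a Python sketch) makes the failure visible: the smallest constant c that works at a given n is 3*n, which keeps growing:

```
# For 3*n^2 <= c*n to hold at a given n, c must be at least 3*n,
# and that requirement is unbounded as n grows.
f = lambda n: 3 * n**2
g = lambda n: n

for n in [10, 100, 1000, 10000]:
    print(n, f(n) / g(n))  # the smallest workable c at this n
```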
### Big-Omega
Big-Omega, commonly written as Ω, is an Asymptotic Notation for the best case, or a floor growth rate,
for a given function.
f(n) is Ω(g(n)) if, for some real constant c (c > 0) and some input size n0, f(n) >= c g(n) for every input size n (n > n0).
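
As a quick sketch (in Python, with hypothetical example functions), checking that f(n) = 3n^2 + 2n is Ω(n^2) with c = 3:

```
# f(n) = 3*n^2 + 2*n is Ω(n^2): with c = 3, f(n) >= 3 * n^2 for all n >= 1.
f = lambda n: 3 * n**2 + 2 * n
g = lambda n: n**2

assert all(f(n) >= 3 * g(n) for n in range(1, 100000))
print("The lower bound holds for every tested n.")
```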
Feel free to head over to the additional resources for more examples on this. Big-Oh is the primary notation used
for general algorithm time complexity.
### Ending Notes
It's hard to keep this kind of topic short, and you should definitely go through the books and online
resources listed. They go into much greater depth with definitions and examples.

More where x='Algorithms & Data Structures' is on its way; we'll have a doc up on analyzing actual
code examples soon.
## Books
* [Algorithms](http://www.amazon.com/Algorithms-4th-Robert-Sedgewick/dp/032157351X)
* [Algorithm Design](http://www.amazon.com/Algorithm-Design-Foundations-Analysis-Internet/dp/0471383651)
## Online Resources
* [MIT](http://web.mit.edu/16.070/www/lecture/big_o.pdf)
* [KhanAcademy](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation)