O-notation is difficult to understand in full, particularly its formal underpinnings. For this class, however, a working definition is enough. Suppose f is a C++ function. When we say "f is O(g)" we mean that, for input to f of size n (roughly speaking), f performs no more than g(n) operations, ignoring constant factors.

O-notation is made more confusing by various abuses of its original meaning. For example, it is fairly standard to refer to an algorithm or program as being O(g) for some function g, even though, strictly speaking, it only makes sense to say that one mathematical function is big-O of another mathematical function.
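To make the definition concrete, here is a hypothetical linear search function (not one from our List class). For an array of n elements, the loop body runs at most n times, so taking g(n) = n, we say the function is O(n):

```cpp
#include <cstddef>

// Linear search: returns the index of target in data[0..n-1],
// or n if target is absent. In the worst case (target absent or
// in the last slot) the loop body runs n times, so the operation
// count is bounded by c*n for some constant c -- that is, O(n).
std::size_t linear_search(const int data[], std::size_t n, int target) {
    for (std::size_t i = 0; i < n; ++i)
        if (data[i] == target)
            return i;
    return n;
}
```

Note that the bound must hold for the worst case: if target happens to sit in slot 0, the function returns after one comparison, but O-notation describes the guarantee over all inputs of size n.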
Formal definitions of big-O allow for other abuses. Virtually every algorithm we will consider is O(n!), but describing an algorithm that way conveys almost no information. Therefore, we try to make our O bounds as "tight" as possible: when we say f is O(g), we mean that we know of no better (asymptotically smaller) upper bound than g.
As an example, consider a function for inserting an item at the front of a list implemented with arrays:
```cpp
void List::insert(int entry)
{
    // Shift every element one position to the right,
    // then place the new entry at the front.
    for (size_t i = used; i > 0; i--)
        data[i] = data[i - 1];
    data[0] = entry;
    used++;
}
```

Let n be the number of items in the list (the value of used) when insert is called. In the worst case, the function executes each of the following statements exactly once:

    i = used;    data[0] = entry;    used++;

and each of the following statements n times:

    i > 0;    data[i] = data[i - 1];    i--;

(The test i > 0 is actually evaluated n + 1 times, since one extra evaluation ends the loop, but that extra operation will not matter.) We might write the operation count as 3n + 4. However, we'd prefer just to write it as O(n). O(1) is wrong. O(3n) is correct, but not as simple as O(n). O(n²) isn't wrong, but it isn't "tight" like O(n). Thus, we stick with O(n) as the answer to the question, "What is the worst-case run-time complexity of insert in terms of O-notation?"
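The counting argument can be checked mechanically. Below is a simplified, hypothetical standalone version of the insert (a free function, not the actual List member), instrumented to count the shifts performed by the loop; the count grows linearly with the number of items:

```cpp
#include <cstddef>

// Simplified, free-function version of the front-insert above,
// instrumented to count loop iterations. data must have room for
// used + 1 entries; returns the number of shifts performed.
std::size_t counted_insert(int data[], std::size_t used, int entry) {
    std::size_t shifts = 0;
    for (std::size_t i = used; i > 0; i--) {
        data[i] = data[i - 1];  // shift one element to the right
        shifts++;
    }
    data[0] = entry;
    return shifts;              // equals used: linear in the list size
}
```

Doubling the number of items doubles the count, which is exactly what O(n) predicts once constant factors are ignored.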
Finally, a rough guide to how the common complexity classes compare in practice:

    O(1)        | fast
    O(lg(n))    | fast
    O(n)        | fast for some things, slow for others
    O(n·lg(n))  | fast, compared to O(n²)
    O(n²)       | slow for most (but not all) things
    O(2ⁿ)       | INTRACTABLE
    O(n!)       | INTRACTABLE
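To see why the last two classes are labeled INTRACTABLE, it helps to compute the functions for a modest input size. The helper functions below are hypothetical, written just for this comparison:

```cpp
#include <cstdint>

// Compute n^2, 2^n, and n! for small n, to compare growth rates.
// (Illustration only; 64-bit n! overflows for n > 20, and 2^n for n > 63.)
std::uint64_t square(std::uint64_t n) { return n * n; }

std::uint64_t pow2(std::uint64_t n)   { return std::uint64_t{1} << n; }

std::uint64_t factorial(std::uint64_t n) {
    std::uint64_t f = 1;
    for (std::uint64_t i = 2; i <= n; ++i)
        f *= i;
    return f;
}
```

At n = 20, n² is 400 and 2ⁿ is already 1,048,576; n! is 2,432,902,008,176,640,000. An algorithm doing one operation per nanosecond finishes the quadratic work instantly but needs decades for the factorial work, and the gap only widens as n grows. No faster computer rescues a factorial-time algorithm.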