Monday, October 25, 2004

Big O notation - Matters of notation

Big O notation - Wikipedia, the free encyclopedia:
The statement "f(x) is O(g(x))" as defined above is often written as f(x) = O(g(x)). This is a slight abuse of notation: we are not really asserting the equality of two functions. For instance, this "equality" is not symmetric:

O(x) = O(x²) but O(x²) ≠ O(x)

For this reason, some authors prefer set notation and write f ∈ O(g), thinking of O(g) as the set of all functions dominated by g.
Furthermore, an "equation" of the form
f(x) = h(x) + O(g(x))
is to be understood as "the difference of f(x) and h(x) is O(g(x))".
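As a minimal sketch of the set reading (my illustration, not part of the quoted article), here is the standard single-variable definition in LaTeX:

  \[
  O(g) = \{\, f \;:\; \exists C > 0,\ \exists x_0,\ |f(x)| \le C\,|g(x)| \text{ for all } x \ge x_0 \,\}
  \]

Under this reading, f ∈ O(g) is ordinary set membership, the asymmetric "equation" above becomes the honest inclusion O(x) ⊆ O(x²) (with O(x²) ⊄ O(x)), and f(x) = h(x) + O(g(x)) unpacks to f − h ∈ O(g).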

In casual use, O is commonly used where Θ is meant, i.e., a tight estimate is implied. For example, one might say "heapsort is O(n log n) in the average case" when the intended meaning is "heapsort is Θ(n log n) in the average case". Both statements are true, but the latter is the stronger claim.
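To spell out why it is stronger, here is a sketch in LaTeX of the usual Knuth-style conventions (the equivalence with O assumes positive functions):

  \[
  f \in \Omega(g) \iff g \in O(f), \qquad \Theta(g) = O(g) \cap \Omega(g)
  \]

So the Θ(n log n) statement asserts a matching lower bound as well as the upper bound; the O(n log n) statement alone would equally be true of an algorithm that runs in linear time.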

**) Here is a hint (and a mnemonic) for why Landau selected these Greek letters: "omicron" is "o-micron", i.e., "o-small", whereas "omega" is "o-mega", i.e., "o-BIG".

