The basic idea here is to give programmers a way to compare the performance of algorithms at a very high level, in a language-, platform-, and architecture-independent way. This becomes particularly important when dealing with large inputs.
Big O notation describes the worst-case running time of an algorithm as a function of its input size.
The idea behind O(n) is that for an input of size n, the algorithm does roughly one operation per element, so the running time grows in direct proportion to n.
function in_array(val, arr) {
for (var i = 0; i < arr.length; i++) {
if (arr[i] === val) return true;
}
return false;
}
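In the worst case, when val is not in the array at all, the loop visits every element before giving up, so the number of comparisons equals the array's length. A small sketch to make that visible (the comparisons counter and the in_array_counted name are just for illustration):

```javascript
// Variant of the linear scan above that also counts comparisons,
// to show that the worst case touches every element exactly once.
function in_array_counted(val, arr) {
  var comparisons = 0;
  for (var i = 0; i < arr.length; i++) {
    comparisons++;
    if (arr[i] === val) return { found: true, comparisons: comparisons };
  }
  return { found: false, comparisons: comparisons };
}

// Worst case: the value is absent, so all 8 elements are compared.
var miss = in_array_counted(-1, [3, 1, 4, 1, 5, 9, 2, 6]);
// miss.found === false, miss.comparisons === 8
```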
The idea behind O(log n) is that for an input of size n, we only need to look at a small fraction of the elements to get the result; each step lets us skip over the vast majority of what remains.
function find_n_in_sorted_list(n, list) {
  var low = 0;
  var high = list.length - 1;
  while (low <= high) {
    // Check the middle of the remaining range, then discard the
    // half that cannot contain n (list must be sorted ascending).
    var middle = Math.floor((low + high) / 2);
    if (list[middle] === n) return n;
    if (list[middle] < n) low = middle + 1;
    else high = middle - 1;
  }
  return null;
}
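To make the skipping concrete: halving the remaining range each step means a sorted list of n elements needs only about log2(n) comparisons, so a list of 1,024 elements is resolved in at most 11 steps. A self-contained sketch (assuming list is sorted in ascending order; the steps counter is just for illustration):

```javascript
// Binary search that counts how many times the range is halved.
function binary_search_steps(n, list) {
  var low = 0, high = list.length - 1, steps = 0;
  while (low <= high) {
    steps++;
    var middle = Math.floor((low + high) / 2);
    if (list[middle] === n) return { found: true, steps: steps };
    if (list[middle] < n) low = middle + 1;
    else high = middle - 1;
  }
  return { found: false, steps: steps };
}

// A sorted list of 1024 elements: even a miss takes at most 11 steps,
// nowhere near the 1024 a linear scan would need.
var list = [];
for (var i = 0; i < 1024; i++) list.push(i);
var miss = binary_search_steps(-1, list);
// miss.found === false, miss.steps <= 11
```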
The idea behind O(1) is that the running time is constant: it stays the same no matter how large the input is.
function is_number(input) {
  // typeof catches number primitives; `input instanceof Number`
  // would only match boxed Number objects, not values like 5.
  return typeof input === "number";
}
The idea behind O(n^2) is that for each of the n inputs, the algorithm does close to n operations, so the total work grows quadratically with the input size.
function has_number_in_common(arr1, arr2) {
for (var i = 0; i < arr1.length; i++) {
for (var j = 0; j < arr2.length; j++) {
if (arr1[i] === arr2[j]) return true;
}
}
return false;
}
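Counting the comparisons makes the quadratic growth visible: when the arrays share no element, the inner loop runs arr2.length times for every element of arr1, so the total is arr1.length × arr2.length. A sketch (the common_number_comparisons name and counter are just for illustration):

```javascript
// Variant of the nested loop above that counts comparisons,
// to show the n * m worst case when nothing matches.
function common_number_comparisons(arr1, arr2) {
  var comparisons = 0;
  for (var i = 0; i < arr1.length; i++) {
    for (var j = 0; j < arr2.length; j++) {
      comparisons++;
      if (arr1[i] === arr2[j]) return comparisons;
    }
  }
  return comparisons;
}

// Worst case: nothing in common, so every pair is compared.
var count = common_number_comparisons([1, 2, 3, 4], [5, 6, 7, 8]);
// count === 16, i.e. 4 * 4
```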