@liamgriffiths
Last active May 6, 2019 12:01

Revisions

  1. liamgriffiths revised this gist Jan 10, 2014. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion big_o_notation.md
    @@ -126,6 +126,6 @@ function bogosort(arr) {
    - [Stackoverflow: Plain English explaination of Big-O](https://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o)
    - [Wikipedia: Big O Notation](https://en.wikipedia.org/wiki/Big_O_notation)
    - [Wikipedia: Time Complexity](https://en.wikipedia.org/wiki/Time_complexity#Constant_time)
    -- [Big O Cheatsheet, common algorithims and their times](http://bigocheatsheet.com/)
    +- [Big O Cheatsheet, common algorithms and their times](http://bigocheatsheet.com/)
    - [MIT lecture on Big O and time complexity](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-23-computational-complexity/)
    - Introduction to Algorithms [Amazon](http://www.amazon.com/Introduction-Algorithms-Electrical-Engineering-Computer/dp/0262031418) / [older HTML version](http://staff.ustc.edu.cn/~csli/graduate/algorithms/book6/toc.htm)
  2. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 2 additions and 4 deletions.
    6 changes: 2 additions & 4 deletions big_o_notation.md
    @@ -112,13 +112,11 @@ function sum_all_pairs(arr) {

    ### O(n!), O(n * n!) - factorial or combinatorial complexity

    -This is not where you want to be. Factorial complexity means that for every input n you may just have to do n! operations on each n. Algorithims with this time complexity may not be able to finish before the universe ends. An exaple of such an algorithm could be the [Travelling Salesman Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem). A simpler example might be the [Bogosort](https://en.wikipedia.org/wiki/Bogosort).
    +This is not where you want to be. Factorial complexity means that for every input n you may just have to do n! operations on each n. Algorithims with this time complexity may not be able to finish before the universe ends. An exaple of such an algorithm could be the [Travelling Salesman Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem) via brute force searching. A simpler example might be the [Bogosort](https://en.wikipedia.org/wiki/Bogosort).

    ```javascript
    function bogosort(arr) {
    var shuffle = function(list) { return list.sort(function() { return Math.random() - 0.5; }); };
    var isSorted = function(list) {}
    -while (! isSorted(arr) arr = shuffle(arr);
    +while (! isSorted(arr)) arr = shuffle(arr);
    return arr;
    }
    ```
  3. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion big_o_notation.md
    @@ -110,7 +110,7 @@ function sum_all_pairs(arr) {
    }
    ```

    -### O(!n), O(n * n!) - factorial or combinatorial complexity
    +### O(n!), O(n * n!) - factorial or combinatorial complexity

    This is not where you want to be. Factorial complexity means that for every input n you may just have to do n! operations on each n. Algorithims with this time complexity may not be able to finish before the universe ends. An exaple of such an algorithm could be the [Travelling Salesman Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem). A simpler example might be the [Bogosort](https://en.wikipedia.org/wiki/Bogosort).

  4. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions big_o_notation.md
    @@ -33,7 +33,7 @@ function in_array(val, arr) {

    ### O(n log n)

    -The idea here is that you must go through each element of n plus some of each element of n again. An example of this would be the quicksort algorithim because we must touch each element in the list then again for a smaller and smaller fraction of them as we start sorting around the pivot.
    +The idea here is that you must go through each element of n plus some of each element of n again. An example of this would be the [Quicksort](https://en.wikipedia.org/wiki/Quicksort) algorithim because we must touch each element in the list then again for a smaller and smaller fraction of them as we start sorting around the pivot.

    ```javascript
    function quicksort(list) {
    @@ -57,7 +57,7 @@ function quicksort(list) {

    ### O(log n) - logarithmic complexity

    -The idea here is that for some input n, we only need a small fraction of n to return the result. We can skip over the vast marjority of n's. An example of this might be searching through a tree structure, where we don't have to look at a lot of the elements.
    +The idea here is that for some input n, we only need a small fraction of n to return the result. We can skip over the vast marjority of n's. An example of this might be searching through a [Binary Search Tree](https://en.wikipedia.org/wiki/Binary_search_tree), where we don't have to look at a lot of the elements.

    ```javascript
    BinarySearchTree.prototype.search = function(value, start) {
  5. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 13 additions and 0 deletions.
    13 changes: 13 additions & 0 deletions big_o_notation.md
    @@ -110,6 +110,19 @@ function sum_all_pairs(arr) {
    }
    ```

    +### O(!n), O(n * n!) - factorial or combinatorial complexity
    +
    +This is not where you want to be. Factorial complexity means that for every input n you may just have to do n! operations on each n. Algorithims with this time complexity may not be able to finish before the universe ends. An exaple of such an algorithm could be the [Travelling Salesman Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem). A simpler example might be the [Bogosort](https://en.wikipedia.org/wiki/Bogosort).
    +
    +```javascript
    +function bogosort(arr) {
    +var shuffle = function(list) { return list.sort(function() { return Math.random() - 0.5; }); };
    +var isSorted = function(list) {}
    +while (! isSorted(arr) arr = shuffle(arr);
    +return arr;
    +}
    +```
    ## Some useful links
    - [Stackoverflow: Plain English explaination of Big-O](https://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o)
  6. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion big_o_notation.md
    @@ -117,4 +117,4 @@ function sum_all_pairs(arr) {
    - [Wikipedia: Time Complexity](https://en.wikipedia.org/wiki/Time_complexity#Constant_time)
    - [Big O Cheatsheet, common algorithims and their times](http://bigocheatsheet.com/)
    - [MIT lecture on Big O and time complexity](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-23-computational-complexity/)
    -- [Introduction to Algorithms](http://www.amazon.com/Introduction-Algorithms-Electrical-Engineering-Computer/dp/0262031418)
    +- Introduction to Algorithms [Amazon](http://www.amazon.com/Introduction-Algorithms-Electrical-Engineering-Computer/dp/0262031418) / [older HTML version](http://staff.ustc.edu.cn/~csli/graduate/algorithms/book6/toc.htm)
  7. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 6 additions and 5 deletions.
    11 changes: 6 additions & 5 deletions big_o_notation.md
    @@ -112,8 +112,9 @@ function sum_all_pairs(arr) {

    ## Some useful links

    -[Stackoverflow: Plain English explaination of Big-O](https://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o)
    -[Wikipedia: Big O Notation](https://en.wikipedia.org/wiki/Big_O_notation)
    -[Wikipedia: Time Complexity](https://en.wikipedia.org/wiki/Time_complexity#Constant_time)
    -[Big O Cheatsheet, common algorithims and their times](http://bigocheatsheet.com/)
    -[MIT lecture on Big O and time complexity](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-23-computational-complexity/)
    +- [Stackoverflow: Plain English explaination of Big-O](https://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o)
    +- [Wikipedia: Big O Notation](https://en.wikipedia.org/wiki/Big_O_notation)
    +- [Wikipedia: Time Complexity](https://en.wikipedia.org/wiki/Time_complexity#Constant_time)
    +- [Big O Cheatsheet, common algorithims and their times](http://bigocheatsheet.com/)
    +- [MIT lecture on Big O and time complexity](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-23-computational-complexity/)
    +- [Introduction to Algorithms](http://www.amazon.com/Introduction-Algorithms-Electrical-Engineering-Computer/dp/0262031418)
  8. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 5 additions and 1 deletion.
    6 changes: 5 additions & 1 deletion big_o_notation.md
    @@ -112,4 +112,8 @@ function sum_all_pairs(arr) {

    ## Some useful links

    -[Stackoverflow: Plain English explaination of Big-O](https://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o)
    +[Stackoverflow: Plain English explaination of Big-O](https://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o)
    +[Wikipedia: Big O Notation](https://en.wikipedia.org/wiki/Big_O_notation)
    +[Wikipedia: Time Complexity](https://en.wikipedia.org/wiki/Time_complexity#Constant_time)
    +[Big O Cheatsheet, common algorithims and their times](http://bigocheatsheet.com/)
    +[MIT lecture on Big O and time complexity](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-videos/lecture-23-computational-complexity/)
  9. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 1 addition and 3 deletions.
    4 changes: 1 addition & 3 deletions big_o_notation.md
    @@ -4,9 +4,6 @@ The basic idea here is to give programmers a way to compare the performance of a

    Big O notion may consider the worst-case, average-case, and best-case running time of an algorithm.


    -![Chart of times](http://i.stack.imgur.com/Aq09a.png)

    When describing an algorithm using Big O notation you will drop the lower-oder values. This is because when the inputs become very large, the lower-order values become far less important. For example:

    | Original | Becomes |
    @@ -19,6 +16,7 @@ When describing an algorithm using Big O notation you will drop the lower-oder v

    ## Common times and examples

    +![Chart of times](http://i.stack.imgur.com/Aq09a.png)

    ### O(n) - linear time

  10. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion big_o_notation.md
    @@ -17,7 +17,7 @@ When describing an algorithm using Big O notation you will drop the lower-oder v
    | O(1.0001^n) | O(2^n) |


    -## Some explaination of some common times and examples
    +## Common times and examples


    ### O(n) - linear time
  11. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 10 additions and 6 deletions.
    16 changes: 10 additions & 6 deletions big_o_notation.md
    @@ -9,12 +9,16 @@ Big O notion may consider the worst-case, average-case, and best-case running ti

    When describing an algorithm using Big O notation you will drop the lower-oder values. This is because when the inputs become very large, the lower-order values become far less important. For example:

    -| Original | Becomes |
    -| ------ | ------ |
    -| O(2n) | O(n) |
    -| O(log(n, 11) + 3) | O(log n) |
    -| O(n^2.0001) | O(n^2) |
    -| O(1.0001^n) | O(2^n) |
    +| Original | Becomes |
    +| ----------------- | --------- |
    +| O(2n) | O(n) |
    +| O(log(n, 11) + 3) | O(log n) |
    +| O(n^2.0001) | O(n^2) |
    +| O(1.0001^n) | O(2^n) |


    +## Some explaination of some common times and examples


    ### O(n) - linear time

  12. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 5 additions and 1 deletion.
    6 changes: 5 additions & 1 deletion big_o_notation.md
    @@ -106,4 +106,8 @@ function sum_all_pairs(arr) {
    }
    return result;
    }
    -```
    +```
    +
    +## Some useful links
    +
    +[Stackoverflow: Plain English explaination of Big-O](https://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o)
  13. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 6 additions and 6 deletions.
    12 changes: 6 additions & 6 deletions big_o_notation.md
    @@ -9,12 +9,12 @@ Big O notion may consider the worst-case, average-case, and best-case running ti

    When describing an algorithm using Big O notation you will drop the lower-oder values. This is because when the inputs become very large, the lower-order values become far less important. For example:

    -```
    -O(2n) becomes O(n)
    -O(log(n, 11) + 3) becomes O(log n)
    -O(n^2.0001) becomes O(n^2)
    -O(1.0001^n) becomes O(2^n)
    -```
    +| Original | Becomes |
    +| ------ | ------ |
    +| O(2n) | O(n) |
    +| O(log(n, 11) + 3) | O(log n) |
    +| O(n^2.0001) | O(n^2) |
    +| O(1.0001^n) | O(2^n) |

    ### O(n) - linear time

  14. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 9 additions and 0 deletions.
    9 changes: 9 additions & 0 deletions big_o_notation.md
    @@ -7,6 +7,15 @@ Big O notion may consider the worst-case, average-case, and best-case running ti

    ![Chart of times](http://i.stack.imgur.com/Aq09a.png)

    +When describing an algorithm using Big O notation you will drop the lower-oder values. This is because when the inputs become very large, the lower-order values become far less important. For example:
    +
    +```
    +O(2n) becomes O(n)
    +O(log(n, 11) + 3) becomes O(log n)
    +O(n^2.0001) becomes O(n^2)
    +O(1.0001^n) becomes O(2^n)
    +```
    +
    ### O(n) - linear time

    For n inputs, an algorithm does roughly one operation for each input in n.
  15. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion big_o_notation.md
    @@ -2,7 +2,7 @@

    The basic idea here is to give programmers a way to compare the performance of algorithms at a very high level in a language/platform/architecture independent way. This becomes particularly important when dealing with large inputs.

    -Big O notion considers the worst-case running time of an algorithm.
    +Big O notion may consider the worst-case, average-case, and best-case running time of an algorithm.


    ![Chart of times](http://i.stack.imgur.com/Aq09a.png)
  16. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 13 additions and 13 deletions.
    26 changes: 13 additions & 13 deletions big_o_notation.md
    @@ -49,24 +49,24 @@ function quicksort(list) {
    The idea here is that for some input n, we only need a small fraction of n to return the result. We can skip over the vast marjority of n's. An example of this might be searching through a tree structure, where we don't have to look at a lot of the elements.

    ```javascript
    -BinarySearchTree.prototype.search = function(value, start) {
    -var node = start || this.root;
    -while (node) {
    -if (value < node.value) {
    -node = node.left;
    -} else if (value > node.value) {
    -node = node.right;
    -} else if (node.value === value) {
    -break;
    -}
    -}
    -return node;
    +BinarySearchTree.prototype.search = function(value, start) {
    +var node = start || this.root;
    +while (node) {
    +if (value < node.value) {
    +node = node.left;
    +} else if (value > node.value) {
    +node = node.right;
    +} else if (node.value === value) {
    +break;
    +}
    +}
    +return node;
    };
    ```

    ### O(1) - constant time

    -The idea behind O(1) is that for any input the time is exactly the same.
    +The idea behind O(1) is that for any input the time is exactly the same. In other words the time of the algorithm is not dependent on the size of the input.

    ```javascript
    function is_true(bool) {
  17. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 17 additions and 16 deletions.
    33 changes: 17 additions & 16 deletions big_o_notation.md
    @@ -22,7 +22,7 @@ function in_array(val, arr) {

    ### O(n log n)

    -The idea here is that you must go through each element of n plus some of each element of n again. An example of this would be the quicksort algorithim.
    +The idea here is that you must go through each element of n plus some of each element of n again. An example of this would be the quicksort algorithim because we must touch each element in the list then again for a smaller and smaller fraction of them as we start sorting around the pivot.

    ```javascript
    function quicksort(list) {
    @@ -46,30 +46,31 @@ function quicksort(list) {

    ### O(log n) - logarithmic complexity

    -The idea here is that for some input n, we only need a small fraction of n to return the result. We can skip over the vast marjority of n's.
    +The idea here is that for some input n, we only need a small fraction of n to return the result. We can skip over the vast marjority of n's. An example of this might be searching through a tree structure, where we don't have to look at a lot of the elements.

    ```javascript
    -function find_n_in_sorted_list(n, list) {
    -var middle = Math.floor(list.length / 2);
    -if (middle > n) {
    -for (var i = middle; i < list.length; i++) {
    -if (list[i] === n) return n;
    -}
    -} else {
    -for (var i = 0; i < middle; i++) {
    -if (list[i] === n) return n;
    -}
    -}
    -}
    +BinarySearchTree.prototype.search = function(value, start) {
    +var node = start || this.root;
    +while (node) {
    +if (value < node.value) {
    +node = node.left;
    +} else if (value > node.value) {
    +node = node.right;
    +} else if (node.value === value) {
    +break;
    +}
    +}
    +return node;
    +};
    ```

    ### O(1) - constant time

    The idea behind O(1) is that for any input the time is exactly the same.

    ```javascript
    -function is_number(input) {
    -return input instanceof Number;
    +function is_true(bool) {
    +return !!bool;
    }
    ```

  18. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions big_o_notation.md
    @@ -25,7 +25,7 @@ function in_array(val, arr) {
    The idea here is that you must go through each element of n plus some of each element of n again. An example of this would be the quicksort algorithim.

    ```javascript
    -function sort(list) {
    +function quicksort(list) {
    if (list.length <= 1) return list;

    var pivot = list.splice(Math.floor(list.length / 2), 1);
    @@ -39,7 +39,7 @@ function sort(list) {
    greater.push(item);
    }
    }
    -return sort(less).concat(pivot).concat(sort(greater));
    +return quicksort(less).concat(pivot).concat(quicksort(greater));
    };
    ```

  19. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 3 additions and 0 deletions.
    3 changes: 3 additions & 0 deletions big_o_notation.md
    @@ -4,6 +4,9 @@ The basic idea here is to give programmers a way to compare the performance of a

    Big O notion considers the worst-case running time of an algorithm.


    +![Chart of times](http://i.stack.imgur.com/Aq09a.png)

    ### O(n) - linear time

    For n inputs, an algorithm does roughly one operation for each input in n.
  20. liamgriffiths revised this gist Jan 9, 2014. 1 changed file with 32 additions and 1 deletion.
    33 changes: 32 additions & 1 deletion big_o_notation.md
    @@ -19,6 +19,27 @@ function in_array(val, arr) {

    ### O(n log n)

    +The idea here is that you must go through each element of n plus some of each element of n again. An example of this would be the quicksort algorithim.
    +
    +```javascript
    +function sort(list) {
    +if (list.length <= 1) return list;
    +
    +var pivot = list.splice(Math.floor(list.length / 2), 1);
    +var less = [], greater = [];
    +
    +while (list.length) {
    +var item = list.shift();
    +if (item <= pivot) {
    +less.push(item);
    +} else {
    +greater.push(item);
    +}
    +}
    +return sort(less).concat(pivot).concat(sort(greater));
    +};
    +```


    ### O(log n) - logarithmic complexity

    @@ -51,7 +72,7 @@ function is_number(input) {

    ### O(n^2) - quadratic time

    -For n inputs, an algo operates on each input close to n times.
    +For n inputs, an algo operates on each input close to n times. A couple examples here might be to check for a number in common between two arrays or taking the sum of all pairs of numbers in an array.

    ```javascript
    function has_number_in_common(arr1, arr2) {
    @@ -62,4 +83,14 @@ function has_number_in_common(arr1, arr2) {
    }
    return false;
    }

    +function sum_all_pairs(arr) {
    +var result = [];
    +for (var i = 0; i < arr.length; i++) {
    +for (var j = 0; j < arr.length; j++) {
    +if (i =! j) result.push(arr[i] + arr[j]);
    +}
    +}
    +return result;
    +}
    ```
  21. liamgriffiths revised this gist Jan 6, 2014. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion big_o_notation.md
    @@ -45,7 +45,7 @@ The idea behind O(1) is that for any input the time is exactly the same.

    ```javascript
    function is_number(input) {
    -return input instaceof Number;
    +return input instanceof Number;
    }
    ```

  22. liamgriffiths revised this gist Jan 6, 2014. 1 changed file with 20 additions and 3 deletions.
    23 changes: 20 additions & 3 deletions big_o_notation.md
    @@ -2,6 +2,8 @@

    The basic idea here is to give programmers a way to compare the performance of algorithms at a very high level in a language/platform/architecture independent way. This becomes particularly important when dealing with large inputs.

    +Big O notion considers the worst-case running time of an algorithm.

    ### O(n) - linear time

    For n inputs, an algorithm does roughly one operation for each input in n.
    @@ -18,7 +20,24 @@ function in_array(val, arr) {
    ### O(n log n)


    -### O(log n)
    +### O(log n) - logarithmic complexity
    +
    +The idea here is that for some input n, we only need a small fraction of n to return the result. We can skip over the vast marjority of n's.
    +
    +```javascript
    +function find_n_in_sorted_list(n, list) {
    +var middle = Math.floor(list.length / 2);
    +if (middle > n) {
    +for (var i = middle; i < list.length; i++) {
    +if (list[i] === n) return n;
    +}
    +} else {
    +for (var i = 0; i < middle; i++) {
    +if (list[i] === n) return n;
    +}
    +}
    +}
    +```

    ### O(1) - constant time

    @@ -30,8 +49,6 @@ function is_number(input) {
    }
    ```



    ### O(n^2) - quadratic time

    For n inputs, an algo operates on each input close to n times.
  23. liamgriffiths revised this gist Jan 6, 2014. 1 changed file with 17 additions and 0 deletions.
    17 changes: 17 additions & 0 deletions big_o_notation.md
    @@ -15,6 +15,23 @@ function in_array(val, arr) {
    }
    ```

    +### O(n log n)
    +
    +
    +### O(log n)
    +
    +### O(1) - constant time
    +
    +The idea behind O(1) is that for any input the time is exactly the same.
    +
    +```javascript
    +function is_number(input) {
    +return input instaceof Number;
    +}
    +```



    ### O(n^2) - quadratic time

    For n inputs, an algo operates on each input close to n times.
  24. liamgriffiths created this gist Jan 6, 2014.
    31 changes: 31 additions & 0 deletions big_o_notation.md
    @@ -0,0 +1,31 @@
    # Big O notation (Asymptotic Notation)

    The basic idea here is to give programmers a way to compare the performance of algorithms at a very high level in a language/platform/architecture independent way. This becomes particularly important when dealing with large inputs.

    ### O(n) - linear time

    For n inputs, an algorithm does roughly one operation for each input in n.

    ```javascript
    function in_array(val, arr) {
    for (var i = 0; i < arr.length; i++) {
    if (arr[i] === val) return true;
    }
    return false;
    }
    ```

    ### O(n^2) - quadratic time

    For n inputs, an algo operates on each input close to n times.

    ```javascript
    function has_number_in_common(arr1, arr2) {
    for (var i = 0; i < arr1.length; i++) {
    for (var j = 0; j < arr2.length; j++) {
    if (arr1[i] === arr2[j]) return true;
    }
    }
    return false;
    }
    ```
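
The bogosort example (revisions 2 and 5 above) leaves the `isSorted` helper as an empty stub, so the loop can never observe a sorted array as written. A minimal working sketch, with the helper filled in as an assumption about what was intended (the helper body is not part of the gist):

```javascript
// Sketch of the gist's bogosort with isSorted implemented.
// The gist leaves isSorted as an empty stub; this version of the
// helper is an assumption, not the original author's code.
function bogosort(arr) {
  var shuffle = function(list) {
    return list.sort(function() { return Math.random() - 0.5; });
  };
  var isSorted = function(list) {
    for (var i = 1; i < list.length; i++) {
      if (list[i - 1] > list[i]) return false;
    }
    return true;
  };
  while (!isSorted(arr)) arr = shuffle(arr);
  return arr;
}

// Example: bogosort([3, 1, 2]) eventually returns [1, 2, 3].
```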
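Likewise, `sum_all_pairs` (revision 20 above) checks `if (i =! j)`, which assigns `!j` to `i` rather than comparing the two indices; `i !== j` is presumably the intent. A corrected sketch, still O(n^2) in the array length:

```javascript
// sum_all_pairs with the comparison fixed: `i =! j` assigns !j to i,
// while `i !== j` skips pairing an element with itself (assumed intent).
function sum_all_pairs(arr) {
  var result = [];
  for (var i = 0; i < arr.length; i++) {
    for (var j = 0; j < arr.length; j++) {
      if (i !== j) result.push(arr[i] + arr[j]);
    }
  }
  return result;
}

// Example: sum_all_pairs([1, 2]) returns [3, 3].
```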