First, an explanation of big O notation. Suppose *f*(*x*) = 6*x*^{4} − 2*x*^{3} + 5. As *x* approaches infinity, the only term that matters is *x*^{4}: every other part of the equation becomes more and more insignificant as *x* grows.

This means we can say *f*(*x*) grows roughly like *x*^{4}, written *f*(*x*) = *O*(*x*^{4}).
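A quick sketch of why the lower-order terms stop mattering: dividing *f*(*x*) by *x*^{4} shows the ratio settling toward the leading constant 6 as *x* grows (the function name `f` here is just for illustration).

```javascript
// f(x) = 6x^4 - 2x^3 + 5: as x grows, the x^4 term dominates.
function f(x) {
  return 6 * x ** 4 - 2 * x ** 3 + 5;
}

// The ratio f(x) / x^4 approaches the constant 6, showing that the
// -2x^3 and +5 terms become insignificant for large x.
for (const x of [10, 100, 1000]) {
  console.log(x, f(x) / x ** 4);
}
```

Big O notation also drops the constant factor, which is why we write *O*(*x*^{4}) rather than *O*(6*x*^{4}).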

In sorting, good behavior for a typical algorithm is O(*n* log *n*) and bad behavior is O(*n*²). Ideal behavior for a sort would be O(*n*), but since comparison-based sorting requires at least O(*n* log *n*) comparisons, we have to settle for O(*n* log *n*).

Popular sorting algorithms:

- Bubble sort: starts at the beginning of the data set, compares the first element with each remaining element, and swaps (if necessary) so the lowest element comes first. It repeats the comparisons to the end of the list, and then starts again at element 2. Highly inefficient: a 100-element array needs on the order of *n*² = 10,000 comparisons. Bubble sort's average case and worst case are both O(*n*²). Here’s an example:

```
function myBubbleSort(arrayName, length) {
  for (var i = 0; i < length - 1; i++) {
    for (var j = i + 1; j < length; j++) {
      if (arrayName[j] < arrayName[i]) {
        // swap so the lower element comes first
        var temp = arrayName[i];
        arrayName[i] = arrayName[j];
        arrayName[j] = temp;
      }
    }
  }
  return arrayName;
}
```

- Insertion sort: efficient for small lists, but expensive for large ones, because inserting each element of the unsorted input into its ordered position means shifting all the later elements over as the insertion happens.
- Shell sort: one of the fastest algorithms for small and medium data sets, but inefficient on very large ones. It arranges the data sequence in a two-dimensional array, then applies insertion sort to each vertical column of that array. Then you decrease the number of columns and do it again, until only one column is left.
- Merge sort: divides the data set into lists of two elements (1 and 2, 3 and 4, etc.), sorts those lists, then merges them into sorted lists of 4, then lists of 8, and so on until the whole list is sorted. It scales well with large lists because its worst-case scenario is O(*n* log *n*).
- Heap sort: puts the data set into a data structure called a heap, a binary tree where the smallest (or largest) element is the root and every node is smaller (or larger) than its children. When the root is removed, the heap reorders itself to restore that rule. Heap sort runs in O(*n* log *n*).
- Quicksort: picks a pivot element, partitions the list around it, and sorts each partition. With a well-chosen pivot we get O(*n* log *n*) performance; if not, we may get closer to O(*n*²). Picking the true median as the pivot keeps the partitions balanced, but we need an O(*n*) operation to determine the median, so that also adds to the execution time.
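A minimal sketch of the insertion-sort idea described above, using an in-place version that shifts larger elements right to make room (the function name `insertionSort` is just for illustration):

```javascript
// In-place insertion sort: grows a sorted prefix one element at a
// time, shifting larger elements right to open a slot for each value.
function insertionSort(arr) {
  for (var i = 1; i < arr.length; i++) {
    var value = arr[i];
    var j = i - 1;
    while (j >= 0 && arr[j] > value) {
      arr[j + 1] = arr[j]; // shift the larger element one slot right
      j--;
    }
    arr[j + 1] = value;
  }
  return arr;
}

console.log(insertionSort([5, 2, 4, 6, 1, 3])); // [1, 2, 3, 4, 5, 6]
```

The repeated shifting is exactly why the algorithm is cheap on small or nearly sorted lists and expensive on large random ones.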
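And a sketch of merge sort's divide-and-merge pattern, written recursively rather than with the bottom-up lists of 2, 4, 8 described above (same idea, same O(*n* log *n*) behavior; the function name `mergeSort` is just for illustration):

```javascript
// Merge sort: split the list in half, sort each half recursively,
// then merge the two sorted halves back together.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  var mid = Math.floor(arr.length / 2);
  var left = mergeSort(arr.slice(0, mid));
  var right = mergeSort(arr.slice(mid));
  var result = [];
  // Repeatedly take the smaller front element of the two halves.
  while (left.length && right.length) {
    result.push(left[0] <= right[0] ? left.shift() : right.shift());
  }
  // One of the halves may still have elements left; append them.
  return result.concat(left, right);
}

console.log(mergeSort([38, 27, 43, 3, 9, 82, 10])); // [3, 9, 10, 27, 38, 43, 82]
```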
