Friday, 28 February 2014

Advantages and Disadvantages of Sorting


Bubble Sort
Advantages
The primary advantage of the bubble sort is that it is popular and easy to implement.
In the bubble sort, elements are swapped in place without using additional temporary storage, so the space requirement is minimal.
Disadvantages
The main disadvantage of the bubble sort is that it does not deal well with a list containing a huge number of items: it requires O(n^2) processing steps for n elements.
For this reason, the bubble sort is mostly suitable for academic teaching but not for real-life applications.
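To make the in-place swapping concrete, here is a minimal bubble sort sketch in Python; the early-exit swapped flag is a common optimisation added here, not part of the description above.

def bubble_sort(items):
    # Repeatedly swap adjacent out-of-order pairs, in place.
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest i+1 elements sit in their final places.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]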


Insertion Sort
Advantages
The main advantage of the insertion sort is its simplicity.
It also exhibits good performance when dealing with a small list, and because it is an in-place sorting algorithm, the space requirement is minimal.
Disadvantages
The disadvantage of the insertion sort is that it does not perform as well as other, better sorting algorithms: with O(n^2) steps required for n elements, it does not deal well with a huge list.
The insertion sort is therefore particularly useful only when sorting a list of few items.
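A minimal insertion sort sketch in Python, showing the in-place behaviour and why it is cheap on small or nearly sorted lists:

def insertion_sort(items):
    # Grow a sorted prefix; insert each new element by shifting larger ones right.
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]  # shift right to open a slot
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([4, 2, 5, 1, 3]))  # [1, 2, 3, 4, 5]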

Selection Sort
Advantages
The main advantage of the selection sort is that it performs well on a small list.
Because it is an in-place sorting algorithm, no additional temporary storage is required beyond what is needed to hold the original list.
Disadvantages
The primary disadvantage of the selection sort is its poor efficiency when dealing with a huge list of items: it requires O(n^2) steps for sorting n elements.
Its performance is also easily influenced by the initial ordering of the items before the sorting process, and quick sort is much more efficient than selection sort.
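A minimal selection sort sketch in Python; note that it always scans the whole unsorted tail, which is why the step count stays quadratic regardless of input:

def selection_sort(items):
    n = len(items)
    for i in range(n - 1):
        smallest = i
        # Find the smallest element in the unsorted tail...
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        # ...and swap it into position i.
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]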

Quick Sort
Advantages
The quick sort is widely regarded as one of the best general-purpose sorting algorithms: it is able to deal well with a huge list of items, and because it sorts in place, no additional storage is required.
Disadvantages
The slight disadvantage of quick sort is that its worst-case performance is similar to the average performance of the bubble, insertion, or selection sorts.
If the list is already sorted, bubble sort is much more efficient than quick sort.
If the elements to be sorted are integers, radix sort can be more efficient than quick sort.
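A short quick sort sketch in Python. For clarity this version builds new lists rather than partitioning in place, so unlike the in-place variant described above it uses O(n) extra space; the middle-element pivot is also just one common choice.

def quick_sort(items):
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]   # a poor pivot choice gives the O(n^2) worst case
    left = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)

print(quick_sort([9, 3, 7, 1, 8]))  # [1, 3, 7, 8, 9]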

Heap Sort
Advantages
The heap sort algorithm is widely used because of its efficiency.
It can be implemented as an in-place sorting algorithm, so its memory usage is minimal.
Disadvantages
A naive implementation that builds the heap in separate storage requires extra space for sorting; only the in-place variant avoids this.
Quick sort is much more efficient than heap sort in many cases.
Heap sort must first arrange the elements into a heap (a binary tree laid out over the array), which adds overhead before any sorted output is produced.
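A minimal in-place heap sort sketch in Python, showing the two phases: build a max-heap over the array, then repeatedly move the maximum to the end.

def heap_sort(items):
    n = len(items)

    def sift_down(root, end):
        # Push items[root] down until the max-heap property holds in items[:end].
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and items[child] < items[child + 1]:
                child += 1                   # pick the larger child
            if items[root] >= items[child]:
                return
            items[root], items[child] = items[child], items[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):      # build the heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):          # repeatedly extract the maximum
        items[0], items[end] = items[end], items[0]
        sift_down(0, end)
    return items

print(heap_sort([4, 10, 3, 5, 1]))  # [1, 3, 4, 5, 10]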

Merge Sort
Advantages
It can be applied to files of any size.
Reading of the input during the run-creation step is sequential, so there is not much seeking.
If heap sort is used for the in-memory part of the merge, its operation can be overlapped with I/O.
Disadvantages
Merge sort requires extra space of about n, more than in-place sorts need.
It can be less efficient than in-place sorts such as quick sort when the data fits in memory.
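A minimal merge sort sketch in Python; the auxiliary lists built during the merge are exactly the "extra space of about n" disadvantage noted above.

def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:              # <= keeps equal keys stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([6, 2, 8, 4, 1]))  # [1, 2, 4, 6, 8]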

Complexity of Sorting

Complexity Analysis
The complexities of different sorting algorithms are given below.
Complexity of Bubble Sort
The total cost is
c((n-1) + (n-2) + ... + 1) + k + c1(n-1)
where c is the cost of one inner-loop step, c1 the cost of one outer-loop step, and k a constant. Since
(n-1) + (n-2) + ... + 1 = n(n-1)/2
our function equals
c * n(n-1)/2 + k + c1(n-1) = (c/2)(n^2 - n) + c1(n-1) + k
so the complexity is O(n^2).
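A quick empirical check of the n(n-1)/2 count; the counting function here is mine, written only to confirm the closed form above.

def bubble_comparisons(n):
    # Total comparisons made by a full bubble sort of n items (no early exit).
    count = 0
    for i in range(n - 1):
        count += n - 1 - i  # pass i compares the first n-1-i adjacent pairs
    return count

print(bubble_comparisons(10), 10 * 9 // 2)  # both print 45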
Complexity of Heap Sort
Time complexity, analysis 1:
Each call to MAX-HEAPIFY costs O(lg n), and there are O(n) such calls.
Thus the running time is O(n lg n). This upper bound, though correct, is not asymptotically tight.
Complexity of Selection Sort
Worst-case analysis: the most frequently executed instructions are those in the inner for loop (the if comparison).
Each such instruction is executed (N-1) + (N-2) + ... + 2 + 1 = N(N-1)/2 times.
Time complexity: O(N^2)
Complexity of Heap Sort
Heap sort requires:
· n steps to form the heap
· (n - 1) log2 n steps in the removes/siftDowns.
So the total number of steps is
n + (n - 1) log2 n ≈ n log2 n
This is the same as the average cost of quick sort.
The average and worst performance of heap sort is n log2 n, which is also the average performance of quick sort. However, quick sort can be unstable in its performance depending on how well it chooses a pivot. At worst, quick sort takes n^2 / 2 steps. So if you are not sure about the nature of the data and want guaranteed performance, heap sort is better.
Complexity of heap sort = O(n log2 n)
Complexity of Quick Sort
Worst case: O(N^2)
This happens when the pivot is the smallest (or the largest) element. Then one of the partitions is empty, and we repeat the procedure recursively for N-1 elements.
Best case: O(N log N)
The best case is when the pivot is the median of the array, so the left and the right parts have the same size. There are log N levels of partitioning, and to obtain each level we do N comparisons (and not more than N/2 swaps). Hence the complexity is O(N log N).
Average case: O(N log N)
Complexity of Radix Sort
  • Inner loop 1 takes O(s_i) time, where s_i = the number of different values of type t_i
  • Inner loop 2 takes O(n) time
  • Inner loop 3 takes O(s_i) time
Thus the total time is
  sum over i = 1..k of O(s_i + n) = O(kn + (s_1 + ... + s_k))
  • For example, if keys are integers in the range 0 to n^k - 1, then we can view the integers as radix-n integers, each k digits long. Then t_i has range 0 ... n - 1 for 1 <= i <= k. Thus s_i = n, and the total running time is O(kn), which is O(n) for constant k.
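A least-significant-digit radix sort sketch in Python for non-negative integers; the base of 10 is an arbitrary choice of mine, and the three labelled steps correspond roughly to the inner loops counted above.

def radix_sort(items, base=10):
    if not items:
        return items
    digit = 1
    while max(items) // digit > 0:
        buckets = [[] for _ in range(base)]          # create the s_i buckets
        for x in items:                              # distribute: O(n)
            buckets[(x // digit) % base].append(x)
        items = [x for b in buckets for x in b]      # collect: O(s_i + n)
        digit *= base
    return items

print(radix_sort([170, 45, 75, 90, 2]))  # [2, 45, 75, 90, 170]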
Complexity of Shell Sort
The worst-case of your implementation is Θ(n^2) and the best-case is O(nlogn) which is reasonable for shell-sort.
The best case  O(nlogn):
The best-case is when the array is already sorted. The would mean that the inner if statement will never be true, making the inner while loop a constant time operation. Using the bounds you've used for the other loops gives O(nlogn). The best case of O(n) is reached by using a constant number of increments.
The worst case  O(n^2):
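A shell sort sketch in Python using the simple n/2, n/4, ..., 1 gap sequence discussed above; on an already sorted input the inner while loop never runs, which is the best case described.

def shell_sort(items):
    n = len(items)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):
            key = items[i]
            j = i
            # Gapped insertion sort: shift larger elements gap positions right.
            while j >= gap and items[j - gap] > key:
                items[j] = items[j - gap]
                j -= gap
            items[j] = key
        gap //= 2
    return items

print(shell_sort([8, 3, 5, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]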
Complexity of Bucket Sort
With k buckets each holding about n/k elements, the work per bucket is roughly c0(n/k) + c1. The catch is what happens when you multiply this expression by k to get the total amount of work done. If you compute
k * (c0(n/k) + c1)
you get
c0*n + c1*k
As you can see, this expression depends directly on k, so the total runtime is O(n + k).
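A bucket sort sketch in Python, assuming (my assumption, to keep it short) that the inputs are floats spread uniformly over [0, 1); each of the k buckets then holds about n/k values.

def bucket_sort(values, k=10):
    buckets = [[] for _ in range(k)]                 # k buckets: O(k)
    for v in values:                                 # distribute: O(n)
        buckets[min(int(v * k), k - 1)].append(v)
    result = []
    for b in buckets:                                # ~ c0*(n/k) + c1 per bucket
        result.extend(sorted(b))
    return result

print(bucket_sort([0.42, 0.32, 0.73, 0.12]))  # [0.12, 0.32, 0.42, 0.73]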
Complexity of Cocktail Sort
The complexity of cocktail sort in big-O notation is O(n^2) for both the worst case and the average case. It becomes closer to O(n) if the list is mostly ordered before the sorting algorithm is applied: for example, if every element is at a position that differs by at most k (k >= 1) from its final position, the complexity of cocktail sort becomes O(k*n).
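A cocktail sort sketch in Python: a bidirectional bubble sort whose alternating passes let nearly sorted lists finish in few iterations, as described above.

def cocktail_sort(items):
    lo, hi = 0, len(items) - 1
    swapped = True
    while swapped:
        swapped = False
        for i in range(lo, hi):                      # forward pass: bubble max up
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        hi -= 1
        for i in range(hi, lo, -1):                  # backward pass: bubble min down
            if items[i - 1] > items[i]:
                items[i - 1], items[i] = items[i], items[i - 1]
                swapped = True
        lo += 1
    return items

print(cocktail_sort([5, 1, 4, 2, 8, 0]))  # [0, 1, 2, 4, 5, 8]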
Complexity of Partial Sort
Partial sort (for example, C++'s std::partial_sort) does approximately (last - first) * log(middle - first) comparisons.
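That comparison count describes C++'s std::partial_sort. Python has no direct equivalent, but heapq.nsmallest gives a rough analogue of "sort only the first m positions":

import heapq

data = [9, 4, 7, 1, 8, 2, 6]
# The 3 smallest elements, in sorted order, without sorting the rest.
print(heapq.nsmallest(3, data))  # [1, 2, 4]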
Complexity of Tim Sort
Tim sort (named after Tim Peters) is a sorting algorithm that combines merge sort (for large sublists) and insertion sort (for small sublists) to perform well on many kinds of real-world data. It has been the sorting algorithm used in Python since 2002.
Tim sort is comparison-based, so like any comparison sort its worst case can be no better than O(n log n); it achieves that bound, and in practice it outperforms many other O(n log n) sorting algorithms.
It makes use of partially sorted sublists and makes its merges very efficient.
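Since Python's built-in sort is Timsort, there is nothing extra to implement; sorted() and list.sort() already detect and merge the natural runs mentioned above.

data = [1, 2, 3, 7, 6, 5, 4]      # two natural runs: ascending, then descending
print(sorted(data))               # Timsort merges them: [1, 2, 3, 4, 5, 6, 7]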
Complexity of Gnome Sort
The running time of the gnome sort algorithm is at least proportional to f(n), where f(n) is the sum, for each element in the input list, of the distance to where that element should end up. For a "random" list of length L, an element at the beginning of the list is expected to be an average of L / 2 away from its sorted location, and an element in the middle an average of L / 4 away. Since there are L total elements, f(n) is at least L * L / 4. Therefore, on average, gnome sort is O(n^2).
Complexity of Comb Sort

This is quite surprising: despite being based on the idea of a bubble sort, comb sort reaches O(n log n) in the best case and stays close to that on typical inputs, with O(1) space for in-place sorting, though its worst case is still quadratic. Amazing for such a simple sorting algorithm!

Comb Sort
Comb sort is a relatively simple sorting algorithm originally designed by Włodzimierz Dobosiewicz in 1980. Later it was rediscovered by Stephen Lacey and Richard Box in 1991. Comb sort improves on bubble sort.
Class: Sorting algorithm
Data structure: Array
Average performance: Ω(n^2 / 2^p), where p is the number of increments[1]

Algorithm
comb_sort(list of t)
    gap = list.count
    temp as t
    swapped = true
    while gap > 1 or swapped
        swapped = false
        if gap > 1 then
            gap = floor(gap / 1.3)   // shrink the gap each pass
        i = 0
        while i + gap < list.count
            if list(i) > list(i + gap)
                temp = list(i)       // swap list(i) and list(i + gap)
                list(i) = list(i + gap)
                list(i + gap) = temp
                swapped = true       // a swap occurred, so another pass is needed

            i += 1

Gnome Sort
Gnome sort (Stupid sort), originally proposed by Hamid Sarbazi-Azad in 2000 and called Stupid sort (not to be confused with Bogosort), and then later on described by Dick Grune and named "Gnome sort",[1] is a sorting algorithm which is similar to insertion sort, except that moving an element to its proper place is accomplished by a series of swaps, as in bubble sort. It is conceptually simple, requiring no nested loops. The running time is O(n^2), but tends towards O(n) if the list is initially almost sorted.[2] In practice the algorithm can run as fast as insertion sort[citation needed]. The average runtime is O(n^2).
The algorithm always finds the first place where two adjacent elements are in the wrong order, and swaps them. It takes advantage of the fact that performing a swap can introduce a new out-of-order adjacent pair only right before or after the two swapped elements. It does not assume that elements forward of the current position are sorted, so it only needs to check the position directly before the swapped elements.
Class: Sorting algorithm
Data structure: Array
Worst-case space complexity: O(1) auxiliary

Example
Given an unsorted array, a = [5, 3, 2, 4], the gnome sort would take the following steps during the while loop; each row below shows the current array followed by the action to take at the current position.
Current array → Action to take
[5, 3, 2, 4] → a[pos] < a[pos-1], swap:
[3, 5, 2, 4] → a[pos] >= a[pos-1], increment pos:
[3, 5, 2, 4] → a[pos] < a[pos-1], swap, and pos > 1, so decrement pos:
[3, 2, 5, 4] → a[pos] < a[pos-1], swap, and pos <= 1, so increment pos:
[2, 3, 5, 4] → a[pos] >= a[pos-1], increment pos:
[2, 3, 5, 4] → a[pos] < a[pos-1], swap, and pos > 1, so decrement pos:
[2, 3, 4, 5] → a[pos] >= a[pos-1], increment pos:
[2, 3, 4, 5] → a[pos] >= a[pos-1], increment pos:
[2, 3, 4, 5] → pos == length(a), finished.
Algorithm
procedure optimizedGnomeSort(a[])
    pos := 1
    last := 0                      // remembers where we were before walking back
    while pos < length(a)
        if (a[pos] >= a[pos-1])    // pair in order: move forward
            if (last != 0)         // we had walked back; jump to where we left off
                pos := last
                last := 0
            end if
            pos := pos + 1
        else                       // pair out of order: swap and walk back
            swap a[pos] and a[pos-1]
            if (pos > 1)
                if (last == 0)
                    last := pos    // record the position to resume from
                end if
                pos := pos - 1
            else
                pos := pos + 1
            end if
        end if
    end while

end procedure