# How to analyze a mixture of algorithms? Merge + insertion sort, theoretical analysis

I am trying to analyze the complexity of a sorting algorithm, both theoretically and empirically. My professor told me that in real-world use cases we use a mixture of merge sort and insertion sort: when the sub-arrays are small enough (how small?), we switch to insertion sort on those sub-arrays.

I understand that merge sort is O(n log n) in all cases, whereas insertion sort is O(n) in the best case and O(n^2) in the worst case.

How can I do a theoretical analysis on this mixture of algorithms?
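For context, here is one standard way such a hybrid is usually analyzed (a sketch, not necessarily what your professor intends): suppose the recursion stops at sub-arrays of length at most k, which are then insertion-sorted, and the remaining merge levels run as usual. Insertion sort on one length-k run costs O(k^2) in the worst case, and there are n/k runs; the merge recursion has only log(n/k) levels left, each costing Θ(n):

```latex
T(n)
= \underbrace{\tfrac{n}{k}\cdot O(k^2)}_{\text{insertion sort on } n/k \text{ runs}}
+ \underbrace{\Theta\!\left(n \log \tfrac{n}{k}\right)}_{\text{remaining merge levels}}
= \Theta\!\left(nk + n \log \tfrac{n}{k}\right)
```

For constant k this is still Θ(n log n); the hybrid only improves the constant factors, which is why the cutoff is chosen empirically.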

As I understand it, for the empirical analysis I can simply feed the program a collection of datasets and measure the number of comparisons and swaps.
My code:

```cpp
#include <iostream>
#include <vector>

int comparisons = 0;
int swaps = 0;

void mergesort(int x[], int l, int r);
void insertionSort(int x[], int size, int start);

int main() {
    int x[] = {9, 5, 1, 4, 3, 10, 29, 69, 5, 2};

    // insertionSort(x, 10, 0);

    mergesort(x, 0, 9);

    for (int i = 0; i < 10; i++) {
        std::cout << x[i] << " ";
    }
    std::cout << "\nCOMPARISONS: " << comparisons;
    std::cout << "\nSWAPS: " << swaps << "\n";
}

// Sorts the sub-array x[start .. size-1] in place.
void insertionSort(int x[], int size, int start) {
    for (int i = start; i < size; i++) {
        // Stop at `start`, not 0, so we never touch elements
        // outside the sub-array being sorted.
        for (int j = i; j != start; j--) {
            comparisons++;
            if (x[j] < x[j - 1]) {
                int temp = x[j - 1];
                x[j - 1] = x[j];
                x[j] = temp;
                swaps++;
            } else {
                break;
            }
        }
    }
}

void mergesort(int x[], int l, int r) {
    if (l >= r)
        return;

    // Cutoff: small sub-arrays are insertion-sorted directly,
    // and we return here -- no merge step is needed afterwards.
    if (r - l < 5) {
        insertionSort(x, r + 1, l);  // size is the end index + 1, start is l
        return;
    }

    int mid = (l + r) / 2;

    mergesort(x, l, mid);
    mergesort(x, mid + 1, r);

    int i = l;
    int j = mid + 1;
    int k = 0;

    std::vector<int> tmp(r - l + 1);

    while (i <= mid && j <= r) {
        comparisons++;
        if (x[i] >= x[j]) {
            tmp[k] = x[j];
            j++;
        } else {
            tmp[k] = x[i];
            i++;
        }
        swaps++;  // counted as in the original; strictly a copy, not a swap

        k++;
    }

    while (i <= mid) {
        tmp[k] = x[i];
        i++;
        k++;
    }

    while (j <= r) {
        tmp[k] = x[j];
        j++;
        k++;
    }

    for (i = 0; i < k; i++) x[l + i] = tmp[i];
}
```
