Merge Sort is another efficient sorting algorithm that follows the divide-and-conquer strategy. It divides the input array into two halves, recursively sorts each half, and then merges them back together.
Here’s how Merge Sort works:
Divide: Divide the unsorted array into two halves.
Conquer: Recursively sort each half.
Merge: Merge the two sorted halves into a single sorted array.
Base Case: The base case of the recursion is when the sub-array has zero or one element, in which case it is already considered sorted.
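For example, sorting the small array [38, 27, 43, 3] (values chosen purely for illustration) proceeds roughly like this:

    [38, 27, 43, 3]
    divide → [38, 27] and [43, 3]
    divide → [38], [27], [43], [3]    (base case: single elements)
    merge  → [27, 38] and [3, 43]
    merge  → [3, 27, 38, 43]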
Here’s a simple implementation of Merge Sort in Python:
def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    # Divide the array into two halves
    mid = len(arr) // 2
    left_half = arr[:mid]
    right_half = arr[mid:]

    # Recursively sort each half
    left_half = merge_sort(left_half)
    right_half = merge_sort(right_half)

    # Merge the sorted halves
    return merge(left_half, right_half)

def merge(left, right):
    merged = []
    left_idx, right_idx = 0, 0

    # Merge the two halves by comparing elements
    while left_idx < len(left) and right_idx < len(right):
        if left[left_idx] < right[right_idx]:
            merged.append(left[left_idx])
            left_idx += 1
        else:
            merged.append(right[right_idx])
            right_idx += 1

    # Add any remaining elements from the left and right halves
    merged.extend(left[left_idx:])
    merged.extend(right[right_idx:])
    return merged

# Example usage:
arr = [3, 6, 8, 10, 1, 2, 1]
print("Original array:", arr)
sorted_arr = merge_sort(arr)
print("Sorted array:", sorted_arr)
In this code, merge_sort() recursively divides the array into two halves, sorts each half, and then combines the sorted halves with the merge() function. The merge() function merges two sorted lists by repeatedly comparing their front elements and appending the smaller one to the merged list, so the result stays in ascending order. At the top of the recursion, the two sorted halves of the original array are merged to produce the final sorted array.
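To see the merge step in isolation, you can call the merge() helper above directly on two lists that are already sorted, for example:

    # Merging two already-sorted lists with the merge() function defined above
    print(merge([1, 4, 7], [2, 3, 9]))  # -> [1, 2, 3, 4, 7, 9]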
Time Complexity
The time complexity of Merge Sort is O(n log n), where n is the number of elements in the array to be sorted.
Here’s a breakdown of why Merge Sort has this time complexity:
Divide Step: The array is repeatedly split in half until each sub-array contains a single element. An array of n elements can only be halved about log n times before reaching the base case, so this produces O(log n) levels of recursion.
Merge Step: Once the array has been divided into single-element pieces, the merging begins. At each level of the recursion, merging the sub-arrays processes every element exactly once, which is O(n) work per level. With O(log n) levels and O(n) work per level, the merging accounts for O(n log n) time in total.
The merge work dominates the cost of splitting, so the total time complexity of Merge Sort is O(n log n); the counting sketch below illustrates this growth.
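One way to check this growth empirically is to re-implement the merge with a comparison counter. The sketch below uses hypothetical helpers merge_counted() and merge_sort_counted() (not part of the code above) and compares the measured comparison count to n log2 n:

    import math
    import random

    comparison_count = 0  # global counter, used only for this illustration

    def merge_counted(left, right):
        # Same merging logic as merge() above, but counts comparisons.
        global comparison_count
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            comparison_count += 1
            if left[i] < right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    def merge_sort_counted(arr):
        if len(arr) <= 1:
            return arr
        mid = len(arr) // 2
        return merge_counted(merge_sort_counted(arr[:mid]),
                             merge_sort_counted(arr[mid:]))

    for n in (1_000, 10_000, 100_000):
        comparison_count = 0
        merge_sort_counted([random.random() for _ in range(n)])
        print(f"n={n:>7}  comparisons={comparison_count:>9}  n*log2(n) ≈ {n * math.log2(n):>10.0f}")

The comparison count grows roughly in proportion to n log2 n (it is typically a bit below it, since comparisons stop as soon as one half is exhausted).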
This makes Merge Sort well suited to sorting large datasets. Unlike Quick Sort, whose worst case degrades to O(n^2), Merge Sort guarantees O(n log n) in every case, and like Heap Sort it delivers that bound regardless of the initial order of the input, making it a reliable, predictable choice.
| Case | Time Complexity | Space Complexity |
|---|---|---|
| Best | O(n log n) | O(n) |
| Average | O(n log n) | O(n) |
| Worst | O(n log n) | O(n) |
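As a rough, machine-dependent check of that consistency, you could time the merge_sort() function defined above on random, already-sorted, and reverse-sorted inputs; the absolute numbers will vary, but the three cases should come out close to one another:

    import random
    import timeit

    n = 20_000
    inputs = {
        "random":   [random.random() for _ in range(n)],
        "sorted":   list(range(n)),
        "reversed": list(range(n, 0, -1)),
    }

    for name, data in inputs.items():
        # merge_sort() is the function defined earlier in this section
        seconds = timeit.timeit(lambda d=data: merge_sort(d), number=3)
        print(f"{name:>8}: {seconds:.3f} s for 3 runs")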