In this tutorial, you will learn how the divide and conquer algorithm works. We will also compare the divide and conquer approach with other approaches to solving a recursive problem.

The following standard algorithms are based on divide and conquer:

- **Binary Search** is a searching algorithm. In each step, the algorithm compares the input element x with the value of the middle element of the array. If the values match, it returns the index of the middle element. Otherwise, if x is less than the middle element, the algorithm recurses on the left half of the array; otherwise it recurses on the right half.
- **Quicksort** is a sorting algorithm. The algorithm picks a pivot element and rearranges the array so that all elements smaller than the pivot move to its left and all greater elements move to its right. Finally, it recursively sorts the subarrays on the left and right of the pivot.
- **Merge Sort** is also a sorting algorithm. The algorithm divides the array into two halves, recursively sorts them, and finally merges the two sorted halves.
- **Closest Pair of Points**: the problem is to find the closest pair of points in a set of points in the x-y plane. It can be solved in O(n^2) time by computing the distance between every pair of points and taking the minimum. The divide and conquer algorithm solves it in O(n log n) time.
- **Strassen's Algorithm** is an efficient algorithm for multiplying two matrices. The simple method needs 3 nested loops and is O(n^3); Strassen's algorithm multiplies two matrices in O(n^2.8074) time.
- **Cooley–Tukey Fast Fourier Transform (FFT) algorithm** is the most common algorithm for the FFT. It is a divide and conquer algorithm that works in O(n log n) time.
- **Karatsuba algorithm for fast multiplication** multiplies two n-digit numbers in at most O(n^1.585) single-digit multiplications (n^(log2 3), compared with n^2 for the schoolbook method).
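To make the first of these concrete, here is a minimal binary search sketch in Python. It uses an iterative loop rather than explicit recursion, but each step halves the search range in exactly the way described above:

```python
def binary_search(arr, x):
    """Return the index of x in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == x:
            return mid          # values match: return the middle index
        elif x < arr[mid]:
            hi = mid - 1        # continue in the left half
        else:
            lo = mid + 1        # continue in the right half
    return -1                   # x is not in the array
```

Each iteration discards half of the remaining range, giving O(log n) steps.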

A divide and conquer algorithm is a strategy of solving a large problem by

- breaking the problem into smaller sub-problems
- solving the sub-problems, and
- combining them to get the desired output.

Divide and conquer algorithms are typically implemented using **recursion**.


## How Do Divide and Conquer Algorithms Work?

Here are the steps involved:

- **Divide**: Break the given problem into subproblems using recursion.
- **Conquer**: Solve the smaller subproblems recursively. If a subproblem is small enough, solve it directly.
- **Combine**: Combine the solutions of the subproblems, as part of the recursive process, to solve the original problem.
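The three steps can be sketched with a toy example: summing an array by divide and conquer. The `dc_sum` helper below is ours, shown only to illustrate the pattern:

```python
def dc_sum(arr):
    """Sum a list of numbers using the divide and conquer pattern."""
    # Conquer directly: a subproblem of size 0 or 1 is trivial.
    if len(arr) <= 1:
        return arr[0] if arr else 0
    # Divide: split the input into two halves.
    mid = len(arr) // 2
    left, right = arr[:mid], arr[mid:]
    # Combine: merge the two partial results.
    return dc_sum(left) + dc_sum(right)
```

Real divide and conquer algorithms follow this same shape; only the divide and combine steps become more interesting.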

Let us understand this concept with the help of an example.

Let us understand this concept with merge sort as an example.

1. Let the given array be an unsorted array of elements.
2. Divide the array into two halves.
3. Again, recursively divide each half into two halves until you get individual elements.
4. Now, combine the individual elements in sorted order. Here, the **conquer** and **combine** steps go side by side.
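The walkthrough above can be expressed as a short merge sort in Python; this is a minimal sketch, not a production implementation:

```python
def merge_sort(arr):
    """Sort a list by recursively splitting it and merging sorted halves."""
    if len(arr) <= 1:                  # an individual element is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # divide + conquer the left half
    right = merge_sort(arr[mid:])      # divide + conquer the right half
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])            # append whatever remains
    merged.extend(right[j:])
    return merged
```

For example, `merge_sort([7, 6, 1, 5, 4, 3])` returns `[1, 3, 4, 5, 6, 7]`.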

## Time Complexity

The complexity of the divide and conquer algorithm is calculated using the master theorem.

T(n) = aT(n/b) + f(n)

where,
- n = size of the input
- a = number of subproblems in the recursion
- n/b = size of each subproblem (all subproblems are assumed to have the same size)
- f(n) = cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions

Let us take an example to find the time complexity of a recursive problem.

For a merge sort, the equation can be written as:

T(n) = aT(n/b) + f(n) = 2T(n/2) + O(n)

where,
- a = 2 (each time, a problem is divided into 2 subproblems)
- n/b = n/2 (the size of each subproblem is half of the input)
- f(n) = O(n) (the time taken to divide the problem and merge the sorted subarrays)

Since f(n) = O(n) matches n^(log_b a) = n^(log_2 2) = n, the master theorem gives T(n) = O(n log n).

## Divide and Conquer vs. Dynamic Programming Approach

The divide and conquer approach divides a problem into smaller subproblems, which are then solved recursively. The result of each subproblem is not stored for future reference, whereas in a dynamic approach the result of each subproblem is stored for future reference.

Use the divide and conquer approach when the same subproblem is not solved multiple times. Use the dynamic approach when the result of a subproblem is to be used multiple times in the future.

Let us understand this with an example. Suppose we are trying to compute the Fibonacci series. Then,

**Divide and Conquer approach:**

```
fib(n)
    If n < 2, return 1
    Else, return f(n - 1) + f(n - 2)
```

**Dynamic approach:**

```
mem = []

fib(n)
    If n in mem: return mem[n]
    else,
        If n < 2, f = 1
        else, f = f(n - 1) + f(n - 2)
        mem[n] = f
        return f
```

In a dynamic approach, mem stores the result of each subproblem.
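The two pseudocode fragments can be turned into runnable Python roughly as follows. The names `fib_dc` and `fib_dp` are ours, and `mem` is a dictionary rather than a list so it can be indexed by `n`:

```python
def fib_dc(n):
    """Plain divide and conquer: recomputes the same subproblems many times."""
    if n < 2:
        return 1
    return fib_dc(n - 1) + fib_dc(n - 2)

mem = {}

def fib_dp(n):
    """Dynamic approach: mem stores each subproblem's result for reuse."""
    if n in mem:
        return mem[n]
    f = 1 if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)
    mem[n] = f
    return f
```

Both return the same values, but `fib_dc` takes exponential time while `fib_dp` is linear, because each `fib_dp(k)` is computed only once.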

## Advantages of Divide and Conquer Algorithm

The complexity of multiplying two matrices using the naive method is O(n^3), whereas using the divide and conquer approach (for example, Strassen's matrix multiplication) it is O(n^2.8074). This approach also simplifies other problems, such as the Tower of Hanoi.
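As a rough illustration of Strassen's idea, here is a compact sketch in Python using NumPy for the block arithmetic. It assumes square matrices whose size is a power of two and falls back to direct multiplication for small blocks; a real implementation would add padding and a larger cutoff:

```python
import numpy as np

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) with 7 recursive products."""
    n = A.shape[0]
    if n <= 2:                      # small case: multiply directly
        return A @ B
    h = n // 2
    # Divide each matrix into four quadrants.
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Conquer: 7 recursive multiplications instead of the naive 8.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Combine the products into the quadrants of the result.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])
```

Replacing 8 half-size multiplications with 7 is what lowers the exponent from log2 8 = 3 to log2 7 ≈ 2.8074.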

This approach is suitable for multiprocessing systems, since independent subproblems can be solved in parallel.

It makes efficient use of memory caches, because small subproblems can be solved entirely within the cache.

Thanks for reading! We hope you found this tutorial helpful, and we would love to hear your feedback in the Comments section below.