
Why Learn Data Structures and Algorithms?


In this tutorial, we will learn why every programmer should study data structures and algorithms, with the help of examples.

Many young programmers scour the web trying to find an answer to this question: how do I study data structures and algorithms? It is certainly a good place to start.

In this article, you will learn:

What are algorithms and data structures, and why should I study them?

This tutorial is for people who have just started learning algorithms and wonder how much of a boost they will give to their career and programming skills. It is also for those who wonder why large organizations like Google, Facebook, and Amazon hire programmers who are exceptionally good at optimizing algorithms.

What are Algorithms?

Informally, an algorithm is just a sequence of steps to solve a problem. It is essentially a solution.

For instance, an algorithm for computing factorials might look something like this:

Problem: Find the factorial of n

Initialize fact = 1
For every value v in range 1 to n:
    Multiply fact by v
fact contains the factorial of n

Here, the algorithm is written in English. If it were written in a programming language, we would call it code instead. Here is C++ code for finding the factorial of a number:

int factorial(int n) {
    int fact = 1;
    for (int v = 1; v <= n; v++) {
        fact = fact * v;
    }
    return fact;
}

Programming is all about data structures and algorithms. Data structures are used to hold data, while algorithms are used to solve problems using that data.

Data structures and algorithms (DSA) goes through solutions to standard problems in detail and gives you insight into how efficient each of them is. It also teaches you the science of evaluating an algorithm's efficiency. This enables you to choose the best of several options.

Use of Data Structures and Algorithms to Make Your Code Scalable

Time is precious.

Suppose Haroon and Sohail are trying to solve the simple problem of finding the sum of the first 10^11 natural numbers. While Sohail was writing the algorithm, Haroon implemented it, showing that it is as simple as it sounds.

Algorithm (by Sohail)

Initialize sum = 0
for every natural number n in range 1 to 10^11 (inclusive):
    add n to sum
sum is your answer

Code (by Haroon)

long long findSum() {
    long long sum = 0;
    for (long long v = 1; v <= 100000000000LL; v++) {
        sum += v;
    }
    return sum;
}

Haroon and Sohail are delighted that they could build something of their own in almost no time. Let's sneak into their workspace and listen in on their conversation.

Haroon: Let's run this code and find out the sum.
Sohail: I ran this code a few minutes back but it's still not showing the output. What's wrong with it?

Uh oh, something went wrong! A computer is a highly deterministic machine; going back and running the code again won't help. So let's analyze what's wrong with this straightforward code.

Two of the most important assets for a computer program are time and memory.

The time taken by the computer to run code is:

Time to run code = number of instructions * time to execute each instruction

The number of instructions depends on the code you wrote, and the time taken to execute each instruction depends on your machine and compiler.

In this case, the total number of instructions executed (let's say x) is

x = 1 + (10^11 + 1) + 10^11 + 1, which gives x = 2 * 10^11 + 3

Let us assume that a computer can execute y = 10^8 instructions in one second (this can vary with machine configuration). The time taken to run the above code is

Time taken to run code = x / y = 2000 seconds (more than half an hour)

Is it possible to optimize the algorithm so that Haroon and Sohail don't have to wait more than half an hour every time they run this code?

I'm sure you have already guessed the right approach. The sum of the first N natural numbers is given by the formula:

Sum = N * (N + 1) / 2

Converting it into code will look something like this:

long long sum(long long N) {
    return N * (N + 1) / 2;
}

This code executes in just one instruction and completes the task no matter what the value of N is. Even if N were greater than the total number of atoms in the universe, it would find the result in no time (setting aside the limits of fixed-size integer types).

The time taken to solve the problem in this case is 1/y (which is 10 nanoseconds). Incidentally, the fission reaction of a nuclear bomb takes 40-50 ns, which means your program will complete successfully even if someone drops a nuclear bomb on your computer at the very moment you run your code. 🙂

Note: Computers take a few instructions (not 1) to compute multiplication and division. I have said 1 just for the sake of simplicity.

More on Scalability

Scalability is scale plus ability: the capacity of an algorithm or system to handle a problem of larger size.

Consider the problem of setting up a classroom for 50 students. One of the simplest solutions is to book a room, get a blackboard and a few chalks, and the problem is solved.

But what if the size of the problem increases? What if the number of students increases to 200?

The solution still holds, but it needs more resources. In this case, you will probably need a much bigger room, a projector screen, and a digital pen.

What if the number of students increases to 1000?

The solution fails or uses a lot of resources when the size of the problem increases. This means your solution wasn't scalable.

What is a scalable solution then?

Consider a site like Khan Academy: millions of students can watch videos and read answers at the same time, and no additional resources are required. Thus, the solution solves problems of larger size without a resource crunch.

If you look at our first solution for the sum of the first N natural numbers, it wasn't scalable. This is because it required linear growth in time with linear growth in the size of the problem. Such algorithms are also known as linearly scalable algorithms.

Our second solution was perfectly scalable and didn't need any more time to solve a problem of larger size. These are known as constant-time algorithms.

Memory is costly

Memory isn't always available in abundance. When dealing with code or systems that require you to store or produce a lot of data, it is critical for your algorithm to conserve memory wherever possible. For example, while storing data about people, you can save memory by storing only their date of birth, not their age: the age changes every year, while the date of birth is fixed, and the age can always be computed on the fly from the date of birth and the current date.

Examples of an Algorithm’s Efficiency

Here are a few examples of what learning algorithms and data structures enables you to do:

Example 1: Age Group Problem

Problems like finding the people of a particular age group can easily be solved with a slightly modified version of the binary search algorithm (assuming that the data is sorted).

The naive algorithm, which goes through all the people one by one and checks whether each falls in the given age group, is linearly scalable. Binary search, however, is a logarithmically scalable algorithm: if the size of the problem is squared, the time taken to solve it only doubles.

Suppose it takes 1 second to find all the people of a certain age in a group of 1000. Then, for a group of 1 million people,

  • the binary search algorithm will take just 2 seconds to solve the problem
  • the naive algorithm may take about 1000 seconds, which is around 17 minutes

A similar binary search algorithm can be used to find the square root of a number.

Example 2: Rubik’s Cube Problem

Imagine you are writing a program to find the solution to a Rubik's cube.

This charming-looking puzzle has an annoying 43,252,003,274,489,856,000 positions, and these are just the positions! Imagine the number of paths one can take to reach the wrong positions.

Fortunately, this problem can be modeled with the graph data structure. There is a graph algorithm known as Dijkstra's algorithm that allows you to solve this problem in linear time. Yes, you read that right. It means that you can reach the solved position in a minimum number of states.

Example 3: DNA Problem

DNA is a molecule that carries genetic information. It is made up of smaller units, which are represented by the letters A, C, T, and G.

Imagine yourself working in the field of bioinformatics. You are assigned the task of finding the occurrence of a particular pattern in a DNA strand.

It is a famous problem in computer science academia. The simplest algorithm takes time proportional to

(number of characters in DNA strand) * (number of characters in pattern)

A typical DNA strand has millions of such units. Eh! Worry not. The KMP algorithm can get this done in time proportional to

(number of characters in DNA strand) + (number of characters in pattern)

Replacing the * operator with + makes a big difference.

Considering that the pattern was 100 characters long, your algorithm is now about 100 times faster. If your pattern was 1000 characters, the KMP algorithm would be almost 1000 times faster. That is, if you were able to find the occurrence of the pattern in 1 second, it will now take you just 1 ms. To put it another way: instead of matching 1 strand, you can match 1000 strands of similar length in the same time.

And there are countless such stories…

Final Words

In general, software development involves learning new technologies on a regular basis. You will pick up most of these technologies while using them in one of your projects. However, that is not the case with algorithms.

If you don't know algorithms well, you won't be able to tell whether you can optimize the code you are writing right now. You are expected to know them in advance and apply them wherever possible and sensible.

We specifically talked about the scalability of algorithms. A software system comprises many such algorithms, and optimizing any one of them leads to a better system.

However, note that this isn't the only way to make a system scalable. For example, a technique known as distributed computing allows independent parts of a program to run on multiple machines together, making it even more scalable.


Written by worldofitech
