Big-O notation

Photo by Alexander Red on Unsplash

How to quantify the performance of data structures and algorithms.
Updated 6 April 2025

Data structures and algorithms are selected based on their performance in specific scenarios. In this article, I will introduce the concepts and teach you how to express the worst-case performance of your algorithms and data structures using big-O notation.

Please note that for simplicity's sake, and to focus on the theme of this article, I have decided not to include any error handling or perform any input validation in the examples below. 

In a real-world program, exception handling and input validation are essential.

What is DSA?

Why should you care?

A data structure is how the data being worked on is stored in memory. An algorithm is the way the data is processed. Together, a data structure and an algorithm make up a program, and a program's efficiency depends on the combination of data structure and algorithm chosen for the specific problem.

Because of this, various DSA (data structures and algorithms) are used and optimized for different problems. A "best" DSA does not exist. The optimal DSA depends on the problem being solved.

Knowing how to identify the performance of different data structures and algorithms is an essential skill if you want to develop scalable and efficient programs. With deep insight and experience in DSA, you will quickly spot bottlenecks in a system and avoid building inefficient solutions to problems.

DSA is a skill that beginners often dismiss as theoretical and boring. But as you start building more complex software, you will begin to notice the difference between software written with well-chosen DSA and software written with poorly chosen DSA. A bad choice of DSA will not be noticeable on small amounts of data, but it becomes painfully apparent on large amounts of data. Any fool can write a program, but a great engineer builds excellent software that is easy to understand, scalable, and efficient.

A good choice of DSA for a problem could be compared to how Eliyahu M. Goldratt describes the process of winning Nobel Prizes in the sciences in his book Beyond the Goal. "The person who wins is not the one who writes the most fancy or complex publication, it's the one who writes a simple one that is so obvious that everyone else in the scientific community wonders why they did not see it themselves."

Choosing the correct DSA for a problem is a skill that requires both repetition and theoretical insights. In the following chapters, we will look into some basic methods of comparing different DSAs using Big-O notation.

Time complexity

How much time is spent

The time complexity of an algorithm or data structure describes how its running time grows relative to the size of its input. The time an algorithm takes may grow more slowly than, in proportion to, or faster than the amount of data being processed.

O(1)

Constant time

This is the time complexity where the DSA takes the same amount of time no matter how large the input is. A basic example of this is a function that returns the first element in an array.

findFirstItem.js
function findFirstItem(array){
    return array[0]; //O(1) time
}

The function above will take exactly the same time to execute regardless of how long the array is.
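
Reading an element at any other index works the same way: array indexing does not depend on the array's length. Here is a minimal additional sketch (the function name getItemAt is my own, not from the original examples):

getItemAt.js
function getItemAt(array, index){
    return array[index]; //O(1) time: direct index access
}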

O(n)

Linear time

This is the time complexity where the execution time depends on the length of the input. A basic example is a function that calculates the sum of all elements in an array.

sumArray.js
function sumArray(array){
    let sum = 0;
    array.forEach(element => {
       sum += element; //O(n) time
    });

    return sum;
}

The function above will loop through all the elements in the input array and add each value to the sum variable. When the loop is completed, the sum is returned. Because of this, the execution time of the function is proportional to the length of the array: processing an array with 20 elements will take roughly twice as long as processing an array with 10 elements.

A more practical example of linear time is a linear search, where we try to find an element in an array.

needleInHaystack.js
function needleInHaystack(needle, haystack){
    for(let i = 0; i < haystack.length; i++){
        if (haystack[i] === needle) { //O(n) time
            return true;
        }
    }
    return false;
}

The function above will loop through the haystack array, and look for the needle. If the needle is found, the function will return true; if the needle is not found, the function will return false. 

Since big-O notation is based on the worst-case scenario, you must assume that the needle is either not in the array at all, or located at the very last position. Because of this, the algorithm's time complexity is proportional to the array's length: O(n).
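
To make the two extremes concrete, here is a short usage example (the sample values are my own):

const haystack = [3, 7, 1, 9, 4];
needleInHaystack(3, haystack); //best case: the needle is found on the first comparison
needleInHaystack(8, haystack); //worst case: all n elements are checked before returning false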

O(n²)

Quadratic time

This is the time complexity where the execution time grows with the square of the input size. A basic example is an algorithm that prints every pairing of the elements in an array, including each element paired with itself.

logAllPairs.js
function logAllPairs(array){
    for(let i = 0; i < array.length; i++){
        for(let j = 0; j < array.length; j++){
            console.log(array[i], array[j]); //O(n^2) time
        }
    }
}

The function above will loop through the entire array once for each item in the array. The total number of iterations on an array with three elements will be 9 (3*3=9). An array with five elements will cause 25 iterations (5*5=25), and an array with 10 elements will cause 100 iterations (10*10=100).
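
One way to see this growth concretely is to count the iterations instead of logging them. The helper below is my own sketch, not part of the original example:

countPairIterations.js
function countPairIterations(array){
    let iterations = 0;
    for(let i = 0; i < array.length; i++){
        for(let j = 0; j < array.length; j++){
            iterations++; //one increment per inner-loop pass
        }
    }
    return iterations;
}

console.log(countPairIterations([1, 2, 3])); //9
console.log(countPairIterations([1, 2, 3, 4, 5])); //25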

Space complexity

How much memory is needed

The space complexity of an algorithm or data structure describes how much memory, or RAM, it takes up relative to the amount of data being processed.

The big-O notation for space complexity is similar to time complexity, except that we only care about how much memory has to be allocated during the algorithm's execution relative to the size of the data being processed.
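
To make the contrast concrete: the sumArray function from earlier allocates only a single sum variable no matter how long its input is, so its space complexity is O(1). A function that builds a new array as long as its input, on the other hand, has O(n) space complexity. The sketch below (doubleArray is my own illustration, not from the original examples) shows the latter:

doubleArray.js
function doubleArray(array){
    const doubled = []; //O(n) space: the output array grows with the input
    array.forEach(element => {
        doubled.push(element * 2);
    });

    return doubled;
}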

Summary

Thinking about the time and space complexity of DSA is crucial to solving problems using computers efficiently. 

This article will be updated with more examples as I progress in my DSA studies.