Decoding the Complexities: A Comprehensive Guide to Big O Notation and Performance Analysis

Renato Francia
4 min read · Jul 24, 2024
Photo by Isaac Smith on Unsplash

If you’ve studied data structures, you’re likely familiar with the concept of Big O Notation.

Fundamentally, it is a metric for estimating how an algorithm's performance scales as the size of its input grows.

Using this metric enables us to compare the efficiency of different algorithms and choose the optimal one for a given situation.

Consider the algorithm below for an illustrative example:

#include <iostream>

// Prints only the first element: a single operation, regardless of the array's size.
void printFirstItem(int numbers[]) {
    std::cout << numbers[0] << std::endl;
}

This algorithm reads a single value from the array by index and prints it, so it runs in constant time, O(1): the cost does not depend on how large the array is.

#include <iostream>
#include <vector>

// Prints every element: the loop body runs once per item, so the work grows
// linearly with the input. A std::vector is used here because a raw array
// parameter decays to a pointer and cannot be used with a range-based for loop.
void printAllItems(const std::vector<int>& numbers) {
    for (int num : numbers) {
        std::cout << num << " ";
    }
}

Notice that in the second example the same O(1) operation (printing a value) lives inside a loop that runs N times, where N is the size of the array.

No matter how fast each individual iteration is, it still executes N times, so the algorithm as a whole is O(N).
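As a rough illustration, assuming the two functions are defined exactly as sketched above (with printAllItems taking a std::vector), a small driver makes the difference concrete: printFirstItem does the same single operation whether there are ten elements or a million, while printAllItems does one iteration per element.

#include <vector>

int main() {
    std::vector<int> few(10, 42);          // 10 elements
    std::vector<int> many(1'000'000, 42);  // one million elements

    printFirstItem(few.data());   // O(1): one operation either way
    printFirstItem(many.data());  // O(1): still one operation

    printAllItems(few);   // O(N): 10 iterations
    printAllItems(many);  // O(N): 1,000,000 iterations
    return 0;
}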

When analyzing algorithm efficiency, we keep only the dominant term in the Big O expression and drop the less significant ones, because their contribution to the running time becomes negligible as the input size approaches infinity.

O(N³ + N² + N log N) ===> O(N³)
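To see why, consider a hypothetical countSteps routine (an assumption made up for illustration, not code from this article) whose loops perform roughly N³ + N² + N·log N basic steps. For large N, the triple loop dwarfs everything else, so the whole function is simply O(N³).

#include <vector>

// Hypothetical example: total steps are roughly N^3 + N^2 + N*log2(N),
// but asymptotically only the N^3 term matters.
long long countSteps(const std::vector<int>& data) {
    long long steps = 0;
    const long long n = static_cast<long long>(data.size());

    for (long long i = 0; i < n; ++i)        // ~N^3 steps: the dominant term
        for (long long j = 0; j < n; ++j)
            for (long long k = 0; k < n; ++k)
                ++steps;

    for (long long i = 0; i < n; ++i)        // ~N^2 steps: dropped in Big O
        for (long long j = 0; j < n; ++j)
            ++steps;

    for (long long i = 0; i < n; ++i)        // ~N*log N steps: also dropped
        for (long long j = n; j > 1; j /= 2)
            ++steps;

    return steps;
}

Doubling N makes the N³ portion eight times larger, while the lower-order terms quickly become negligible in comparison, which is exactly why Big O keeps only the dominant term.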
