Stack Bash - Master data structures and algorithms

Big O Notation

Big O Notation is a way to describe how the performance of an algorithm, or any function, scales with the size of its input.
When analyzing performance, we usually think about:
  1. The amount of time an algorithm takes to run.
  2. The amount of space it requires to execute.
With Big O, we measure time and space complexity with respect to the input size, commonly denoted by the variable n.
[Figure: Big O complexity chart comparing common growth rates]
Let's dive into some of the most common ways to describe algorithms using Big O.

Constant complexity - O(1)

A function that runs in constant time performs the same number of operations regardless of the size of the input.
For example, the following function runs in constant time:
def print_size(names: list[str]):
    print('there are', len(names), 'names in this list')
    print('the end')
No matter how many names are in names, the function print_size will always execute 2 operations.
In Big O Notation, an algorithm with constant time complexity can also be expressed as O(1).
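As a quick illustration, calling the print_size function above with inputs of different sizes runs the same two print statements either way:

print_size(['alice'])                    # 2 prints
print_size(['alice', 'bob', 'carol'])    # still 2 prints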

Linear Complexity - O(n)

Let's take a look at this function in Python:
def print_names(names: list[str]):
    for name in names:
        print(name)
The function print_names has a for loop that prints each name in the input list.
If the length of the input list is n, the loop runs n times.
In Big O Notation, we express the time complexity of print_names as O(n), pronounced "Big O of n".

Another example

def print_names(names: list[str]):
    for name in names:
        print('Hi my name is', name)
    for name in names:
        print('Again, my name is', name)
This time, two for loops run n times each, which gives us O(2 * n) time complexity.
But in Big O Notation, we drop constants and call this O(n), or linear time complexity, just like the first example.

Complexity dominance

Consider the following function:
def print_names(names: list[str]):
    print('hello there')
    for name in names:
        print('Hi my name is', name)
This function has one constant operation (the first print), and then a for loop, which runs in linear time.
When a function has operations with different time complexities, the function's overall time complexity is that of its most expensive operation.
Since linear is more dominant than constant, the function print_names runs in linear time, or O(n).

Logarithmic Complexity - O(log n)

Logarithmic complexity is expressed as O(log n).
Algorithms with logarithmic complexity often behave similarly: in each iteration, the problem size is cut in half, so about log(n) iterations run in total.
This is more efficient than linear search, where every item is checked in the input array.
A common example of an algorithm that takes logarithmic time is searching for a number in a list of sorted numbers.
Let's use Binary Search as an example. To find a target number in a sorted list, we check the middle number of the current search window in each iteration:
  • If the target is less than the middle number, we can throw away the right half of the list and search again in the left half.
  • If the target is greater, it must be in the right half, so we throw away the left half.
  • Repeat the same check in each iteration until the target is found.
  • If there are no more items left in the window of search, then the target doesn't exist.
A logarithmic algorithm completes in far fewer steps than a linear one, which takes n iterations; the sketch below shows the idea in code.
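Here's a minimal sketch of binary search in Python, following the steps above. The function name binary_search and the convention of returning -1 for a missing target are our own illustrative choices:

def binary_search(nums: list[int], target: int) -> int:
    # the current search window is nums[lo..hi], inclusive
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid       # found the target
        if target < nums[mid]:
            hi = mid - 1     # throw away the right half
        else:
            lo = mid + 1     # throw away the left half
    return -1                # window is empty: target doesn't exist

Each iteration halves the search window, so the loop runs at most about log(n) times.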

Quadratic Complexity - O(n²)

Let's look at this example:
def print_names(names: list[str]):
    # first loop: nested, runs n * n times
    for name1 in names:
        for name2 in names:
            print(name1, 'and', name2, 'are friends')
    # second loop: runs n times
    for name in names:
        print('Again, my name is', name)
The first loop runs n times, and for each of those iterations its inner loop runs n more times. So it runs in n * n time, or n².
Even though the second loop runs in linear time, the first loop's O(n²) complexity dominates.
So this function's time complexity is O(n²).

Exponential Complexity - O(2ⁿ)

The number of operations in an algorithm with exponential complexity grows rapidly as the input grows.
This usually comes up when an algorithm must consider many combinations of the input, often in naive recursive functions.
For example, the Fibonacci Sequence has a recursive implementation with exponential time complexity:
def fib(n: int):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
Each call branches into two more calls, and many of the same subproblems are recomputed along the way.
Sometimes there are ways to optimize algorithms with exponential complexity, as in the case of Fibonacci.
Generally, an exponential-time algorithm is the brute-force approach to a problem.
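One common optimization is memoization: caching each result so no subproblem is computed twice. Here's a minimal sketch using functools.lru_cache, one of several ways to memoize in Python (fib_memo is our own illustrative name):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # same logic as fib above, but each distinct n is computed only once
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

With the cache, each value from 0 to n is computed once, bringing the time complexity down from O(2ⁿ) to O(n).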

Space Complexity

Everything we've said about time complexity applies to space as well.
For example, this function requires constant space:
def print_names(names: list[str]):
    print('hello there')
This one, on the other hand, takes O(n) space:
def has_duplicate(names: list[str]) -> bool:
    seen = set()  # names we've encountered so far
    for name in names:
        if name in seen:
            return True
        seen.add(name)
    return False
The set seen keeps track of the names we've encountered so far. In the worst case, we build up a set containing all n names.
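For example, using has_duplicate as defined above:

print(has_duplicate(['alice', 'bob', 'carol']))   # False: the set grows to all 3 names
print(has_duplicate(['alice', 'bob', 'alice']))   # True: returns at the repeated name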

Want to confidently pass your next coding interview?

Stack Bash helps new and veteran software engineers master data structures and algorithms for technical interviews.