Big O Notation: What are they talking about?

Kathleen Connaghan
10 min read · Nov 2, 2021

Come with me to traverse space and time, via code. Well, not quite traverse; think of it as a journey.

Big O Notation looked like straight up magic when I first saw it. Between the latent math anxiety and the need for visualisation, I was initially lost.

But there is no escaping the need for Big O. If you want a job in software development, you will need to know how to discuss and write code with a decent understanding of Big O Notation.

Reading this will walk you through your options when looking at Big O, as in: what do they look like, what does each one mean, etc. I have crawled the internet so you don’t have to. Here is the amalgamation of a number of articles and tutorials. Cited sources at the bottom.

Let’s get prepared!

Question: What is Big O Notation?

Answer: Big O Notation is a way of talking about how long a bit of code takes to run and/or how much space the code takes up in the memory of the computer.

So we’re talking about time complexity and/or space complexity!

Question: Why learn it?

Answer: Well, interview success. We all got bills to pay.

Real Answer: It helps developers be aware of how efficient an algorithm (code solution) can be, meaning (ideally) you are better equipped to write code that works faster in a smaller space of memory. More importantly, you can see flaws and possible improvements to code.

Another Real Answer: Some solutions need to take up less space, while others need to be quicker to run. Understanding Big O is integral to solving the problem you want to solve within code.

Woo! Let’s get stuck in.

Important

In a coding interview, you will be looking not at the average case complexity of an algorithm, but the worst case! So when you think about Big O keep this in mind!

Big O — the “O” stands for order, as in orders of magnitude. Even more fancy, love it, but what are we talking about? Try to think about it in terms of ‘growth rate’. Growth rate, so think of a graph.

On that graph, “operations” means time and “elements” means inputs.

Basics: What are my options?

You should be aware of 8 common complexities with Big O:

These are listed in order from fastest to slowest; see also the lovely colour-coded graph above. Graph Source

  • Constant: O(1)
  • Logarithmic: O(log(n))
  • Linear: O(n)
  • Log-linear: O(n log(n))
  • Quadratic: O(n²) also written as O(n ^ 2)
  • Cubic: O(n³) also written as O(n ^ 3)
  • Exponential: O(x^n)
  • Factorial: O(n!)

Okay, lots to digest here. So let’s break it down with each one:

Complexities — Examples, Explanations, Comments

Constant : O(1)

Basics: This does not depend on the size of the input. It is by far the best time complexity to have.

Example: Accessing any single element in an array.

Explanation: Only one operation is needed to locate it, no matter how many elements the array holds.

Code Sample: Constant O(1)
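A minimal sketch in Python (the function name is just for illustration): no matter how long the list is, grabbing one element by index is a single step.

```python
def get_first(items):
    # Indexing into a list is one operation, regardless of
    # the size of the list: O(1).
    return items[0]
```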

Comments: This will eventually be easily recognisable. You will see it with the most basic and simplistic functions.

Logarithmic: O(log(n))

Basics: Think of trying to find the word “pudding” in the dictionary. You are not going to start with the “A” section to search through each and every individual word until you get to “pudding”. You’d look for the “O, P, Q, ” sections first, then in “P” to look for “pu” and so on. So a divide and conquer approach where each searched section is getting progressively more specific towards your goal/target AND at the same time ruling out large sections of unimportant words.

Cool, that’s the basic idea, but what is a logarithm? Math anxiety coming in hot. Don’t worry.

The definition of logarithm is: A quantity representing the power to which a fixed number (the base) must be raised to produce a given number.

Great, but reading that is not much help, so let’s visualise it.

Check out this short youtube video by selikapro: Big O Notation Series #4: The Secret to Understanding O (log n)!

After seeing this video, you can tell that binary search is the classic example of O(log n).

A Binary Search algorithm has a best case efficiency of O(1) and a worst (and average) case efficiency of O(log n). So for this example, we’re going to use binary search (but for Big O, we’re assuming the worst case scenario). Before we do anything else, let’s do a diagram walk through of a binary search. The goal is to help you visualise.

Explanation: We will use a sorted array with 16 elements.

Our target number will be 13: meaning the number we are searching for in the sorted array is right there at the 5th index.

Binary search means we should divide this sorted array repeatedly until we get to the target: 13. Kind of like looking for a word in a dictionary.

Here’s my diagram of what it would look like visually:

Kathleen’s binary search diagram

Not horrible when you can see it, and with the help of the youtube link you can hopefully understand what O(log n) is.

How many layers deep do we need to go? In other words: how many times can we halve the array before only one element is left? That is exactly what a logarithm answers: to what power must we raise the base (here, 2) to reach n?

It’s possibly too complex for this beginner guide, so just try to remember: if you are doing binary search, then you are working with logarithmic complexity, or O(log n). The below code example does the same thing as the above diagram:

Logarithmic O(log n): Code Example:

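Here is a sketch of binary search in Python. The exact numbers from the diagram aren’t reproduced here, so the 16-element sorted array in the test is made up, with the target 13 sitting at index 5:

```python
def binary_search(sorted_list, target):
    # Each pass halves the remaining search space,
    # so the worst case is O(log n).
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid          # found it: return the index
        elif sorted_list[mid] < target:
            low = mid + 1       # target is in the right half
        else:
            high = mid - 1      # target is in the left half
    return -1                   # target is not in the list
```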

Comments: Space and time complexity are measured separately and are not always like-for-like. Meaning you can have a space complexity of O(1) and a time complexity of O(n). This is not covered in any depth in this article, but it will be in the next one: Solving for Big O: What am I supposed to be doing?

Linear: O(n)

Basics: Essentially O(n) means the time and space complexity depend on what “n” is. Or rather, how much “n” (as in data/input) there is to go through. So be on the lookout for a one-to-one relationship between the data size/input and the time to completion. Unlike O(log n), you are going to look at Every. Single. Thing.

Explanation: For loops are an excellent example of O(n). Just think — how long and how much space does it take to go through a for loop? Well the answer is — it depends. Is the for loop looking through a singular string with just 3 letters, or is it going through an array of a million numbers?

Linear: O(n) Code Sample:

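A minimal Python sketch of a linear scan; in the worst case (the target is last, or missing entirely) you touch every element once:

```python
def contains(items, target):
    # In the worst case we check Every. Single. Thing: O(n).
    for item in items:
        if item == target:
            return True
    return False
```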

Comments: Remember, even if you are only looping through half the array, you will still have it as O(n). This is because in calculating the complexity you drop the constants. Constants are briefly explained at the bottom. More detail to come next time.

Log-linear: O ( n log(n) )

Basics: Log-linear is essentially just O(n) multiplied by O(log n). So linear complexity * logarithmic complexity. It is where log(n) operations happen n times. It is common in recursive sorting algorithms.

Explanation: Usually we can drop constants and smaller terms, see below for constants. But here O(n) and O(log n) are multiplied, not added. It is not like O(1) + O(n) = O(n), where the simplistic O(1) is inconsequential when added to O(n). With multiplication, neither factor can be dropped, so they stay combined as one term: O(n log(n)).

A good example would be a Merge Sort algorithm, where you take an unsorted array and merge it together like a zipper going from least to most. Khan Academy has provided this beautiful, albeit blurry diagram below:

Merge Sort Diagram from Khan Academy

So you divide until you get a bunch of single element arrays. Then you start merging them together, scooting the lower number to the left and the higher number to the right.

Log-linear O(n log(n)) Code Example:

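A sketch of merge sort in Python, following the divide-then-zip-together idea above:

```python
def merge_sort(items):
    # Splitting in half gives log n levels; merging at each level
    # touches all n elements, so the total is O(n log n).
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves like a zipper,
    # always taking the smaller front element first.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```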

Comments: This is time complexity of O(n log(n)). Space complexity would be a bit different and I have not covered it here.

Quadratic: O(n²) also written as O(n ^ 2)

Basics: This is known as “O of n squared”, which is just shorthand for n * n complexity.

Explanation: Whereas O(n) means the input is proportional to the time & space complexity, O(n²) is where it’s the same… but squared. Think of a for loop nested within another for loop! It still depends on how many elements you need to loop over, but now, for each element, you loop over the whole input again. Note: the loops have to be nested to give O(n²); two loops one after the other is just O(n) + O(n), which is still O(n).

Quadratic O(n²) Code Example:
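A minimal sketch in Python: a duplicate check with one loop nested inside another, so in the worst case every element is compared against every other element.

```python
def has_duplicate(items):
    # For each element we scan the whole list again:
    # n * n comparisons in the worst case, so O(n^2).
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```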

Comments: As you can see O(n²) complexity is not very efficient. You may come across it when using Bubble Sort algorithms. But it doesn’t mean you should avoid it at all costs. The decision comes back to what is it that you need to solve.

Cubic: O(n³) also written as O(n ^ 3)

Overview: Essentially the same idea as Quadratic but cubed. So 3 nested for loops as an example.
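To make that concrete, here is a sketch with three nested loops over the same input (the function is just for illustration):

```python
def count_triples_summing_to(items, target):
    # Three nested loops over the same input:
    # n * n * n checks, so O(n^3).
    count = 0
    for a in items:
        for b in items:
            for c in items:
                if a + b + c == target:
                    count += 1
    return count
```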

Exponential: O(x^n) (x being any fixed number greater than 1, commonly 2)

Basics: The complexity increases exponentially with each addition to the input data set. See the graph above? It starts out shallow and then increases exponentially. Takes off like a rocket. Also think of it like this: the number of steps it takes to complete a task is a constant to the nth power.

Explanation: Try to find every combination of numbers for a passcode with a length of n. Brute force takes a long time, and each extra digit multiplies the number of combinations to try, so the running time quickly becomes impractical.

Another helpful way to think of it could be bacteria growing. If a singular bacterium splits and turns into two, then you can quickly see how the growth is exponential. And as with bacteria, eventually you will run out of space/food for the bacteria or kill the host. In this case it may just crash your code. Call stack exceeded. Avoid exponential time.

Exponential O(x^n) Code Example:

A simple code version to visualise this would be the Fibonacci recursion problem.
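A minimal sketch of that naive Fibonacci recursion in Python:

```python
def fib(n):
    # Each call spawns two more calls, roughly doubling the work
    # at every level of recursion: O(2^n).
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```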

Comments: Probably best to avoid solutions that have O(x^n), find a better way!

Factorial: O(n!)

Basics: Factorial growth is the worst complexity for efficiency. Factorial is written as n!, so 5! would be 5 * 4 * 3 * 2 * 1 = 120. 100! would be 100 * 99 * 98 * 97…. you get the idea.

Explanation: You may see mention of the ‘travelling salesman’ problem. Note that simply computing n! with a recursive function only takes O(n) multiplications; the O(n!) complexity shows up when you generate every possible ordering (permutation) of n items.

Factorial: O(n!) Code Example:
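As an illustration, here is a sketch of a function that builds every permutation (ordering) of a list. With n choices for the first slot, n-1 for the next, and so on, there are n! results, so the work is O(n!):

```python
def permutations(items):
    # n choices for the first slot, n-1 for the next, and so on:
    # n! orderings in total, so O(n!).
    if len(items) <= 1:
        return [items]
    result = []
    for i in range(len(items)):
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([items[i]] + perm)
    return result
```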

Comments: You can see how quickly this gets beyond human comprehension. It is why guessing a complex password using brute force is so difficult when the password includes numbers, letters and special characters. Infinity might not be long enough.

Calculating Big O: Drop the constants.

What does this mean?

While calculating Big O complexity, you will deal with many considerations. You’re treating the input size as “n”, which can be arbitrarily large. So as “n” gets bigger, the smaller terms and constant factors become insignificant to the overall result.

Dropping Constants Code Example:

Kathleen’s quick overview of dropping constants in Big O Notation
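A quick sketch of the idea in Python: two back-to-back loops do 2n operations, but Big O drops the constant 2 and calls it O(n).

```python
def count_operations(items):
    # Two separate passes over the input: n + n = 2n operations.
    # Big O drops the constant 2, so this is still just O(n).
    ops = 0
    for _ in items:
        ops += 1
    for _ in items:
        ops += 1
    return ops
```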

So that’s the full list and the broad introduction to all of the Big O complexities. Coming up next: a look at how to work out the complexity of a few key example algorithms.

Citations

We are all the sum of the knowledge of those who have come before us! If you wanna be good at coding and solving problems, you need to get good at searching.

Special shout out to the following sources that helped me write this blog post. Check them out for even more insight into Big O Notation:

Big O Cheatsheet

Digital Ocean’s Big O Notation

Jared Neilsen’s blog entry on Big O

Brian Ryu on Dropping Complexity

Ariel Salem’s Easy to Use Guide on Big O

Big O Notation for dummies

Adrian Meija How to find the Big O Complexity

Big O Space and Time Complexity

Youtube:

Big-O notation in 5 minutes — The basics

Big O Notation Series #4: The Secret to Understanding O (log n)!

I hope this walk-through helped. It is a very long one, so like I said, think of it as a jumping off point or a verbose cheatsheet for what are the options when talking about Big O Notation.
