Solving Big O Notation: What am I supposed to do?
We’ve covered the options for Big O Notation. Now let’s look at some JavaScript code and dissect the time and space complexity of a given solution to a code problem.
How do we calculate the time and space complexity of a given solution?
What are we adding up and what should we ignore?
First off, time complexity and space complexity are not like-for-like. Sometimes they are very different.
A given code solution might only take up a small spot in the computer’s RAM when it runs, but that same bit of code might be executed any number of times. Those are two different considerations for the same bit of code: the space required and the time required.
We’re going to talk about algorithms. Don’t worry I’ll include examples for the math-anxious people. (Mostly just for me.)
Definition from Britannica:
Algorithm: A specific procedure for solving a well-defined computational problem.
Using algorithms requires an understanding of the options available for solving a problem.
Those options include, but are not limited to:
- The hardware
- The network
- The programming language
- The performance constraints of a particular code solution.
So how much space can we use on the computer’s memory? How much time is appropriate to spend while the code works its magic?
The algorithm needs to be correct, meaning it fully solves the problem at hand, and it should do so in the most efficient way possible.
If you want to look at what are the options for Big O, then check out the previous article I wrote on “Big O Notation: What Are They Talking About?”.
Small reminder: Constants and dropping them.
From my previous post you might remember a basic tip for constants. Our goal is to identify the result in terms of “n”, which can be arbitrarily large. As “n” gets bigger, the smaller considerations become insignificant to the overall picture. You will see below that when an example works out to O(1) + O(n) + O(3), it means O(n) in the big picture.
Good stuff! Let’s look at Constant, Linear, Logarithmic, Quadratic, and Log-linear time complexity solutions. Solving Big O Notation — The Big Solve!
Constant O(1)
Prompt: Check if a number is odd or even.
Code Example:
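The original code example didn’t survive here, so below is a sketch of what it likely showed, based on the description that follows: an if/else version and a ternary version of the same check (the function names are my own):

```javascript
// if/else version: a fixed number of operations, no matter the input
function isEvenIfElse(num) {
  if (num % 2 === 0) {
    return "Even";
  } else {
    return "Odd";
  }
}

// Ternary version: the same single check in one expression
const isEvenTernary = (num) => (num % 2 === 0 ? "Even" : "Odd");

console.log(isEvenIfElse(11));     // "Odd"
console.log(isEvenTernary(10000)); // "Even"
```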
Time Complexity: O(1) Constant Time
Space Complexity: O(1) Constant Space
Why?
Because the operations in each example (the if/else and ternary versions above) run only once, the system takes the same amount of time and space to run the code regardless of the input. So if I put in 10,000 rather than 11, it is still O(1). We are not adding any new variables to store the answer, so there are no extra space requirements.
Other examples of Constant Time:
- Find a value in a map
- Print the first element from a list
Linear O(n)
Prompt: Find the largest number in an unsorted array.
Code Example:
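The original code example is missing here; this is a sketch of what it likely looked like, matching the seven-element array and single tracking variable described below (names are my own):

```javascript
function findLargest(arr) {
  // One extra variable to track the current largest value
  let largest = arr[0];
  // The loop body runs once per element: O(n) time
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > largest) {
      largest = arr[i];
    }
  }
  return largest;
}

// Seven elements, so the loop runs on the order of seven times
console.log(findLargest([4, 82, 11, 3, 97, 26, 50])); // 97
```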
Time Complexity: O(n) Linear Time
Space Complexity: O(n) Linear Space
Why?
Time: There are other operations happening here, but they are inconsequential compared to the loop, and the loop depends on the input. Above, there are seven elements in the array, but if there were 700, the code would still need to loop through the array 700 times. The time complexity is linear because the running time grows in direct proportion to the size of the input.
Space: Again, the memory used by adding one variable to the function is inconsequential compared to the memory taken up by the input array itself. The space we care about is whatever takes up the most space. (If you count only the extra, auxiliary space and ignore the input, this would be O(1) instead.)
This is a good example of looking at Big O complexity in terms of the potential worst case. If the input array held a million elements, you would not bother factoring a few small constant-time operations into your complexity.
Other Examples of Linear Time:
- Print all the values in a list
- Sum all the numbers in an array
Logarithmic O(log n)
Prompt: Find a word in a dictionary or find a number in a sorted array.
Code Example:
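The original code example is missing; here is a sketch of the iterative binary search it likely showed, using the three variables (start, end, midpoint) mentioned below (the function name is my own):

```javascript
function binarySearch(sortedArr, target) {
  // The only extra storage: three variables, so O(1) space
  let start = 0;
  let end = sortedArr.length - 1;

  while (start <= end) {
    const midpoint = Math.floor((start + end) / 2);
    if (sortedArr[midpoint] === target) {
      return midpoint; // found the target's index
    } else if (sortedArr[midpoint] < target) {
      start = midpoint + 1; // discard the lower half
    } else {
      end = midpoint - 1; // discard the upper half
    }
  }
  return -1; // target is not in the array
}

console.log(binarySearch([2, 5, 8, 12, 16, 23, 38, 56, 72, 91], 23)); // 5
```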
Time Complexity: O(log n) Logarithmic Time
Space Complexity: O(1) Constant Space
Why?
Time: Because the provided array is already sorted, we can divide it in half and check whether the element we’re looking for is greater or less than the current middle. Once we know which way to go, we can keep splitting the array in half, discarding the half we have ruled out, until we reach the target element. This saves a lot of time because we are not looking at every single element individually; we are cutting the remaining work in half at each step.
Space: Because we are doing this binary search iteratively (as in, not recursively), we only store three variables (start, end, midpoint), and that is all we need to make this search. O(3) is still a constant, which simplifies to O(1).
Other Examples of Logarithmic Time:
- Binary Search with recursion (Space complexity would be O(log n) in that case).
Quadratic O(n²)
Prompt: Check for duplicates in an array using a nested for loop.
Code Example:
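The original code example is missing; this is a sketch of the naive nested-loop duplicate check it likely showed, using a five-element names array to match the walkthrough below (names and values are my own):

```javascript
function hasDuplicates(names) {
  // Outer loop: runs once per element, O(n)
  for (let i = 0; i < names.length; i++) {
    // Inner loop: runs over the whole array again for each outer pass, O(n) more
    for (let j = 0; j < names.length; j++) {
      // Skip comparing an element against itself
      if (i !== j && names[i] === names[j]) {
        return true; // found a duplicate
      }
    }
  }
  return false;
}

// Five elements: the outer loop runs 5 times, the inner loop 5 times each
console.log(hasDuplicates(["Ana", "Ben", "Cal", "Dee", "Ben"])); // true
console.log(hasDuplicates(["Ana", "Ben", "Cal", "Dee", "Eve"])); // false
```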
Time Complexity: O(n²) Quadratic Time
Space Complexity: O(1) Constant Space
Why?
Time: For every element in the input array, we loop over the array again. The outer loop over i runs O(n) times, and for each of those passes the inner loop over j also runs O(n) times. Because we traverse n elements n times, the time complexity is O(n²). If the array has 3 elements it makes 9 checks, with 4 elements it makes 16 checks, and so on.
With a five-element names array, the outer loop runs five times, and for each of those passes the inner loop runs five more times, so there are 25 checks. If the array had 500 elements, that would mean 250,000 checks. This is why this is considered a ‘naive approach’: your input of n elements gets checked n × n times, so a large input will take a long time and might not be the most efficient choice.
Space: We only create two additional variables outside of the input array: i and j. So again: O(1) + O(1) = O(2), which as a constant simplifies to O(1).
Other Examples of Quadratic Time:
- Finding all ordered pairs in an array
- Sorting using bubble sort or insertion sort
Log-linear: O(n log(n))
Prompt: Combine two separate arrays into one array, sorted lowest to highest (Merge Sort).
Code Example:
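The original code example is missing; here is a sketch of the merge sort it likely showed, using the merge() and mergeSorter() function names referenced below (the internals are my own reconstruction):

```javascript
// merge(): combines two already-sorted arrays into one sorted result, O(n) time
function merge(left, right) {
  const result = [];
  let l = 0;
  let r = 0;
  // Look at each element at least once, taking the smaller front value each time
  while (l < left.length && r < right.length) {
    if (left[l] <= right[r]) {
      result.push(left[l++]);
    } else {
      result.push(right[r++]);
    }
  }
  // One of the arrays may still have leftovers; append them
  return result.concat(left.slice(l)).concat(right.slice(r));
}

// mergeSorter(): keeps splitting the array in half, giving O(log n) levels of merging
function mergeSorter(arr) {
  if (arr.length <= 1) return arr;
  const middle = Math.floor(arr.length / 2);
  return merge(mergeSorter(arr.slice(0, middle)), mergeSorter(arr.slice(middle)));
}

// Combine two separate arrays into one array, sorted lowest to highest
console.log(mergeSorter([8, 3, 14].concat([1, 9, 6]))); // [1, 3, 6, 8, 9, 14]
```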
Time Complexity: O(n log n) Log-linear Time
Space Complexity: O(n) Linear Space
Why?
Time: In the merge() function we are iterating over two arrays, which means we will need to look at each element at least once. This is similar to the linear time example above, meaning O(n).
But, in the mergeSorter() function we are doing something similar to the binary search we did above in Logarithmic Time O(log n).
The O(n) and O(log n) are not like for like! We’re doing two different kinds of work at the same time, so instead of dropping one (the way O(3) rounds down to O(1)), we multiply them: the O(n) merge work happens at each of the O(log n) levels of splitting, which gives us O(n log n).
Space: Now, the space taken up by this is dependent on the two arrays we are combining. We will need to take the two arrays and combine them into the result array. That could take up ten spaces or a thousand spaces, it depends on the combined length of the two input arrays we are given.
Other Examples of Log-linear Time:
- Quick Sort (on average; its worst case is O(n²))
- Heap Sort
- Tree Sort
These five examples are a good starting point for anyone who is new to Big O analysis. There are others that I have not covered here: Exponential (e.g. the naive recursive Fibonacci sequence) and Factorial (e.g. generating every permutation of a string) time complexities.
If you are using either of those in your coding solutions, you probably need to re-think your solutions, big style. They are often so inefficient that, I would argue, their only purpose would be to show you what not to do with your poor computer’s time and space abilities.
I hope this was helpful for looking at code to see how to actually calculate the Big O complexity. Happy coding!