Big O Notation: what is this thing?
You must have heard about Big O notation when you started learning to code. This buzzword is all over the place and is initially confusing.
Ask yourself, how does Google search work? How do you get the search results listed based on relevancy? This depends on the algorithm used. Algorithms are steps performed on some data to solve some problem(s).
What is Big O notation?
It is a theoretical measure of the execution of an algorithm. That’s it! It is a fancy mathematical notation that shows how efficiently an algorithm performs relative to its input size. If you really want to dig into the mathematical details behind Big O notation, you first need to understand logarithms. You can solve a problem in any number of ways using different algorithms, and there are different ways to measure their complexity: Big Omega (Ω), Big Theta (θ), and Big O (the Greek letter Omicron, O). Below is a simple example.
const numbers = [4, 7, 9, 10, 25, 46, 15];

/*
If we traverse through the numbers in this array to find:

4  - The number 4 is at the beginning of the array. It takes the fewest
     operations, so the complexity is measured in Omega (Ω): the best case.

10 - To find 10, you need to keep moving until the middle of the array, so
     the complexity is measured in Theta (θ): the average case.

15 - For 15, it takes the maximum number of operations, as we need to
     traverse all the elements before finding it. This is the worst case,
     which is Omicron (O), or Big O.
*/
Big O always measures the worst case [remember: best and average cases are measured by Omega (Ω) and Theta (θ), not by Big O]. The two things to consider while measuring the efficiency of an algorithm are time complexity and space complexity.
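The traversal above can be sketched as a runnable linear search. This is my own illustration (the `linearSearch` helper and its comparison counter are not from the original article):

```javascript
// Linear search that returns how many comparisons it took to find the target.
// (Illustrative helper: returns -1 if the target is not in the array.)
function linearSearch(arr, target) {
  let comparisons = 0;
  for (let i = 0; i < arr.length; i++) {
    comparisons++;
    if (arr[i] === target) return comparisons;
  }
  return -1;
}

const numbers = [4, 7, 9, 10, 25, 46, 15];
linearSearch(numbers, 4);  // 1 comparison  - best case, Omega (Ω)
linearSearch(numbers, 10); // 4 comparisons - average case, Theta (θ)
linearSearch(numbers, 15); // 7 comparisons - worst case, Big O
```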

Time Complexity
Imagine you have two code snippets (see below), and one takes longer to run than the other. Time complexity is always measured in terms of the number of operations performed, not in seconds. You can run the same code on a system with a faster processor or on one with a low-end processor; the number of operations stays exactly the same.
Code 1

function a() {
  // Does some operation in 25 secs
}

Code 2

function b() {
  // Does the same operation as function a() in 10 secs
}
Space Complexity

This relates to the space, or memory, that your computer needs to run a specific piece of code. In the example above, function b(), which took only 10 seconds to run, uses a lot more memory than function a().
Understand that some algorithms take longer but use less memory, while others run faster but use more space.
In Big O, we have different ways to evaluate complexity based on how the time taken by an operation grows with the input size (n).

There are a number of Big O complexities, and in this article we will look at four of them.
1. O(n)
Big O(n) is also referred to as linear time.
In the below function, a counter is passed as a parameter, and its size is set to 10. The counter size determines the number of executions, which is proportional to the size of n. The loop will execute 10 times and then exit.
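The snippet this paragraph describes did not survive in the text, so here is a minimal sketch of what it likely looked like (the function name `logItems` and the operation counter are my assumptions):

```javascript
// O(n): the loop body runs once per unit of n, so the number of
// operations grows linearly with the input size.
function logItems(counter) {
  let operations = 0;
  for (let i = 0; i < counter; i++) {
    operations++; // one operation per iteration
  }
  return operations;
}

logItems(10); // the loop executes 10 times and exits
```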


2. O(n²)
Big O(n²) is also referred to as quadratic time.
Similarly, here there are two nested for loops with n = 2, so the function body executes n * n = 4 times.
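As with the previous one, the original snippet is missing, so here is a sketch of the nested-loop case (the function name `logPairs` is my own):

```javascript
// O(n²): two nested loops, so the inner body runs n * n times.
function logPairs(n) {
  let operations = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      operations++; // runs n * n times in total
    }
  }
  return operations;
}

logPairs(2); // 2 * 2 = 4 operations
```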


Between the two, O(n) and O(n²), which one do you think is the better algorithm?
Yes, O(n) is better than O(n²).
3. O(1)
In Big O(1), the number of operations does not change as the input size increases. In the example below, the function always performs a single operation to return the sum. This is referred to as constant time, and it is the most efficient Big O.

function c(add) {
  return add + add // here the number of operations is 1
}

4. Big O (log n)
Big O(log n) is also referred to as logarithmic time.
This complexity requires basic knowledge of logarithms.
Let’s take an example of a sorted array (a collection of items stored in the computer’s memory): [1, 2, 3, 4, 5, 6, 7, 8]. If we have to find the number 8, we do not need to traverse the entire array. We know this array is sorted, and the number we are looking for is in the later part of the array, so we can discard the first half, [1, 2, 3, 4]. Then we can discard [5, 6], and finally we can also discard [7]. It took us 3 operations to find 8 in this array. This approach is also called Divide and Conquer.
This can also be expressed as a logarithm: 2³ = 8 ⇒ the logarithm of 8 to base 2 = 3.
This is very useful when you need to find a specific item in a large sorted dataset. However, it works only on sorted arrays.
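The halving steps described above are exactly what binary search does. Here is a sketch, with a step counter added for illustration (the counter and the object return shape are my own, not from the original):

```javascript
// Binary search on a sorted array: each step halves the remaining
// search space, so the number of steps grows as O(log n).
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  let steps = 0;
  while (low <= high) {
    steps++;
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return { index: mid, steps };
    if (sorted[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                     // discard the right half
  }
  return { index: -1, steps }; // target not present
}

binarySearch([1, 2, 3, 4, 5, 6, 7, 8], 8); // finds index 7 in 4 steps
```

Note the exact count depends on what you treat as one operation: the article counts the 3 halvings, while this sketch counts each comparison as a step.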

Note:
When we simplify Big O, we need to discard constants and non-dominant terms.
Example of discarding constants: if a function contains two separate for loops over n, that is O(n + n), or O(2n). Discard the constant 2 from 2n, and it is still Big O(n).
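A sketch of the two-loop case (my own example, assuming the loops run one after the other):

```javascript
// Two sequential loops over n: n + n = 2n operations.
// Dropping the constant 2, this is still O(n).
function twoLoops(n) {
  let operations = 0;
  for (let i = 0; i < n; i++) operations++; // first loop: n operations
  for (let j = 0; j < n; j++) operations++; // second loop: n more
  return operations; // 2n in total
}

twoLoops(5); // 10 operations, but still O(n)
```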

Example of discarding non-dominant terms: we only keep the dominant term when we simplify Big O notation. For instance, O(n² + n) simplifies to O(n²), because for large n the n² term dominates the n term.
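The example this paragraph refers to appears to have been lost, so here is a sketch of what such an example typically looks like (the function name `dominantTerms` is my own):

```javascript
// A nested loop (n² operations) followed by a single loop (n operations):
// O(n² + n). For large n the n² term dominates, so this simplifies to O(n²).
function dominantTerms(n) {
  let operations = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      operations++; // n² operations
    }
  }
  for (let k = 0; k < n; k++) {
    operations++; // + n operations
  }
  return operations; // n² + n in total
}

dominantTerms(10); // 110 operations: the 100 from n² dominates the 10 from n
```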

To summarize: Big O(1) (constant time) is the most efficient, followed by Big O(log n), Big O(n) (linear time), and Big O(n²).

I hope this helped you understand Big O notation. If you liked this article, please consider following and giving me a clap!