In our previous assignment, we discussed the linear search algorithm. With linear search, if an array contains 10 elements, the algorithm takes, at most, 10 steps. If the array contains 1,000 elements, it takes, at most, 1,000 steps. However, simply stating the number of steps an algorithm takes for a particular input is not an effective way to communicate algorithmic efficiency. This is where Big O Notation comes into play.
When we analyze an algorithm's complexity, we aim to understand how its performance scales with the size of the input — this is known as time complexity. Focusing on the exact number of steps can be misleading, as it doesn't accurately capture how the number of steps grows with increasing input size. Big O Notation helps us address this by focusing on the scalability of an algorithm. It allows us to compare and analyze different algorithms more effectively by highlighting how the number of steps changes as the input size increases rather than fixating on the exact step count for any specific input.
The "O" in Big O stands for "order." This refers to the order of magnitude of the algorithm's growth rate. In simple terms, it's about understanding which part of the algorithm has the most significant impact on how its run time increases with larger inputs.
Big O is called "Big" because it's concerned with the biggest factor affecting an algorithm's time complexity. It ignores lower-order terms and constant factors that don't change the overall growth trend as the input size gets very large.
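For instance, here is a minimal, hypothetical Python sketch (the function name and details are made up for illustration): a function that makes two separate passes over a list of N elements, plus a couple of constant-time assignments, takes roughly 2N + 2 steps. Big O drops the constant factor and the constant term, so the function is still classified as O(N).

```python
def count_evens_and_odds(numbers):
    # Two constant-time assignments...
    evens = 0
    odds = 0

    # ...followed by two separate passes over the list: roughly 2N + 2 steps.
    for n in numbers:
        if n % 2 == 0:
            evens += 1
    for n in numbers:
        if n % 2 != 0:
            odds += 1

    # Big O ignores the constant factor (2) and the constant term (+2),
    # so this function is still described as O(N).
    return evens, odds
```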
With Big O Notation, we describe the upper limit or the worst-case scenario of an algorithm's time complexity. The worst-case scenario is the situation where the algorithm takes the longest possible time to complete a task. For linear search, the worst case happens when the element we're looking for is either the last one in the list or not there at all. In these cases, the algorithm has to check every single element.
That's why linear search has a time complexity of O(N), pronounced "oh of N," where N is the input size. With linear search, the worst-case number of steps grows at the same rate as the input size N: with a three-element list, we may have to take up to three steps to find an element, and with a three-million-element list, we may have to take up to three million steps.
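To make that concrete, here is a minimal linear search sketch in Python (the function name and sample data are just illustrative). In the worst case, the target is the last element or isn't in the list at all, so the loop performs N comparisons:

```python
def linear_search(items, target):
    """Return the index of target in items, or None if it isn't present."""
    for index, value in enumerate(items):
        if value == target:
            return index          # found it; stop early
    return None                   # checked all N elements without a match

# Worst case: the target is absent, so every one of the 6 elements is examined.
print(linear_search([4, 8, 15, 16, 23, 42], 99))  # prints: None
```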
In Big O, we often focus on the worst case because it gives us an upper bound on how long the algorithm could take. It tells us the maximum time complexity we can expect, no matter what specific data we feed into the algorithm.
Aside from the worst-case scenario, it can sometimes be valuable to consider the average-case and best-case scenarios when designing and evaluating algorithms for specific use cases or inputs. While we will briefly touch on average-case and best-case scenarios when discussing sorting algorithms, it's important to note that the worst-case scenario is often the most critical consideration. It guarantees the upper bound of an algorithm's time complexity for all possible inputs and ensures that the algorithm won't perform worse than that.
Now, let's delve into various time complexities and examine them in detail.