Exploring Time Complexities

O(1)

An algorithm with a time complexity of O(1) is one where the number of steps required to complete an operation stays the same, regardless of the size of the input. This constancy means that the operation does not become slower as the input size increases. It's important to note that O(1) doesn't imply the operation takes only one step; it means the number of steps doesn't vary with input size. If we're processing a list, the input size, or N, can be 10 or 10,000,000, and our algorithm will take the same number of steps. That's why we don't see N in this time complexity at all -- it doesn't matter.

A classic example of an O(1) operation is retrieving a value from a hash table, such as an object in JavaScript. In a hash table, data is organized in key-value pairs. Accessing a value using its key is a constant-time operation. It doesn't matter how large the hash table is or how many elements it contains; looking up a value with its key always takes the same amount of time. This efficiency is thanks to a hashing function, which computes an index for where the value is stored. When we add a key to our object, JavaScript will use this hash function on the key to compute where the value should be stored. When we later want to retrieve the value, we can use the same key to get the same computed index, and JavaScript knows exactly where to find our value without having to search through the entire object.

Consider the following JavaScript function:

function getName(obj) {
  return obj['name'];
}

let person = {"name": "Srdjan", "age": 39, "job": "Software Engineer"};
getName(person); // 'Srdjan'

In this snippet, retrieving the 'name' property from the person object is an O(1) operation. The number of steps required to access this property remains constant, irrespective of how many properties the object contains.

Another example of O(1) complexity is accessing an array element by its index. The size of the array does not affect the steps required to access a particular element. This is because arrays have a fixed memory layout, allowing each element's location to be determined directly from its index.

function sumFirstAndLast(arr) {
  if (arr.length > 1) {
    return arr[0] + arr[arr.length - 1];
  }
  return 0;
}
sumFirstAndLast([1,2,3,4,5,6]); // 7

Let's clarify the time complexity in the context of this function:

  • Array Length Check - When we perform the check if (arr.length > 1), it involves two fundamental operations:
    • Accessing the length property of the array. This is a direct property access operation and is done in constant time, as the length property is stored with the array object and does not require iterating through the array.
    • Comparing this length with the number 1. This comparison is a basic operation that occurs in constant time.

Together, these two operations are still considered O(1) because they are executed in a fixed, predictable amount of time that does not change regardless of the array's size.

  • Accessing and Summing Elements - If the array has more than one element, the function then performs the following actions:
    • Accesses the first element (arr[0]). This is an O(1) operation because accessing an element at a specific index in an array is done in constant time.
    • Accesses the last element (arr[arr.length - 1]). Similar to accessing the first element, this operation is also O(1). The index of the last element is calculated using the length property, and then the element at that index is directly accessed.
    • Sums these two elements. The arithmetic addition of two numbers is a basic operation performed in constant time.
  • Return Statement - Finally, the function returns the sum (or 0 if the array has fewer than 2 elements). The return operation is a single step and completes in constant time.

In summary, the sumFirstAndLast function involves a fixed sequence of steps. Even if our input grows from 1,000 to 1,000,000 elements, the number of operations will remain the same. We do perform several steps, but even the most complex step in this sequence completes in constant time, so we can consider sumFirstAndLast as a whole to complete in constant time.

O(logN)

When we discuss an algorithm with O(logN) time complexity, we are referring to a situation where the number of steps necessary to complete a task increases in proportion to the logarithm of the input size. This concept might seem abstract at first, so let's unpack it further.

In mathematics, the logarithm of a number is the exponent to which a fixed value (known as the base) must be raised to obtain that number. In the context of algorithm complexity, we typically use the logarithm base 2, denoted as log2.

When we say "logarithm of the input size," we are referring to the number of times we need to divide the input size by 2 until we reach a value of 1.

For example, let's consider a collection with 8 elements. The logarithm base 2 of 8 is 3 because we can divide 8 by 2 three times to reach 1: 8 ÷ 2 = 4, 4 ÷ 2 = 2, and 2 ÷ 2 = 1.
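
To see this halving in code, here's a small illustrative helper (the countHalvings name is ours, used purely for demonstration) that counts how many times a number can be divided by 2 before it reaches 1:

function countHalvings(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2); // halve what remains
    steps += 1;
  }
  return steps;
}

countHalvings(8);  // 3
countHalvings(16); // 4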

A well-known example of O(logN) complexity is the binary search algorithm. We briefly looked at binary search in a previous assignment. In binary search, we find a specific value in a sorted array by repeatedly dividing the search space in half. Here's how we can implement it in JavaScript:

function binarySearch(sortedArray, target) {
  let start = 0;
  let end = sortedArray.length - 1;

  while (start <= end) {
    let mid = Math.floor((start + end) / 2);
    if (sortedArray[mid] === target) {
      return mid; // Target found
    } else if (sortedArray[mid] < target) {
      start = mid + 1; // Search right half
    } else {
      end = mid - 1; // Search left half
    }
  }
  return -1; // Target not found
}

In this binary search function, we begin by examining the middle element. If it's not the target, we eliminate either the left or right half of the array, depending on whether the target is greater or smaller than the middle element. This process is repeated, halving the search space each time.

Why is Binary Search O(logN)?

The logarithmic nature of binary search becomes clear when we consider large arrays. If you have an array of 1,000 elements, a binary search would need about 10 steps in the worst case (as log2(1000) is approximately 10). Doubling the array size to 2,000 elements increases the steps needed only by one, to approximately 11 steps. This demonstrates logarithmic growth, where increasing the number of elements doesn't proportionally increase the number of steps required.
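
If you'd like to check these numbers yourself, Math.log2 (a built-in JavaScript method) gives a quick estimate of the worst-case step count; the worstCaseSteps helper below is our own, shown purely for illustration:

function worstCaseSteps(n) {
  return Math.ceil(Math.log2(n)); // roughly the number of halvings needed
}

worstCaseSteps(1000); // 10
worstCaseSteps(2000); // 11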

To recap, logarithmic algorithms, like binary search, are efficient, especially with large data sets. They considerably reduce the number of steps needed compared to linear (O(N)) algorithms, which we will discuss in the upcoming section. Each step in an O(logN) algorithm drastically reduces the remaining problem size.

O(N)

The O(N) time complexity means that the performance of an algorithm or operation is directly proportional to the size of the input, denoted by N. This direct proportionality means that there is a one-to-one relationship between the increase in input size and the corresponding increase in the number of operations or time required. Put simply, if you double the size of the input, the number of steps or the time needed to complete the task also doubles.

To illustrate O(N) complexity, let's revisit the linear search. In this type of search, the algorithm checks each element in a collection one by one to find a target element.

function linearSearch(array, target) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) {
      return i; // Element found
    }
  }
  return -1; // Element not found
}

In this function, if the target element is the very last item in the array or not present at all, the algorithm will have to check every single element. This results in N comparisons, where N is the number of elements in the array. It's important to note that linear search does not require the data to be sorted, which is why it must inspect each element in turn.

Why is Linear Search O(N)?

The linear search is a perfect demonstration of O(N) complexity because its performance is directly tied to the number of elements in the array. If the array has 10 items, it may take up to 10 checks to find the target or conclude it's not there. If the array has 1,000 items, it may take up to 1,000 checks.

This linear relationship makes O(N) algorithms predictable but not always efficient, especially with large data sets. As the input grows, the time or steps increase at the same rate.

Examples of O(N) Operations:

  • Iterating Over an Array or List: Whenever you have to go through each element of a list or array, you're performing an O(N) operation. The time taken is proportional to the number of elements you're iterating over.

  • Copying Elements: Copying each element from one array to another is another example of an O(N) operation. The number of copy operations is equal to the number of elements in the array.
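
As a quick illustration of that second point, here is a minimal sketch of an O(N) copy (the copyArray name is ours, chosen for demonstration):

function copyArray(array) {
  let copy = [];
  for (let i = 0; i < array.length; i++) {
    copy.push(array[i]); // one copy operation per element: N in total
  }
  return copy;
}

copyArray([1, 2, 3]); // [1, 2, 3]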

O(NlogN)

The O(NlogN) time complexity represents a scenario where the time an algorithm takes grows in proportion to N (the size of the input) multiplied by logN (the logarithm of N). This complexity is seen in algorithms that perform a logarithmic operation multiple times, once for each element in the input.

Imagine we have several sorted arrays and we want to find a specific value in each of these arrays using binary search. Here's how it might look in JavaScript (we'll use the binarySearch function from above):

function binarySearch(sortedArray, target) {
  let start = 0;
  let end = sortedArray.length - 1;

  while (start <= end) {
    let mid = Math.floor((start + end) / 2);
    if (sortedArray[mid] === target) {
      return mid; // Target found
    } else if (sortedArray[mid] < target) {
      start = mid + 1; // Search right half
    } else {
      end = mid - 1; // Search left half
    }
  }
  return -1; // Target not found
}

function searchInMultipleArrays(listOfArrays, target) {
  for (let i = 0; i < listOfArrays.length; i++) {
    if (binarySearch(listOfArrays[i], target) !== -1) {
      console.log(`Target found in array: ${i}`);
    } else {
      console.log(`Target not found in array: ${i}`);
    }
  }
}
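
Before we analyze these functions, here is a hypothetical invocation (the data is ours, chosen for illustration) showing what a run might look like:

let listOfArrays = [
  [1, 3, 5, 7],
  [2, 4, 6, 8],
  [5, 10, 15, 20]
];

searchInMultipleArrays(listOfArrays, 5);
// Target found in array: 0
// Target not found in array: 1
// Target found in array: 2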

In this code:

  • binarySearch function: This performs a binary search on a sorted array to find a target element. The binary search has a time complexity of O(logN) where N is the size of the array.
  • searchInMultipleArrays function: This function iterates over a list of arrays and performs a binary search in each one. If we let M be the number of arrays (listOfArrays.length), this iteration is performed M times.

In our example, we encounter two variables: M (the number of arrays) and N (the number of elements in each array). If we assume that the number of arrays (M) is proportional to the size of each array (N), or if we are dealing with a single dataset where these two factors are essentially the same, the overall time complexity is O(NlogN).

The key to understanding O(NlogN) complexity lies in recognizing the combination of linear and logarithmic components:

  • Linear Component (M): The outer loop in searchInMultipleArrays runs once for each array in listOfArrays. If there are M arrays, this loop runs M times.
  • Logarithmic Component (logN): Within each iteration of the loop, a binary search is performed on an array. Binary search has a O(logN) complexity for each array.

The overall time complexity of O(MlogN) arises from performing an O(logN) operation repeatedly in a linear sequence, that is, M times in total.

This raises an important question: is the correct notation O(MlogN) or O(NlogN)? The answer depends on the context of the data sets being discussed. We use different variables, M and N, to represent distinct data sets: M is the length of listOfArrays, and N is the number of elements in a subarray. This distinction will be explored further in the next assignment.

In scenarios where we deal with a single data set, the notation would be O(NlogN). An example of this is observed in sorting algorithms like QuickSort, which will be discussed in later lessons. In fact, O(NlogN) time efficiency is most commonly associated with efficient sorting algorithms such as MergeSort and QuickSort. These algorithms work by repeatedly splitting the input into smaller parts (around logN levels of splitting) while doing work proportional to N at each level.

The key takeaway here is that O(NlogN) time complexity is indicative of algorithms that combine a linear and a logarithmic component. It is often encountered in scenarios where an operation with O(logN) complexity is performed once for each of N elements.
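
To make that concrete with a single data set, consider this small sketch (ours, for illustration) that runs a binary search once for every element of the same sorted array, reusing the binarySearch function from above:

function searchAllElements(sortedArray) {
  for (let i = 0; i < sortedArray.length; i++) { // the loop runs N times
    binarySearch(sortedArray, sortedArray[i]);   // each search costs O(logN)
  }
}

// N searches at O(logN) each add up to O(NlogN) in total.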

O(N^2)

Quadratic time complexity, denoted as O(N^2), describes algorithms where the number of steps or operations required is proportional to the square of the input size (N). In practical terms, this means if the size of the input doubles, the time or number of steps required increases by a factor of four (since 2 squared equals 4).

In the literature, you may frequently encounter the quadratic notation written as N2, where the number 2 is presented as a superscript. Additionally, you may come across the notation N^2, represented with a caret sign (^) denoting exponentiation. It is important to understand that these notations are interchangeable and convey the same meaning. We will use the latter one, O(N^2), in this course.

To demonstrate quadratic complexity, let’s consider a scenario where you have two nested loops iterating over an array. This situation commonly arises in tasks like identifying all possible pairs of elements in the array or comparing each element with every other element.

Here’s a JavaScript function that illustrates this:

function findPairs(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      console.log(`Pair: ${array[i]}, ${array[j]}`);
    }
  }
}

findPairs([1, 2]); // Input length of 2, prints out 4 pairs
findPairs(['a', 'b', 'c', 'd', 'e']); // Input length of 5, prints out 25 pairs

In this function, findPairs, we have two nested for loops:

  • The outer loop runs N times, where N is the length of the array.
  • For each iteration of the outer loop, the inner loop also runs N times.

As a result, the total number of iterations becomes N * N, or N^2. This means that for an array of 10 elements, the function will perform 100 iterations (10 * 10), illustrating the quadratic growth and thereby the O(N^2) time complexity.

O(N^2) complexity is often seen in sorting algorithms like bubble sort and selection sort, which we will explore in upcoming lessons.

It's important to understand that while O(N^2) algorithms are suitable for small to medium-sized datasets, they might not be the most efficient choice for larger datasets. Typically, the PEDAC approach enables us to solve problems, such as finding all pairs of elements, with quadratic time complexity. However, be prepared to address questions about improving efficiency, which often leads to exploring more efficient algorithms with lower time complexities, as we will discuss in upcoming lessons.

O(N^3)

Cubic time complexity, denoted as O(N^3), describes algorithms where the number of steps or operations required is proportional to the cube of the input size (N). In practical terms, this means if the size of the input doubles, the time or number of steps required increases by a factor of eight (since 2 cubed equals 8).

To illustrate cubic complexity, let's extend our previous findPairs function to a findTriplets function by adding another nested loop:

function findTriplets(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      for (let k = 0; k < array.length; k++) {
        console.log(`Triplet: ${array[i]}, ${array[j]}, ${array[k]}`);
      }
    }
  }
}

findTriplets([1, 2]); // Input length of 2, prints out 8 triplets
findTriplets(['a', 'b', 'c']); // Input length of 3, prints out 27 triplets

In this findTriplets function, we have three nested loops, each running N times, where N is the array's length. The total number of iterations becomes N * N * N, or N^3.

For an array of 10 elements, the function will perform 1000 iterations (10 * 10 * 10), illustrating the cubic growth.

This example demonstrates how adding just one more nested loop to our quadratic O(N^2) algorithm results in a cubic O(N^3) time complexity.

You might have noticed a pattern: each additional nested loop that iterates over the entire input increments the exponent in the time complexity. For instance:

  • Two nested loops: O(N^2) (quadratic)
  • Three nested loops: O(N^3) (cubic)
  • Four nested loops: O(N^4) (quartic)
  • And so on...

As we add more nested loops, the exponent in the time complexity keeps climbing, making the algorithm increasingly inefficient for larger inputs. This is why algorithms with nested loops are often targets for optimization, especially when dealing with large datasets. They are sometimes seen in naive solutions to problems involving multi-dimensional data or certain graph algorithms. However, it's often possible to optimize these algorithms to achieve better time complexity.

O(2^N)

Exponential time complexity, represented as O(2^N), signifies that the execution time of an algorithm grows exponentially with the size of the input. In other words, as the input size increases, the execution time increases dramatically. For each additional element in the input, the execution time of the algorithm doubles. This exponential growth occurs because the algorithm generates an increasing number of subproblems or recursive calls with each input element.

In the literature, it is common to come across the exponential notation 2N, where the exponent N is written as a superscript. Alternatively, you may encounter the notation 2^N with a caret sign (^) denoting exponentiation. Again, we should recognize that both notations are equivalent and represent the same concept. We will adopt the latter notation, 2^N, throughout this course.

A common example of an algorithm with exponential time complexity is non-memoized recursion. In such cases, the algorithm often recalculates the same subproblems multiple times, leading to a dramatic increase in the number of operations.

For instance, consider a naive recursive implementation of the Fibonacci sequence, where each call generates two additional calls:

function fibonacci(n) {
  if (n <= 1) {
    return n;
  }
  return fibonacci(n - 1) + fibonacci(n - 2);
}

In this fibonacci function:

  • When n is 0 or 1 (the base cases), the function simply returns n.
  • For other values, it recursively calls itself twice: fibonacci(n - 1) and fibonacci(n - 2).

With each non-base case input, the number of function calls doubles, leading to exponential growth. For example, fibonacci(3) involves fibonacci(2) and fibonacci(1), and fibonacci(2) itself requires additional calls to fibonacci(1) and fibonacci(0). With this implementation, we're invoking fibonacci with an argument of 1 multiple times.
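
To get a feel for this growth, here is a hypothetical instrumented version of the function (the calls counter and fibCounted name are ours) that tallies the total number of invocations:

let calls = 0;
function fibCounted(n) {
  calls += 1; // count every invocation
  if (n <= 1) return n;
  return fibCounted(n - 1) + fibCounted(n - 2);
}

fibCounted(10);
console.log(calls); // 177

calls = 0;
fibCounted(20);
console.log(calls); // 21891

Adding just ten to the input multiplied the number of calls by more than a hundred, which is exactly the kind of growth O(2^N) describes.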

In upcoming lessons, we will discuss recursion and the time and space complexity of recursive algorithms in greater depth. For now, it's important to remember that due to their exponential growth, algorithms with this time complexity are considered highly inefficient. Therefore, alternative approaches are often explored to enhance the efficiency of these algorithms.
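
One well-known alternative is memoization: caching results so that each subproblem is calculated only once. Here is a minimal sketch (the fibonacciMemo name and cache parameter are ours, shown purely as a preview):

function fibonacciMemo(n, cache = {}) {
  if (n <= 1) return n;
  if (cache[n] !== undefined) return cache[n]; // reuse a stored result
  cache[n] = fibonacciMemo(n - 1, cache) + fibonacciMemo(n - 2, cache);
  return cache[n];
}

fibonacciMemo(40); // 102334155, with each subproblem computed just once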

O(N!)

Factorial time complexity, represented as O(N!), indicates an even more rapid growth in algorithm execution time than exponential time complexity. In this case, the execution time increases at a factorial rate relative to the size of the input.

The notation N! (read as "N factorial") refers to the product of all positive integers up to N. For example, if N is 5, then 5! (5 factorial) is calculated as 5 x 4 x 3 x 2 x 1, which equals 120. This means that each additional element in the input multiplies the total number of operations by the new input size, leading to extremely rapid growth.
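
For reference, here is a small helper (the factorial name is ours, shown for illustration) that computes N! and makes this growth easy to see:

function factorial(n) {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i; // multiply by each successive integer up to n
  }
  return result;
}

factorial(5);  // 120
factorial(10); // 3628800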

Factorial complexity is typically encountered in scenarios requiring the exploration of all possible permutations or combinations. A classic example is the pathfinding problem, such as the Traveling Salesman Problem, where the objective is to find the shortest possible route that visits a given set of cities and returns to the origin city.

For instance, if a traveler needs to determine the shortest route among N cities, the number of potential paths to assess is factorial in nature. As the number of cities (N) increases, the number of possible routes (N!) grows extremely fast, making the algorithm exceedingly slow and impractical for a large number of cities.

Both exponential and factorial time complexities exhibit rapid increases in execution time with growing input sizes. However, factorial time complexity escalates much more quickly than exponential complexity, making it even less suitable for large-scale problems.

Visualizing Time Complexities

To help visualize these complexities, we've provided a Big-O Complexity Chart below. This chart illustrates how different time complexities compare. It will give you a clearer understanding of the relative efficiency of various algorithms. The chart uses a color scale from dark red (poor time complexity) to light green (excellent time complexity).