Definition: What Is Time Complexity?
Time complexity describes how an algorithm's running time grows as the input size increases. Rather than measuring wall-clock seconds, it captures how the number of operations scales with input size, expressed in Big O notation. Time complexity helps you understand and compare algorithm efficiency.
Think of time complexity like this: if you have 10 items, how long does your algorithm take? What about 100 items? 1,000 items? Time complexity tells you how the time grows. Some algorithms take the same time regardless of input size (O(1)), while others take much longer as input grows (O(n²)).
Time complexity is written using Big O notation: O(1), O(n), O(log n), O(n²), etc. The "O" stands for "order of". Strictly speaking, Big O gives an upper bound on growth; in practice it is most often quoted for an algorithm's worst case. Understanding time complexity helps you write efficient code and choose the right algorithm for your problem.
Key Point: Time complexity measures how execution time grows with input size. It's written using Big O notation (O(1), O(n), O(log n), O(n²)). Lower time complexity means faster algorithms. Understanding time complexity helps you write efficient code.
What: Understanding Time Complexity Types
Common time complexity types:
O(1) - Constant Time
O(1) means constant time—the algorithm takes the same amount of time regardless of input size. Examples: accessing an array element by index (arr[5]), an average-case hash table lookup, appending to the end of a dynamic array (amortized). O(1) is the fastest time complexity. Execution time doesn't change whether you have 10 items or 10 million items.
Example: Getting the first element of an array always takes 1 operation, whether the array has 10 or 10,000 elements
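The same holds for hash table lookups. Here's a minimal sketch (the dictionary and its keys are illustrative):

# Python example - hash table (dict) lookup; data is illustrative
ages = {"alice": 30, "bob": 25, "carol": 41}
bob_age = ages["bob"]  # O(1) on average - located via the key's hash, not a scan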
O(n) - Linear Time
O(n) means linear time—execution time grows proportionally with input size. If input doubles, time doubles. Examples: looping through array, finding element in unsorted array, printing all elements. O(n) is efficient for single-pass algorithms. Time grows linearly: 10 items = 10 operations, 100 items = 100 operations.
Example: Finding a number in an unsorted list requires checking each item once
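A small sketch of that pattern (the function name and list are illustrative):

# Python example - linear search (illustrative)
def linear_search(items, target):
    for i, value in enumerate(items):  # visits up to n elements
        if value == target:
            return i
    return -1  # not found after checking every item: O(n)

print(linear_search([7, 3, 9, 1], 9))  # prints 2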
O(log n) - Logarithmic Time
O(log n) means logarithmic time—execution time grows slowly as input increases. Examples: binary search, balanced binary tree operations. O(log n) is very efficient. With 1 million items, binary search needs only ~20 comparisons. Time grows logarithmically: 10 items ≈ 3 operations, 100 items ≈ 7 operations, 1,000 items ≈ 10 operations (roughly log₂ n).
Example: Binary search halves the search space each time, so it needs very few comparisons
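Python's standard library already provides a binary search through the bisect module. A minimal sketch (the list is illustrative and must already be sorted):

# Python example - O(log n) lookup with the standard bisect module
import bisect

sorted_nums = [2, 5, 8, 13, 21, 34]        # must be sorted
pos = bisect.bisect_left(sorted_nums, 13)  # O(log n) comparisons
found = pos < len(sorted_nums) and sorted_nums[pos] == 13
print(pos, found)  # 3 True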
O(n²) - Quadratic Time
O(n²) means quadratic time—execution time grows quickly with input size. If input doubles, time quadruples. Examples: nested loops, bubble sort, selection sort. O(n²) is inefficient for large inputs. Time grows quadratically: 10 items = 100 operations, 100 items = 10,000 operations. Avoid O(n²) when possible.
Example: Comparing every element with every other element requires n × n operations
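A sketch of that pairwise pattern, checking a list for duplicates (names and data are illustrative):

# Python example - O(n²) duplicate check via pairwise comparison
def has_duplicate(items):
    n = len(items)
    for i in range(n):             # n iterations
        for j in range(i + 1, n):  # up to n iterations each
            if items[i] == items[j]:
                return True
    return False  # about n²/2 comparisons in the worst case

print(has_duplicate([4, 1, 7, 1]))  # True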
Time Complexity Comparison Chart
Chart showing relative execution time for different time complexities (for n=100)
Important: Understanding O(1), O(n), O(log n), and O(n²) helps you choose efficient algorithms. O(1) is fastest, O(log n) is very fast, O(n) is good, and O(n²) is slow. Always aim for lower time complexity when possible.
When: When to Consider Time Complexity
Consider time complexity in these situations:
When Writing Algorithms
When writing algorithms, consider time complexity to ensure efficiency. Choose algorithms with lower time complexity (O(1), O(log n), O(n)) over higher complexity (O(n²), O(n³)). Efficient algorithms run faster and use fewer resources, especially with large inputs.
When Comparing Solutions
When comparing multiple solutions to the same problem, use time complexity to choose the best one. A solution with O(n) is better than O(n²) for large inputs. Time complexity helps you make informed decisions about which algorithm to use.
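As an illustration (the "pair sum" problem and function names here are hypothetical, not taken from the examples above), two correct solutions to the same problem can differ sharply in complexity:

# Python example - comparing two solutions to "find a pair that sums to target"
def pair_sum_slow(nums, target):  # O(n²): compare every pair
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (nums[i], nums[j])
    return None

def pair_sum_fast(nums, target):  # O(n): one pass with a hash set
    seen = set()
    for x in nums:
        if target - x in seen:    # O(1) average lookup
            return (target - x, x)
        seen.add(x)
    return None

print(pair_sum_fast([3, 8, 5, 2], 10))  # (8, 2)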
When Optimizing Code
When optimizing code for performance, analyze time complexity to identify bottlenecks. If your code has O(n²) operations, optimize them to O(n) or O(log n) if possible. Time complexity analysis guides optimization efforts.
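One common bottleneck of this kind is repeated membership testing against a list. A sketch of the fix (the data is illustrative):

# Python example - turning a quadratic filter into O(n)
valid_ids = [1, 4, 9, 16, 25]
records = [9, 2, 16, 7]

# Slow: "in list" scans the whole list, so each check is O(n)
# and the filter is O(n × m) - quadratic when both lists grow together
kept_slow = [r for r in records if r in valid_ids]

# Fast: build a set once, then each "in set" check is O(1) on average
valid_set = set(valid_ids)
kept_fast = [r for r in records if r in valid_set]

print(kept_slow == kept_fast)  # True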
When Preparing for Interviews
When preparing for coding interviews, understanding time complexity is essential. Interviewers often ask about time complexity of your solutions. Being able to analyze and explain time complexity demonstrates strong problem-solving skills.
Common Scenario: Consider time complexity when writing algorithms, comparing solutions, optimizing code, and preparing for interviews. Understanding time complexity helps you write efficient, scalable code.
How To: Calculate Time Complexity with Examples
Learn to calculate time complexity with these examples:
Example 1: O(1) - Constant Time
Accessing an array element by index:
# Python example
arr = [10, 20, 30, 40, 50]
first_element = arr[0] # O(1) - constant time
# Takes the same time whether the array has 5 or 5 million elements
O(1) Time Complexity Graph: time stays constant regardless of input size
Example 2: O(n) - Linear Time
Looping through an array:
# Python example
arr = [10, 20, 30, 40, 50]
for num in arr:  # O(n) - linear time
    print(num)
# Time grows proportionally: 5 elements = 5 operations, 100 elements = 100 operations
O(n) Time Complexity Graph: time grows linearly with input size
Example 3: O(log n) - Logarithmic Time
Binary search:
# Python example - Binary Search
def binary_search(arr, target):
    # Note: arr must be sorted for binary search to work
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
# O(log n) - halves search space each time
# 1 million items needs only ~20 comparisons
O(log n) Time Complexity Graph: time grows slowly (logarithmically) with input size
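To try the binary_search function defined above, a short usage sketch (the sorted list here is illustrative):

# Python example - using binary_search (list must be sorted)
nums = [2, 5, 8, 13, 21, 34]
print(binary_search(nums, 21))  # 4
print(binary_search(nums, 3))   # -1 (not found)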
Example 4: O(n²) - Quadratic Time
Nested loops:
# Python example - Nested loops
arr = [1, 2, 3, 4, 5]
for i in arr:        # Outer loop: n iterations
    for j in arr:    # Inner loop: n iterations
        print(i, j)  # Total: n × n = n² operations
# O(n²) - quadratic time
# 5 elements = 25 operations, 100 elements = 10,000 operations
O(n²) Time Complexity Graph: time grows quickly (quadratically) with input size
Time Complexity Growth Comparison
Table showing how operation counts grow with input size for different time complexities:

Input size (n)   O(1)   O(log n)   O(n)     O(n²)
10               1      ~3         10       100
100              1      ~7         100      10,000
1,000            1      ~10        1,000    1,000,000
Best Practice: Calculate time complexity by counting how many operations run as a function of n, then keep only the fastest-growing term and drop constant factors: a single step is O(1), one pass is O(n), repeated halving is O(log n), and nested passes are O(n²). Use the charts and examples above to visualize how time complexity grows with input size.
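A small sketch of that counting approach, instrumenting loops with a counter (illustrative code, not a formal analysis):

# Python example - counting operations empirically (illustrative)
def count_linear(n):
    ops = 0
    for _ in range(n):      # one pass: O(n)
        ops += 1
    return ops

def count_quadratic(n):
    ops = 0
    for _ in range(n):      # nested passes: O(n²)
        for _ in range(n):
            ops += 1
    return ops

for n in (10, 100):
    print(n, count_linear(n), count_quadratic(n))
# 10 10 100
# 100 100 10000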
Why: Why Time Complexity Matters
Time complexity matters for these reasons:
Performance
Lower time complexity means faster algorithms. O(1) and O(log n) algorithms run much faster than O(n²) algorithms, especially with large inputs. Understanding time complexity helps you write performant code that scales well.
Scalability
Time complexity determines how well your code scales. O(n) code handles 10x more input in 10x more time, but O(n²) code needs 100x more time. Scalable code uses efficient algorithms with lower time complexity.
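A rough timing sketch that makes this scaling visible (absolute timings vary by machine; only the growth ratio matters):

# Python example - rough timing sketch for O(n²) scaling (results vary by machine)
import time

def quadratic_work(n):
    total = 0
    for i in range(n):      # O(n²): n × n inner steps
        for j in range(n):
            total += 1
    return total

for n in (500, 5_000):
    start = time.perf_counter()
    quadratic_work(n)
    print(f"n={n}: {time.perf_counter() - start:.3f}s")
# Expect roughly 100x more time for 10x more input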
Resource Usage
Lower time complexity uses fewer CPU cycles and less energy. Efficient algorithms reduce server costs and improve user experience. Time complexity directly impacts resource consumption and costs.
Problem Solving
Understanding time complexity improves problem-solving skills. It helps you choose the right algorithm for each problem and optimize solutions. Time complexity analysis is essential for technical interviews and real-world programming.
Important: Time complexity matters because it affects performance, scalability, resource usage, and problem-solving. Lower time complexity means faster, more efficient code. Always consider time complexity when writing algorithms.
Frequently Asked Questions
What is time complexity?
Time complexity describes how an algorithm's running time grows as the input size increases, expressed in Big O notation (O(1), O(n), O(log n), O(n²)). O(1) is constant time (fastest), O(log n) is logarithmic time, O(n) is linear time, and O(n²) is quadratic time (the slowest of these four). Time complexity helps compare algorithm efficiency.
What is Big O notation?
Big O notation describes an upper bound on how an algorithm's execution time grows with input size; it is most often quoted for the worst case. O(1) means constant time (same time regardless of input), O(log n) means logarithmic time (time grows slowly), O(n) means linear time (time grows proportionally with input), and O(n²) means quadratic time (time grows quickly).
What is O(1) time complexity?
O(1) is constant time complexity—the algorithm takes the same amount of time regardless of input size. Examples: accessing an array element by index, an average-case hash table lookup, appending to the end of a dynamic array (amortized). O(1) is the fastest time complexity. The execution time doesn't change whether you have 10 items or 10 million items.
What is O(n) time complexity?
O(n) is linear time complexity—execution time grows proportionally with input size. If input doubles, time doubles. Examples: looping through array, finding element in unsorted array, printing all elements. O(n) is efficient for single-pass algorithms. Time grows linearly: 10 items = 10 operations, 100 items = 100 operations.
What is O(log n) time complexity?
O(log n) is logarithmic time complexity—execution time grows slowly as input increases. Examples: binary search, balanced binary tree operations. O(log n) is very efficient. With 1 million items, binary search needs only ~20 comparisons. Time grows logarithmically: 10 items = 3 operations, 100 items = 7 operations, 1000 items = 10 operations.