Array vs Linked List — Differences Explained Simply With Examples
Arrays and linked lists are the two most fundamental data structures. Arrays give you instant index access but slow insertions in the middle; linked lists give you fast insertions but no random access. Knowing when to use each is a core developer skill and the source of many interview questions. This guide explains the memory model, gives a full operation complexity comparison, shows real code examples in JavaScript, and offers practical rules for when to choose each structure.
- O(1): array index access, computed directly from the memory address
- O(n): array insert/delete in the middle, which must shift later elements
- O(1): linked list insert/delete with a known node reference
- O(n): linked list search, which must traverse from the head
Memory Layout — The Fundamental Difference
The performance characteristics of arrays and linked lists all flow from one fundamental difference: how they store data in memory. An array allocates a single contiguous block of memory. A linked list allocates individual nodes anywhere in memory and connects them with pointers.
Why contiguous memory matters
An array is a contiguous block of memory: elements are stored side by side, so the address of arr[5] is just memory_start + (5 × element_size), computed in one instruction. A linked list scatters its nodes anywhere in memory and connects them with pointers. This single difference explains every performance trade-off between the two structures: contiguous means fast index access and great cache performance; scattered means flexible size and fast insertion.
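The address arithmetic can be sketched in JavaScript with a raw ArrayBuffer standing in for a C-style int array (the names and sizes here are illustrative):

```javascript
// One contiguous block of 5 × 4-byte integers, like int arr[5] in C.
const ELEMENT_SIZE = 4;
const buf = new ArrayBuffer(5 * ELEMENT_SIZE);
const view = new DataView(buf);
for (let i = 0; i < 5; i++) view.setInt32(i * ELEMENT_SIZE, (i + 1) * 10);

// "arr[3]" is just base + 3 × element_size: one multiply, one add.
const fourth = view.getInt32(3 * ELEMENT_SIZE);
console.log(fourth); // → 40
```

A real runtime does the same computation in hardware; no traversal is ever involved.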
Full Operation Complexity Comparison
| Item | Array | Linked List |
|---|---|---|
| Memory layout | Contiguous — one block | Scattered — each node anywhere in memory |
| Access by index | O(1) — direct: arr[5] = one computation | O(n) — traverse from head to position n |
| Search by value | O(n) unsorted; O(log n) sorted (binary search) | O(n) always — must traverse sequentially |
| Insert at front | O(n) — shift all elements one position right | O(1) — create node, update head pointer |
| Insert at end | O(1) amortized — dynamic arrays resize when needed | O(1) with tail pointer; O(n) singly without it |
| Insert at middle | O(n) — must shift all elements after insertion point | O(1) if you have the node reference; O(n) to find it |
| Delete at front | O(n) — shift all elements left | O(1) — update head pointer to head.next |
| Delete at middle | O(n) — must shift elements after deletion | O(1) doubly-linked if you have node; O(n) to find |
| Memory per element | Just the data (e.g., 4 bytes for int) | Data + 1 pointer (singly) or 2 pointers (doubly) |
| CPU cache performance | Excellent — spatial locality means cache hits | Poor — random memory access causes cache misses |
| Size flexibility | Expensive realloc for fixed arrays; O(1) amortized for dynamic | Free — just add or remove nodes, no reallocation |
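The front-insert rows of the table can be made concrete with a toy move counter (an illustrative sketch, not a benchmark; the helper names are made up):

```javascript
// Insert x at the front of a plain array: every existing element moves.
function arrayInsertFront(arr, x) {
  let moves = 0;
  for (let i = arr.length; i > 0; i--) { // shift everything one slot right
    arr[i] = arr[i - 1];
    moves++;
  }
  arr[0] = x;
  return moves; // n moves, O(n)
}

// Insert x at the front of a linked list: one node, one pointer write.
function listInsertFront(head, x) {
  return { val: x, next: head }; // new head, O(1) regardless of length
}

const a = [10, 20, 30, 40, 50];
const movesMade = arrayInsertFront(a, 5); // 5 moves; a is now [5, 10, 20, 30, 40, 50]
```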
When to Use Each Structure
Use Array When...
- You access elements by index frequently.
- You iterate through elements sequentially (cache friendly).
- You need CPU cache performance for compute-heavy operations.
- Elements are fixed-size and the count is known or bounded.
- You use binary search on sorted data.
Use Linked List When...
- You insert or delete frequently at the beginning or middle.
- You don't need random access, only sequential traversal.
- You're implementing stacks, queues, or deques at a lower level.
- You're building an LRU cache (doubly linked list + hash map combo).
- The size changes dramatically and unpredictably.
Dynamic Arrays in Practice
Modern "arrays" in most languages are dynamic arrays: JavaScript's Array, Python's list, Java's ArrayList, C++'s vector. They are backed by resizable contiguous memory: appends are O(1) amortized (capacity doubles when full), while inserts and deletes in the middle are O(n). Use these as your default; reach for linked lists only when you have a specific need.
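The amortized O(1) append comes from capacity doubling. A simplified model (illustrative only; real engines use more sophisticated growth policies) looks like this:

```javascript
// Simplified model of a dynamic array that doubles capacity when full.
class DynArray {
  constructor() {
    this.capacity = 4;
    this.length = 0;
    this.data = new Array(this.capacity);
  }
  push(x) {
    if (this.length === this.capacity) {
      this.capacity *= 2; // full: allocate a 2x block and copy everything (O(n))
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) bigger[i] = this.data[i];
      this.data = bigger;
    }
    this.data[this.length++] = x; // common case: just write one slot (O(1))
  }
}

const d = new DynArray();
for (let i = 0; i < 10; i++) d.push(i);
console.log(d.length, d.capacity); // → 10 16 (resized at lengths 4 and 8)
```

The occasional O(n) copy is paid for by many cheap O(1) writes, which is exactly what "amortized" means.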
Linked List in Practice
Linked lists are rarely used directly in application code; they usually appear behind higher-level abstractions such as Java's LinkedList or Python's deque (doubly linked). The LRU cache (doubly linked list + HashMap) is the most common real-world use. The concepts still matter for system design interviews and for understanding how deques work internally.
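In JavaScript specifically, a common shortcut is to build an LRU cache on a plain Map, since Map iterates in insertion order; this sketch (class and method names made up for illustration) trades the explicit linked list for that built-in ordering:

```javascript
// LRU cache using Map's insertion-order guarantee instead of a hand-rolled list.
class MapLRU {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return -1;
    const val = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, val); // re-insert so this key becomes the most recent
    return val;
  }
  put(key, val) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, val);
    if (this.map.size > this.capacity) {
      this.map.delete(this.map.keys().next().value); // evict least recently used
    }
  }
}

const lru = new MapLRU(2);
lru.put('a', 1);
lru.put('b', 2);
lru.get('a');    // touch 'a', so 'b' is now least recent
lru.put('c', 3); // over capacity: evicts 'b'
```

Under the hood the Map is doing the same bookkeeping a doubly linked list would, which is why the explicit list version remains the canonical interview answer.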
Code Implementation Comparison
const arr = [10, 20, 30, 40, 50];
// O(1) — random access (computed directly from memory address)
console.log(arr[2]); // → 30
// O(1) amortized — append at end (dynamic resize when capacity reached)
arr.push(60); // [10, 20, 30, 40, 50, 60]
// O(n) — insert at front (must shift all 6 elements right)
arr.unshift(5); // [5, 10, 20, 30, 40, 50, 60]
// O(n) — insert at middle (must shift elements after position 2)
arr.splice(2, 0, 15); // [5, 10, 15, 20, 30, 40, 50, 60]
// O(log n) — binary search on sorted array (use array's sorted property!)
function binarySearch(arr, target) {
  let left = 0, right = arr.length - 1;
  while (left <= right) {
    const mid = (left + right) >>> 1; // unsigned right shift = fast floor division
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}
// O(n) — linear search (indexOf checks each element in order)
const idx = arr.indexOf(30); // → 4

class Node {
  constructor(val) {
    this.val = val;
    this.next = null;
    this.prev = null; // for doubly-linked list
  }
}
class DoublyLinkedList {
  constructor() {
    this.head = null;
    this.tail = null;
    this.size = 0;
  }
  // O(1) — insert at head (no shifting!)
  prepend(val) {
    const node = new Node(val);
    if (!this.head) {
      this.head = this.tail = node;
    } else {
      node.next = this.head;
      this.head.prev = node;
      this.head = node;
    }
    this.size++;
  }
  // O(1) — insert at tail (with tail pointer)
  append(val) {
    const node = new Node(val);
    if (!this.tail) {
      this.head = this.tail = node;
    } else {
      node.prev = this.tail;
      this.tail.next = node;
      this.tail = node;
    }
    this.size++;
  }
  // O(1) — remove a specific node (given the node reference)
  removeNode(node) {
    if (node.prev) node.prev.next = node.next;
    else this.head = node.next; // removing head
    if (node.next) node.next.prev = node.prev;
    else this.tail = node.prev; // removing tail
    this.size--;
  }
  // O(n) — access by index (no direct computation possible)
  get(index) {
    let curr = this.head;
    for (let i = 0; i < index; i++) {
      curr = curr?.next;
    }
    return curr?.val; // undefined if index out of bounds
  }
}
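To see why get(index) is O(n), here is a standalone chain built by hand, counting one pointer hop per position (the node helper is illustrative):

```javascript
// Build 10 -> 20 -> 30 -> 40 as plain objects linked by next pointers.
const makeNode = (val, next = null) => ({ val, next });
const chain = makeNode(10, makeNode(20, makeNode(30, makeNode(40))));

let hops = 0;
let curr = chain;
for (let i = 0; i < 3; i++) { // walk to index 3; no shortcut exists
  curr = curr.next;
  hops++;
}
console.log(curr.val, hops); // → 40 3
```

An array reaches index 3 with a single address computation; the list pays one pointer dereference per position.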
// Real use case: LRU Cache using DoublyLinkedList + HashMap
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // HashMap: key → node reference (O(1) access)
    this.list = new DoublyLinkedList(); // most recent at head, LRU at tail
  }
  get(key) {
    if (!this.map.has(key)) return -1;
    const node = this.map.get(key);
    this.list.removeNode(node); // O(1) — we have the node reference
    this.list.prepend(node.val); // move to front (prepend allocates a fresh node)
    this.map.set(key, this.list.head); // re-point the map at that new head node
    return node.val;
  }
  // put() is omitted for brevity; a full version would prepend the new node,
  // store it in the map, and evict this.list.tail when over capacity.
}

Cache Performance — Why Arrays Win in Practice
In real-world benchmarks, arrays often outperform linked lists even for operations with the same Big-O complexity. The reason is CPU cache behavior. Modern CPUs fetch memory in 64-byte cache lines: when you access arr[0], the CPU also loads arr[1] through arr[15] (for 4-byte ints) into the cache, so accessing arr[1] next is essentially free. For linked lists, each node sits at an arbitrary memory address, so every pointer follow is likely a cache miss, and a miss that goes all the way to main memory is on the order of 100× slower than an L1 cache hit.
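A rough, machine-dependent way to observe this (treat the timings as illustrative, not a rigorous benchmark; nodes allocated back-to-back may still land near each other, muting the effect):

```javascript
const N = 1_000_000;

// Contiguous storage: 8 MB in one block.
const nums = new Float64Array(N);
for (let i = 0; i < N; i++) nums[i] = i;

// Node-per-element storage: one heap object per value.
let head = null;
for (let i = N - 1; i >= 0; i--) head = { val: i, next: head };

let t = Date.now();
let arraySum = 0;
for (let i = 0; i < N; i++) arraySum += nums[i];
console.log('array sum took', Date.now() - t, 'ms');

t = Date.now();
let listSum = 0;
for (let n = head; n !== null; n = n.next) listSum += n.val;
console.log('list sum took', Date.now() - t, 'ms');
// Both sums equal N(N-1)/2 = 499999500000
```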
The practical takeaway: choose an array for iteration-heavy workloads; thanks to cache behavior it often wins in practice even when the Big-O analysis is a tie.