In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.[3] In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation; a famous example of such a difference is the remainder term in the prime number theorem. Big O notation is also used in many other fields to provide similar estimates.
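As a hedged illustration of how run time can grow with input size (not an example from the source itself), the sketch below contrasts two ways of searching a list: a linear scan whose work grows as O(n), and binary search on sorted data whose work grows as O(log n). Function names and the sample data are illustrative assumptions.

```python
from typing import Optional, Sequence


def linear_search(items: Sequence[int], target: int) -> Optional[int]:
    """O(n): in the worst case every element is examined once."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return None


def binary_search(sorted_items: Sequence[int], target: int) -> Optional[int]:
    """O(log n): each comparison halves the remaining search range."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None


if __name__ == "__main__":
    data = list(range(1_000_000))
    # Both calls find the same index; only the number of steps grows differently
    # as the input gets larger (about n steps vs. about log2(n) steps).
    print(linear_search(data, 999_999), binary_search(data, 999_999))
```

Doubling the input roughly doubles the work of the linear scan but adds only one extra comparison to the binary search, which is the kind of growth-rate distinction big O notation captures.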

| Data structure     | Insert (average) | Delete (average) | Search (average) | Insert (worst) | Delete (worst) | Search (worst) |
|--------------------|------------------|------------------|------------------|----------------|----------------|----------------|
| Array/Stack/Queue  | O(1)             | O(1)             | O(n)             | O(1)           | O(1)           | O(n)           |
| Linked list        | O(1)             | O(1)             | O(n)             | O(1)           | O(1)           | O(n)           |
| Doubly linked list | O(1)             | O(1)             | O(n)             | O(1)           | O(1)           | O(n)           |
| Hash table         | O(1)             | O(1)             | O(1)             | O(n)           | O(n)           | O(n)           |
| Binary search tree | O(log n)         | O(log n)         | O(log n)         | O(n)           | O(n)           | O(n)           |
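The gap between the average and worst columns in the last row can be seen with a minimal, unbalanced binary search tree. The sketch below (an illustrative assumption, not code from the source) counts comparisons during a search: with keys inserted in random order the tree stays roughly balanced and a lookup takes about log2(n) steps, while inserting the same keys in sorted order degenerates the tree into a chain and the same lookup takes about n steps.

```python
import random


class Node:
    def __init__(self, key: int) -> None:
        self.key = key
        self.left = None
        self.right = None


def insert(root: "Node | None", key: int) -> "Node":
    """Standard (unbalanced) BST insert; returns the possibly new root."""
    if root is None:
        return Node(key)
    node = root
    while True:
        if key < node.key:
            if node.left is None:
                node.left = Node(key)
                return root
            node = node.left
        elif key > node.key:
            if node.right is None:
                node.right = Node(key)
                return root
            node = node.right
        else:
            return root  # duplicate key: ignore


def search(root: "Node | None", key: int) -> tuple[bool, int]:
    """Returns (found, number of nodes visited)."""
    steps = 0
    node = root
    while node is not None:
        steps += 1
        if key == node.key:
            return True, steps
        node = node.left if key < node.key else node.right
    return False, steps


if __name__ == "__main__":
    keys = list(range(1, 1024))

    random.shuffle(keys)
    balanced = None
    for k in keys:                    # random insertion order: average case
        balanced = insert(balanced, k)

    degenerate = None
    for k in sorted(keys):            # sorted insertion order: worst case
        degenerate = insert(degenerate, k)

    print(search(balanced, 1023))     # roughly log2(1023) ~ 10-20 steps
    print(search(degenerate, 1023))   # 1023 steps: the tree is a linked list
```

Self-balancing variants such as AVL or red-black trees keep the worst-case search at O(log n), which is why the O(n) worst case above applies only to the plain, unbalanced binary search tree.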