csgeek
O Notation, more commonly known as "Big O" in computer science, is used to describe the asymptotic complexity (i.e., efficiency) of a function. Big O notation classifies algorithms by how their running time grows as the input size n grows.

-- Common O notation --

O(1) - constant
O(log n) - logarithmic
O(n) - linear
O(n log n) - quasi-linear
O(n^2) - quadratic
O(n^3) - cubic
O(a^n) - exponential (for fixed a > 1)

So, when evaluating the efficiency of an algorithm, which Big O class is more efficient? First, you need to consider the following:

1. Best case - how the algorithm performs on the best possible input
2. Worst case - how the algorithm performs on the worst possible input
3. Average case - how the algorithm performs on most inputs

-- Example --

You are given two functions: one has an average-case running time of O(n^2), and the other has an average-case running time of O(n log n). In general, which would you choose? You would most likely choose the algorithm with an efficiency of O(n log n). For a large enough input size, an O(n log n) algorithm will run faster than an O(n^2) one.

Example source: http://www.sparknotes.com/cs/searching/efficiency/problems_2.html
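To make the example concrete, here is a minimal sketch (function names are my own, not from the source) that counts the comparisons performed by insertion sort, a classic O(n^2) average/worst-case algorithm, and merge sort, which is O(n log n) in every case. Counting comparisons instead of wall-clock time makes the growth rates easy to see and also illustrates best vs. worst case: insertion sort does only about n comparisons on already-sorted input but about n^2/2 on reversed input.

```python
def insertion_sort_steps(items):
    """Insertion sort on a copy of items; returns the number of comparisons.

    Worst case O(n^2) (reversed input), best case O(n) (sorted input).
    """
    a = list(items)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            steps += 1               # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]      # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return steps


def merge_sort_steps(items):
    """Merge sort; returns (sorted_list, comparison_count).

    O(n log n) comparisons regardless of input order.
    """
    a = list(items)
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, s1 = merge_sort_steps(a[:mid])
    right, s2 = merge_sort_steps(a[mid:])
    merged, steps = [], s1 + s2
    i = j = 0
    while i < len(left) and j < len(right):
        steps += 1                   # one comparison during the merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, steps


if __name__ == "__main__":
    reversed_input = list(range(1000, 0, -1))   # worst case for insertion sort
    print("insertion (reversed):", insertion_sort_steps(reversed_input))
    print("insertion (sorted):  ", insertion_sort_steps(range(1000)))
    print("merge sort (reversed):", merge_sort_steps(reversed_input)[1])
```

On 1,000 reversed elements, insertion sort performs 499,500 comparisons (n(n-1)/2), while merge sort needs only on the order of 10,000, which is exactly the O(n^2) vs O(n log n) gap the example describes.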