Saturday, January 1, 2011

[ Data Structures with Java ] Section 4.3 : Analysis of Algorithms

Preface : 
A programmer judges code that executes an algorithm by its correctness, its ease of use, and its efficiency; in simple terms, "Does it work?", "Is it friendly?", "Is it fast?". Design and testing of code are the key factors that contribute to its correctness and ease of use. The efficiency is more difficult to pin down because the algorithm can be judged by different performance criteria. In this section, we describe system criteria, memory utilization criteria, and the internal complexity of the algorithm as three different ways to judge performance. The first two criteria depend on external resources and are more properly covered in a course on operating systems. We will focus on complexity criteria that depend on the design of the algorithm. We are particularly interested in algorithms that apply to an array or a collection of data in which the underlying storage structure might have a large number of elements. For these algorithms, we develop measures of efficiency that depend on n, the number of items in the collection. 

Algorithm Performance: Running Time Analysis 
For the study of data structures, we want performance criteria for algorithms that are based on their efficiency in accessing large collections of data. Traditionally, computer scientists use criteria that measure the amount of computation involved in executing the algorithm. The criteria often include the number of comparison tests and the number of assignment statements used by the algorithm. These types of measures are independent of any particular computer system and depend solely on the design and implementation of the algorithm. The criteria describe the computational complexity of an algorithm. When applied to a data structure, computational complexity measures the computational efficiency of the algorithm relative to n, the number of data items. The computational complexity of an algorithm indicates how fast it performs, so the term running time is often used. 
To analyze the running time of a data structure algorithm, we need to identify the key operations of the algorithm and determine how frequently they execute relative to the size of the data set. As a by-product of the analysis, we define a function T(n) that counts the frequency of the key operations in terms of n. We use the function to express the running time of the algorithm. In the next section, we develop procedures to estimate T(n) that result in a Big-O measure of the running time for the algorithm. Let us see how we can carry out the analysis for a variety of algorithms, including finding the minimum element in an array, the selection sort, and the search algorithms. 

Running Time: min() Method 
Locating the minimum element in an unordered array is a simple algorithm whose primary operation involves comparing elements. For an array with n elements, the algorithm takes the first element as the implied minimum and performs n-1 comparisons with the other elements in the array. The function T(n) is a count of the number of comparisons : 
T(n) = n - 1 
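
To make the count concrete, the following is a minimal sketch of such a method (the name min() and the int[] parameter are our own illustration, not code from the text). The loop executes n-1 times and performs one comparison per iteration:

  // Return the index of the minimum element of arr (assumed non-empty).
  public static int min(int[] arr) {
      int minIndex = 0;                      // the first element is the implied minimum
      for (int i = 1; i < arr.length; i++)   // n-1 iterations
          if (arr[i] < arr[minIndex])        // one comparison per iteration
              minIndex = i;
      return minIndex;
  }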

Running Time: Selection Sort 
Trying to find a function T(n) that measures the running time of the selection sort is more involved because the algorithm requires a series of passes, with comparisons performed on each pass. For an array with n elements, the selection sort executes n-1 passes. Each pass locates the index of the minimum value in the unsorted sublist [pass,n). Let us use our understanding of min() to break out the number of comparisons for each pass. 
Pass 0   : scan the sublist [0,n)     n-1 comparisons
Pass 1   : scan the sublist [1,n)     n-2 comparisons
...
Pass n-2 : scan the sublist [n-2,n)   1 comparison

The total number of comparisons in the selection sort is the sum of the comparisons for passes from 0 to n-2. We record this sum as the value of the function T(n) : 
T(n) = (n-1) + (n-2) + ... + 3 + 2 + 1 

From mathematics, we recognize T(n) as the arithmetic series for terms from 1 to n-1. The series can be evaluated in terms of n : 
T(n) = 1 + 2 + ... + (n-1) = n*(n-1)/2 = n^2/2 - n/2
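
The pass structure is easiest to see in code. The following is a minimal sketch of the selection sort (the names selectionSort and arr are ours); the inner loop on pass number pass performs n-1-pass comparisons, which is exactly the series above:

  public static void selectionSort(int[] arr) {
      int n = arr.length;
      for (int pass = 0; pass < n - 1; pass++) {   // n-1 passes
          int minIndex = pass;                     // minimum of the sublist [pass, n)
          for (int i = pass + 1; i < n; i++)       // n-1-pass comparisons
              if (arr[i] < arr[minIndex])
                  minIndex = i;
          int temp = arr[pass];                    // swap the minimum into position pass
          arr[pass] = arr[minIndex];
          arr[minIndex] = temp;
      }
  }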

Running Time: Sequential Search 
The efficiency of some algorithms depends on the initial ordering of the data. This gives rise to the notion of "best-case", "worst-case" and "average-case" performance of the algorithm. The sequential search exhibits all three cases. The best case occurs when the target is the first element in the sublist. In this situation, the search requires only one comparison, which means T(n) = 1. 
The worst case occurs when the algorithm either locates the target as the last element of the sublist or finds no match. In this situation, we have T(n) = n. For the average case, let us look at all possible situations. Finding a match with the first element requires one comparison (iteration); finding a match at the second element requires two comparisons, and so forth. The total number of comparisons for all of the different possibilities is : 
1 + 2 + 3 + ... + (n-1) + n = n*(n+1)/2 

The value T(n) represents the average or expected number of comparisons for the possible cases : 
T(n) = n*(n+1) / (2*n) = (n+1)/2 
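
A minimal sketch of the sequential search (the name seqSearch is ours, not from the text) shows where each case arises:

  // Return the index of target in arr, or -1 if there is no match.
  public static int seqSearch(int[] arr, int target) {
      for (int i = 0; i < arr.length; i++)
          if (arr[i] == target)   // a match at index i costs i+1 comparisons
              return i;           // best case: i = 0, one comparison
      return -1;                  // worst case: n comparisons (match at the end or no match)
  }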

Running Time: Binary Search 
The binary search is an algorithm that applies to ordered lists. On each iteration, it either finds a match and returns or searches a sublist with approximately one-half of the elements. In effect, it discards one half of the elements on each iteration, which significantly cuts down on the number of comparisons. The binary search likewise has separate "best-case", "worst-case" and "average-case" performance. In the best case, the target is at the midpoint of the list and the search requires only one comparison (T(n) = 1). 
The analysis of the worst case requires some intuition. When an iteration fails to find a match, the length of the sublist is reduced by a factor of 2. The sizes of the sublists are successively n, n/2, n/4, and so forth. The partition process terminates when the size of the sublist is less than 2. The following sequence depicts the reduced sublist sizes : 
n, n/2, n/2^2, n/2^3, ..., n/2^m    where n/2^m < 2

A little mathematical analysis yields the value of m, which we use to define T(n) for the worst case, in which the target is not in the list. Start with m = 1 and find the first m such that n/2^m < 2. Since m is the first such value, it must be the case that n/2^(m-1) >= 2. 
Combining these two inequalities, we obtain : 
2^m <= n < 2^(m+1)

The logarithm base 2, log2(x), is an increasing function of x. If we take the logarithm of the three terms in the inequality, the resulting values must remain in the same relative order : 
m <= log2(n) < m+1

The value log2(n) is a real number that lies in the interval between the integers m and m+1 but cannot equal m+1. For instance, if n = 25, log2(n) = 4.64385... The value for m is the whole-number part 4, denoted by int(log2(25)). In general, 
m = int(log2(n))

The function T(n) is the number of iterations m : 
T(n) = m = int(log2(n))

A sophisticated analysis shows that the average number of comparisons for a successful binary search is only slightly less than the number in the worst case. 
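
The analysis corresponds to a loop in which each iteration either returns or halves the search interval. The following is a minimal sketch of the binary search on a sorted array (the names binSearch, first, and last are ours):

  // Return the index of target in the sorted array arr, or -1 if there is no match.
  public static int binSearch(int[] arr, int target) {
      int first = 0, last = arr.length;          // the current sublist is [first, last)
      while (first < last) {
          int mid = first + (last - first) / 2;
          if (arr[mid] == target)
              return mid;                        // best case: target at the midpoint
          else if (target < arr[mid])
              last = mid;                        // discard the upper half
          else
              first = mid + 1;                   // discard the lower half
      }
      return -1;                                 // worst case: about int(log2(n)) iterations
  }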

Big-O Notation : 
In analyzing the running time of an algorithm, we define the function T(n) that identifies the activity that most affects overall performance. For instance, performing comparisons is the primary activity in searching and sorting algorithms as well as in locating the minimum value in an array. The function T(n) is a count of the number of times the algorithm performs the primary activity. 
In the previous section, we evaluated T(n) for different algorithms. The following is a sample of the results : 
min() :                        T(n) = n - 1
Selection sort :               T(n) = n^2/2 - n/2
Sequential search (worst) :    T(n) = n
Sequential search (average) :  T(n) = (n+1)/2
Binary search (worst) :        T(n) = int(log2(n))

The function T(n) is a measure of algorithm complexity that depends on the size of the list. It seeks to give an exact count or the average count of the number of comparisons for a list with precisely n elements. The function takes on more significance if we observe the change in T(n) for progressively larger values of n. The selection sort illustrates the point. Let us look at T(n) = n^2/2 - n/2 for n = 100, 1000 and 10,000 : 
n = 100 :     T(n) = 100^2/2 - 100^1/2       = 5,000 - 50         = 4,950
n = 1,000 :   T(n) = 1,000^2/2 - 1,000/2     = 500,000 - 500      = 499,500
n = 10,000 :  T(n) = 10,000^2/2 - 10,000/2   = 50,000,000 - 5,000 = 49,995,000

As n gets larger, the value of T(n) is most influenced by the operation of squaring n. We say that n^2 is the dominant term. It provides an approximate measure of T(n) and a measure of change in T(n) relative to n. To get an approximate measure of T(n), we define an expression that involves the dominant term. The expression uses the notation O() and is called the Big-O measure for the algorithm. For the selection sort, the dominant term in T(n) is n^2 and the Big-O measure is O(n^2). We say that the measure of computational complexity for the selection sort is O(n^2). Equivalently, the selection sort has running time O(n^2). 
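
A short calculation shows why discarding the lesser terms is justified. For every n >= 1,

T(n) = n^2/2 - n/2 <= n^2/2 <= n^2

so T(n) never exceeds a constant multiple of n^2, while the ratio T(n)/n^2 = 1/2 - 1/(2n) approaches the constant 1/2 as n grows. The Big-O measure O(n^2) records exactly this bound.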

Common Orders of Magnitude : 
In the previous section, we identified algorithms whose Big-O estimates are O(1), O(n), O(log2(n)) and O(n^2). Each measure describes the effect of increasing the size of a list. For instance, doubling the size of a list with the selection sort creates a fourfold increase in the number of comparisons. The Big-O measures also define categories that allow a programmer to compare the relative efficiencies of algorithms. The following are the categories for the different measures, along with sample algorithms : 
- Constant Time 

An algorithm is O(1) when its running time is independent of the number of items. The algorithm runs in constant time. For instance, the algorithm for finding the minimum value in an ordered array has running time O(1), because the minimum is simply the first element.

- Linear 
An algorithm is O(n) when its running time is proportional to the size of the list. Such an algorithm is said to be linear. When the number of elements doubles, the number of operations also doubles. For instance, finding the minimum value in an n-element unordered list is O(n).

- Quadratic and Cubic 
Algorithms whose running time is O(n^2) are quadratic. Most simple sorting algorithms, such as the selection sort, are quadratic. They are practical only for relatively small values of n. An algorithm exhibits cubic time if its running time is O(n^3). The efficiency of such algorithms is generally poor, because doubling the size of n causes an eightfold increase in the running time. Matrix multiplication involves computing n^3 products and so is an O(n^3) algorithm.

- Logarithmic 
The logarithm of n, base 2, is the exponent exp for which 2^exp = n. For instance, log2(2) = 1, log2(32) = 5, and so forth. When compared to the functions n and n^2, the function log2(n) grows very slowly. Some algorithms have Big-O measures O(log2(n)) or O(n log2(n)) and are termed logarithmic. These running times occur when the algorithm repeatedly subdivides the data into sublists whose sizes are 1/2, 1/4, 1/8, ... of the original list size. The binary search has average and worst-case running time of O(log2(n)). The famous quicksort algorithm has running time O(n log2(n)), which is far better than the O(n^2) running time of the selection sort.

- Exponential 
Some algorithms deal with problems that involve searching through a large number of potential solutions before finding an answer. Such algorithms often have running time O(a^n), a > 1. This is termed exponential running time. Exponential algorithms are useful only for very small values of n. In Chapter 6, we will see that the recursive solution to the problem of generating Fibonacci numbers has exponential running time; a sketch of that computation appears below.
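
As a preview of the Chapter 6 discussion, the following minimal sketch of the recursive Fibonacci computation suggests why its running time is exponential: each call with n >= 2 spawns two further calls, so the total number of calls roughly doubles each time n increases by 1.

  // Recursive Fibonacci: fib(n) calls fib(n-1) and fib(n-2),
  // so the number of calls grows exponentially with n.
  public static long fib(int n) {
      if (n <= 1)
          return n;               // base cases: fib(0) = 0, fib(1) = 1
      return fib(n - 1) + fib(n - 2);
  }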
