Friday, August 10, 2018

[ Py DS ] Ch2 - Introduction to NumPy (Part2)

Source From Here 
Computation on NumPy Arrays: Universal Functions 
Up until now, we have been discussing some of the basic nuts and bolts of NumPy; in the next few sections, we will dive into the reasons that NumPy is so important in the Python data science world. Namely, it provides an easy and flexible interface to optimized computation with arrays of data. 

Computation on NumPy arrays can be very fast, or it can be very slow. The key to making it fast is to use vectorized operations, generally implemented through NumPy’s universal functions (ufuncs). This section motivates the need for NumPy’s ufuncs, which can be used to make repeated calculations on array elements much more efficient. It then introduces many of the most common and useful arithmetic ufuncs available in the NumPy package. 

The Slowness of Loops 
Python’s default implementation (known as CPython) does some operations very slowly. This is in part due to the dynamic, interpreted nature of the language: the fact that types are flexible, so that sequences of operations cannot be compiled down to efficient machine code as in languages like C and Fortran. Recently there have been various attempts to address this weakness: well-known examples are the PyPy project, a just-in-time compiled implementation of Python; the Cython project, which converts Python code to compilable C code; and the Numba project, which converts snippets of Python code to fast LLVM bytecode. Each of these has its strengths and weaknesses, but it is safe to say that none of the three approaches has yet surpassed the reach and popularity of the standard CPython engine. 

The relative sluggishness of Python generally manifests itself in situations where many small operations are being repeated—for instance, looping over arrays to operate on each element. For example, imagine we have an array of values and we’d like to compute the reciprocal of each. A straightforward approach might look like this: 
In [1]: import numpy as np

In [2]: np.random.seed(0)

In [3]: def compute_reciprocals(values):
   ...:     output = np.empty(len(values))
   ...:     for i in range(len(values)):
   ...:         output[i] = 1.0 / values[i]
   ...:     return output
   ...:

In [5]: values = np.random.randint(1, 10, size=5)

In [6]: compute_reciprocals(values)
Out[6]: array([0.16666667, 1.        , 0.25      , 0.25      , 0.125     ])
This implementation probably feels fairly natural to someone from, say, a C or Java background. But if we measure the execution time of this code for a large input, we see that this operation is very slow, perhaps surprisingly so! We’ll benchmark this with IPython’s %timeit magic: 
In [9]: %timeit compute_reciprocals(np.random.randint(1, 100, size=1000000))
1.95 s ± 14.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

It takes several seconds to compute these million operations and to store the result! When even cell phones have processing speeds measured in Giga-FLOPS (i.e., billions of numerical operations per second), this seems almost absurdly slow. It turns out that the bottleneck here is not the operations themselves, but the type-checking and function dispatches that CPython must do at each cycle of the loop. Each time the reciprocal is computed, Python first examines the object’s type and does a dynamic lookup of the correct function to use for that type. If we were working in compiled code instead, this type specification would be known before the code executes and the result could be computed much more efficiently. 

Introducing UFuncs 
For many types of operations, NumPy provides a convenient interface into just this kind of statically typed, compiled routine. This is known as a vectorized operation. You can accomplish this by simply performing an operation on the array, which will then be applied to each element. This vectorized approach is designed to push the loop into the compiled layer that underlies NumPy, leading to much faster execution. 

Compare the execution times of the following two approaches:
In [10]: values = np.random.randint(1, 100, size=1000000)

In [11]: %timeit compute_reciprocals(values)
1.98 s ± 8.35 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [12]: %timeit 1 / values
3.78 ms ± 7.81 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Looking at the execution time for our big array, we see that the vectorized expression completes orders of magnitude faster than the Python loop. Vectorized operations in NumPy are implemented via ufuncs, whose main purpose is to quickly execute repeated operations on values in NumPy arrays. Ufuncs are extremely flexible: we just saw an operation between a scalar and an array, but we can also operate between two arrays: 
In [14]: np.arange(5) / np.arange(1, 6)
Out[14]: array([0.        , 0.5       , 0.66666667, 0.75      , 0.8       ])

And ufunc operations are not limited to one-dimensional arrays—they can act on multidimensional arrays as well: 
In [3]: x = np.arange(9).reshape((3, 3))

In [4]: 2 ** x
Out[4]:
array([[  1,   2,   4],
       [  8,  16,  32],
       [ 64, 128, 256]], dtype=int32)

Computations using vectorization through ufuncs are nearly always more efficient than their counterpart implemented through Python loops, especially as the arrays grow in size. Any time you see such a loop in a Python script, you should consider whether it can be replaced with a vectorized expression. 
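As a quick illustration (a hypothetical snippet, not from the book), here is the same kind of transformation written both ways; both produce identical results, but the second avoids the per-element type checking and dispatch entirely:

# Loop version: Python-level iteration, slow for large arrays
out = np.empty_like(values, dtype=float)
for i in range(len(values)):
    out[i] = 2.0 * values[i] + 1

# Vectorized version: the loop runs in NumPy's compiled layer
out = 2.0 * values + 1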

Exploring NumPy’s UFuncs 
Ufuncs exist in two flavors: unary ufuncs, which operate on a single input, and binary ufuncs, which operate on two inputs. We’ll see examples of both these types of functions here. 

Array arithmetic 
NumPy’s ufuncs feel very natural to use because they make use of Python’s native arithmetic operators. The standard addition, subtraction, multiplication, and division can all be used: 
In [4]: x = np.arange(4)

In [5]: print("x = ", x)
x = [0 1 2 3]

In [7]: print("x + 5 = ", x + 5)
x + 5 = [5 6 7 8]

In [8]: print("x - 5 = ", x - 5)
x - 5 = [-5 -4 -3 -2]

In [9]: print("x * 2 = ", x * 2)
x * 2 = [0 2 4 6]

In [10]: print("x / 2 = ", x / 2)
x / 2 = [0. 0.5 1. 1.5]

In [11]: print("x // 2 = ", x // 2)
x // 2 = [0 0 1 1]

In addition, these can be strung together however you wish, and the standard order of operations is respected: 
In [14]: -(0.5 * x + 1) ** 2
Out[14]: array([-1. , -2.25, -4. , -6.25])

All of these arithmetic operations are simply convenient wrappers around specific functions built into NumPy; for example, the + operator is a wrapper for the add function: 
In [15]: np.add(x, 2)
Out[15]: array([2, 3, 4, 5])

The table below lists the arithmetic operators implemented in NumPy, each of which wraps a NumPy ufunc:

Operator    Equivalent ufunc    Description
+           np.add              Addition (e.g., 1 + 1 = 2)
-           np.subtract         Subtraction (e.g., 3 - 2 = 1)
-           np.negative         Unary negation (e.g., -2)
*           np.multiply         Multiplication (e.g., 2 * 3 = 6)
/           np.divide           Division (e.g., 3 / 2 = 1.5)
//          np.floor_divide     Floor division (e.g., 3 // 2 = 1)
**          np.power            Exponentiation (e.g., 2 ** 3 = 8)
%           np.mod              Modulus/remainder (e.g., 9 % 4 = 1)

Absolute value 
Just as NumPy understands Python’s built-in arithmetic operators, it also understands Python’s built-in absolute value function: 
In [16]: abs(np.array([-2, -1, 0, 1, 2]))
Out[16]: array([2, 1, 0, 1, 2])

The corresponding NumPy ufunc is np.absolute, which is also available under the alias np.abs. 

This ufunc can also handle complex data, in which the absolute value returns the magnitude: 
In [17]: np.abs(np.array([ 3 - 4j, 4 - 3j, 2 + 0j, 0 + 1j]))
Out[17]: array([5., 5., 2., 1.])

Trigonometric functions 
NumPy provides a large number of useful ufuncs, and some of the most useful for the data scientist are the trigonometric functions. We'll start by defining an array of angles: 
In [18]: theta = np.linspace(0, np.pi, 3)

Now we can compute some trigonometric functions on these values: 
In [19]: print("theta = ", theta)
theta = [0. 1.57079633 3.14159265]

In [20]: print("sin(theta) = ", np.sin(theta))
sin(theta) = [0.0000000e+00 1.0000000e+00 1.2246468e-16]

In [21]: print("cos(theta) = ", np.cos(theta))
cos(theta) = [ 1.000000e+00 6.123234e-17 -1.000000e+00]

In [22]: print("tan(theta) = ", np.tan(theta))
tan(theta) = [ 0.00000000e+00 1.63312394e+16 -1.22464680e-16]

The values are computed to within machine precision, which is why values that should be zero do not always hit exactly zero. Inverse trigonometric functions are also available as np.arcsin, np.arccos, and np.arctan. 
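For example, applied to a small array of values (this mirrors the handbook's own demonstration):

x = [-1, 0, 1]
print("arcsin(x) = ", np.arcsin(x))  # [-1.57079633  0.          1.57079633]
print("arccos(x) = ", np.arccos(x))  # [3.14159265 1.57079633 0.        ]
print("arctan(x) = ", np.arctan(x))  # [-0.78539816  0.          0.78539816]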

Exponents and logarithms 
Other common operations available as NumPy ufuncs are the exponentials: 
In [23]: x = [1, 2, 3]

In [24]: print("x = ", x)
x = [1, 2, 3]

In [25]: print("e^x = ", np.exp(x))
e^x = [ 2.71828183 7.3890561 20.08553692]

In [26]: print("2^x = ", np.exp2(x))
2^x = [2. 4. 8.]

In [27]: print("3^x = ", np.power(3, x))
3^x = [ 3 9 27]

The inverse of the exponentials, the logarithms, are also available. The basic np.log gives the natural logarithm; if you prefer to compute the base-2 logarithm or the base-10 logarithm, these are available as well: 
In [28]: x = [1, 2, 4, 10]

In [29]: print("x = ", x)
x = [1, 2, 4, 10]

In [30]: print("ln(x) = ", np.log(x))
ln(x) = [0. 0.69314718 1.38629436 2.30258509]

In [31]: print("log2(x) = ", np.log2(x))
log2(x) = [0. 1. 2. 3.32192809]

In [32]: print("log10(x) = ", np.log10(x))
log10(x) = [0. 0.30103 0.60205999 1. ]

There are also some specialized versions that are useful for maintaining precision with very small input: 
In [33]: x = [0, 0.001, 0.01, 0.1]

In [34]: print("exp(x) - 1 = ", np.expm1(x))
exp(x) - 1 = [0. 0.0010005 0.01005017 0.10517092]

In [35]: print("log(1 + x) = ", np.log1p(x))
log(1 + x) = [0. 0.0009995 0.00995033 0.09531018]

When x is very small, these functions give more precise values than if the raw np.log or np.exp were used. 
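To see the difference, compare the two on a value near machine precision (a quick sketch; the inaccurate result comes from rounding in the intermediate 1 + tiny):

tiny = 1e-15
np.log(1 + tiny)  # ~1.11e-15: the rounding of (1 + tiny) destroys the answer
np.log1p(tiny)    # 1e-15: accurate to full precision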

Specialized ufuncs 
NumPy has many more ufuncs available, including hyperbolic trig functions, bitwise arithmetic, comparison operators, conversions from radians to degrees, rounding and remainders, and much more. A look through the NumPy documentation reveals a lot of interesting functionality. Another excellent source for more specialized and obscure ufuncs is the submodule scipy.special. If you want to compute some obscure mathematical function on your data, chances are it is implemented in scipy.special. 
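For instance, assuming SciPy is installed, the gamma and error functions (both drawn from the handbook's tour of scipy.special) work on arrays just like NumPy ufuncs:

from scipy import special

x = [1, 5, 10]
print("gamma(x)     = ", special.gamma(x))    # [1.0000e+00 2.4000e+01 3.6288e+05]
print("ln|gamma(x)| = ", special.gammaln(x))  # [ 0.          3.17805383 12.80182748]
print("erf(x)       = ", special.erf([0, 0.3, 0.7, 1.0]))
# [0.         0.32862676 0.67780119 0.84270079]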

Advanced Ufunc Features 
Many NumPy users make use of ufuncs without ever learning their full set of features. We’ll outline a few specialized features of ufuncs here. 

Specifying output 
For large calculations, it is sometimes useful to be able to specify the array where the result of the calculation will be stored. Rather than creating a temporary array, you can use this to write computation results directly to the memory location where you’d like them to be. For all ufuncs, you can do this using the out argument of the function: 
In [36]: x = np.arange(5)

In [37]: y = np.empty(5)

In [38]: np.multiply(x, 10, out=y)
Out[38]: array([ 0., 10., 20., 30., 40.])

In [39]: print(y) # The computation result is saved into y array
[ 0. 10. 20. 30. 40.]

This can even be used with array views. For example, we can write the results of a computation to every other element of a specified array: 
In [40]: y = np.zeros(10)

In [41]: np.power(2, x, out=y[::2])
Out[41]: array([ 1., 2., 4., 8., 16.])

In [42]: print(y)
[ 1. 0. 2. 0. 4. 0. 8. 0. 16. 0.]

If we had instead written y[::2] = 2 ** x, this would have resulted in the creation of a temporary array to hold the results of 2 ** x, followed by a second operation copying those values into the y array. This doesn't make much of a difference for such a small computation, but for very large arrays the memory savings from careful use of the out argument can be significant. 

Aggregates 
For binary ufuncs, there are some interesting aggregates that can be computed directly from the object. For example, if we’d like to reduce an array with a particular operation, we can use the reduce method of any ufunc. A reduce repeatedly applies a given operation to the elements of an array until only a single result remains. For example, calling reduce on the add ufunc returns the sum of all elements in the array: 
In [43]: x = np.arange(1, 6)

In [44]: np.add.reduce(x) # 1 + 2 = 3, 3 + 3 = 6, 6 + 4 = 10, 10 + 5 = 15
Out[44]: 15

If we’d like to store all the intermediate results of the computation, we can instead use accumulate: 
In [46]: np.add.accumulate(x)
Out[46]: array([ 1, 3, 6, 10, 15], dtype=int32)

In [47]: np.multiply.accumulate(x)
Out[47]: array([ 1, 2, 6, 24, 120], dtype=int32)
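
For these particular cases, NumPy also provides dedicated functions that compute the same results:

np.cumsum(x)   # array([ 1,  3,  6, 10, 15]), same as np.add.accumulate(x)
np.cumprod(x)  # array([  1,   2,   6,  24, 120]), same as np.multiply.accumulate(x)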

Outer products 
Finally, any ufunc can compute the output of all pairs of two different inputs using the outer method. This allows you, in one line, to do things like create a multiplication table: 
In [49]: x = np.arange(1, 4)
In [51]: np.multiply.outer(x, x)
Out[51]:
array([[1, 2, 3],
       [2, 4, 6],
       [3, 6, 9]])

Aggregations: Min, Max, and Everything in Between 
Often when you are faced with a large amount of data, a first step is to compute summary statistics for the data in question. Perhaps the most common summary statistics are the mean and standard deviation, which allow you to summarize the “typical” values in a dataset, but other aggregates are useful as well (the sum, product, median, minimum and maximum, quantiles, etc.). NumPy has fast built-in aggregation functions for working on arrays; we’ll discuss and demonstrate some of them here. 

Summing the Values in an Array 
As a quick example, consider computing the sum of all values in an array. Python itself can do this using the built-in sum function: 
In [21]: L = np.random.random(100)

In [22]: sum(L)
Out[22]: 56.853380267550804

The syntax is quite similar to that of NumPy’s sum function, and the result is the same in the simplest case: 
In [23]: np.sum(L)
Out[23]: 56.85338026755079

However, because it executes the operation in compiled code, NumPy’s version of the operation is computed much more quickly: 
In [24]: big_array = np.random.rand(1000000)

In [25]: %timeit sum(big_array)
11.4 ms ± 46.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [26]: %timeit np.sum(big_array)
57.6 µs ± 272 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Be careful, though: the sum function and the np.sum function are not identical, which can sometimes lead to confusion! In particular, their optional arguments have different meanings, and np.sum is aware of multiple array dimensions, as we will see in the following section. 
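For example, the second positional argument means entirely different things to the two functions:

sum([1, 2, 3], 10)           # built-in sum: 10 is a start value, so the result is 16
np.sum([[1, 2], [3, 4]], 1)  # np.sum: 1 is an axis, so the result is array([3, 7])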

Minimum and Maximum 
Similarly, Python has built-in min and max functions, used to find the minimum value and maximum value of any given array: 
In [29]: min(big_array), max(big_array)
Out[29]: (1.16336967359576e-06, 0.9999692114518608)

NumPy’s corresponding functions have similar syntax, and again operate much more quickly: 
In [30]: %timeit min(big_array)
6.98 ms ± 24 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [31]: %timeit np.min(big_array)
72.6 µs ± 547 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

For min, max, sum, and several other NumPy aggregates, a shorter syntax is to use methods of the array object itself: 
In [32]: print(big_array.min(), big_array.max(), big_array.sum())
1.16336967359576e-06 0.9999692114518608 50100.27947559156

Multidimensional aggregates 
One common type of aggregation operation is an aggregate along a row or column. Say you have some data stored in a two-dimensional array: 
In [33]: M = np.random.random((3, 4))

In [34]: print(M)
[[0.13148041 0.74518121 0.25132166 0.66987808]
[0.23725206 0.47582424 0.0965371 0.65976689]
[0.63728797 0.94724141 0.00963971 0.31945033]]

By default, each NumPy aggregation function will return the aggregate over the entire array: 
In [35]: M.sum()
Out[35]: 5.180861073738387

Aggregation functions take an additional argument specifying the axis along which the aggregate is computed. For example, we can find the minimum value within each column by specifying axis=0: 
In [36]: M.min(axis=0)
Out[36]: array([0.13148041, 0.47582424, 0.00963971, 0.31945033])

The function returns four values, corresponding to the four columns of numbers. Similarly, we can find the maximum value within each row: 
In [37]: M.max(axis=1)
Out[37]: array([0.74518121, 0.65976689, 0.94724141])

The way the axis is specified here can be confusing to users coming from other languages. The axis keyword specifies the dimension of the array that will be collapsed, rather than the dimension that will be returned. So specifying axis=0 means that the first axis will be collapsed: for two-dimensional arrays, this means that values within each column will be aggregated. 
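A quick way to keep this straight is to check the shape of the result: the axis you name is the one that disappears. For our 3×4 array M:

M.min(axis=0).shape  # (4,): axis 0 (rows) was collapsed, one value per column
M.max(axis=1).shape  # (3,): axis 1 (columns) was collapsed, one value per row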

Other aggregation functions 
NumPy provides many other aggregation functions, but we won’t discuss them in detail here. Additionally, most aggregates have a NaN-safe counterpart that computes the result while ignoring missing values, which are marked by the special IEEE floating-point NaN value. Some of these NaN-safe functions were not added until NumPy 1.8, so they will not be available in older NumPy versions. 
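For example, np.nansum simply skips any NaN entries that would otherwise poison the result:

arr = np.array([1.0, np.nan, 3.0])
np.sum(arr)     # nan: the missing value propagates
np.nansum(arr)  # 4.0: the missing value is ignored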

The table below provides a list of useful aggregation functions available in NumPy, along with their NaN-safe counterparts:

Function name    NaN-safe version    Description
np.sum           np.nansum           Compute sum of elements
np.prod          np.nanprod          Compute product of elements
np.mean          np.nanmean          Compute mean of elements
np.std           np.nanstd           Compute standard deviation
np.var           np.nanvar           Compute variance
np.min           np.nanmin           Find minimum value
np.max           np.nanmax           Find maximum value
np.argmin        np.nanargmin        Find index of minimum value
np.argmax        np.nanargmax        Find index of maximum value
np.median        np.nanmedian        Compute median of elements
np.percentile    np.nanpercentile    Compute rank-based statistics of elements
np.any           N/A                 Evaluate whether any elements are true
np.all           N/A                 Evaluate whether all elements are true

Example: What Is the Average Height of US Presidents? 
Aggregates available in NumPy can be extremely useful for summarizing a set of values. As a simple example, let’s consider the heights of all US presidents. This data is available in the file president_heights.csv, which is a simple comma-separated list of labels and values: 
In [1]: import pandas as pd

In [2]: import io

In [3]: import requests

In [4]: url = 'https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/president_heights.csv'

In [5]: s = requests.get(url).content

In [6]: c = pd.read_csv(io.StringIO(s.decode('utf-8')))

In [7]: c.head(1)
Out[7]:
   order               name  height(cm)
0      1  George Washington         189

In [8]: c.head(3)
Out[8]:
   order               name  height(cm)
0      1  George Washington         189
1      2         John Adams         170
2      3   Thomas Jefferson         189

In [12]: import numpy as np

In [13]: heights = np.array(c['height(cm)'])

In [15]: print(heights)
[189 170 189 163 183 171 185 168 173 183 173 173 175 178 183 193 178 173
 174 183 183 168 170 178 182 180 183 178 182 188 175 179 183 193 182 183
 177 185 188 188 182 185]
Now that we have this data array, we can compute a variety of summary statistics: 
In [16]: print('Mean height : ', heights.mean())
Mean height : 179.73809523809524

In [17]: print('Standard deviation : ', heights.std())
Standard deviation : 6.931843442745892

In [18]: print('Minimum height : ', heights.min())
Minimum height : 163

In [19]: print('Maximum height : ', heights.max())
Maximum height : 193

Note that in each case, the aggregation operation reduced the entire array to a single summarizing value, which gives us information about the distribution of values. We may also wish to compute quantiles: 
In [12]: print("25th percentile: ", np.percentile(heights, 25))
25th percentile: 174.25

In [13]: print("Median: ", np.median(heights))
Median: 182.0

In [14]: print("75th percentile: ", np.percentile(heights, 75))
75th percentile: 183.0

We see that the median height of US presidents is 182 cm, or just shy of six feet. Of course, sometimes it’s more useful to see a visual representation of this data, which we can accomplish using tools in Matplotlib (we’ll discuss Matplotlib more fully in Chapter 4). For example, this code generates the chart shown in Figure 2-3: 
%matplotlib inline
import matplotlib.pyplot as plt
# import seaborn; seaborn.set()  # set plot style

plt.hist(heights)
plt.title('Height Distribution of US Presidents')
plt.xlabel('height (cm)')
plt.ylabel('number')
Figure 2-3. Histogram of presidential heights 

These aggregates are some of the fundamental pieces of exploratory data analysis that we’ll explore in more depth in later chapters of the book. 

Computation on Arrays: Broadcasting 
We saw in the previous section how NumPy’s universal functions can be used to vectorize operations and thereby remove slow Python loops. Another means of vectorizing operations is to use NumPy’s broadcasting functionality. Broadcasting is simply a set of rules for applying binary ufuncs (addition, subtraction, multiplication, etc.) on arrays of different sizes. 

Introducing Broadcasting 
Recall that for arrays of the same size, binary operations are performed on an element-by-element basis: 
import numpy as np

a = np.array([0, 1, 2])
b = np.array([5, 5, 5])

a + b  # Out[2]: array([5, 6, 7])
Broadcasting allows these types of binary operations to be performed on arrays of different sizes; for example, we can just as easily add a scalar (think of it as a zero-dimensional array) to an array: 
In[3]: a + 5
Out[3]: array([5, 6, 7])

We can think of this as an operation that stretches or duplicates the value 5 into the array [5, 5, 5], and adds the results. The advantage of NumPy’s broadcasting is that this duplication of values does not actually take place, but it is a useful mental model as we think about broadcasting. We can similarly extend this to arrays of higher dimension. Observe the result when we add a one-dimensional array to a two-dimensional array: 
In[4]: M = np.ones((3, 3))
In[5]: M + a
Out[5]:
array([[1., 2., 3.],
       [1., 2., 3.],
       [1., 2., 3.]])

Here the one-dimensional array a is stretched, or broadcast, across the second dimension in order to match the shape of M. While these examples are relatively easy to understand, more complicated cases can involve broadcasting of both arrays. Consider the following example: 
In [6]: a = np.arange(3)

In [7]: b = np.arange(3)[:, np.newaxis]

In [8]: a + b
Out[8]:
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4]])
Just as before we stretched or broadcasted one value to match the shape of the other, here we've stretched both a and b to match a common shape, and the result is a two-dimensional array! The geometry of these examples is visualized in Figure 2-4.

Figure 2-4. Visualization of NumPy broadcasting

The light boxes represent the broadcasted values: again, this extra memory is not actually allocated in the course of the operation, but it can be useful conceptually to imagine that it is. 
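If you want to see that mental model made explicit, np.broadcast_to returns the virtually stretched array as a read-only view, without actually allocating the duplicated values:

np.broadcast_to(5, (3,))    # array([5, 5, 5])
np.broadcast_to(a, (3, 3))  # array([[0, 1, 2],
                            #        [0, 1, 2],
                            #        [0, 1, 2]])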

Rules of Broadcasting 
Broadcasting in NumPy follows a strict set of rules to determine the interaction between the two arrays: 
• Rule 1: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is padded with ones on its leading (left) side.
• Rule 2: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
• Rule 3: If in any dimension the sizes disagree and neither is equal to 1, an error is raised.

To make these rules clear, let’s consider a few examples in detail. 
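As an aside, recent NumPy versions (1.20 and later) expose np.broadcast_shapes, which applies exactly these rules to shape tuples without creating any arrays; it is a handy way to sanity-check the examples below:

np.broadcast_shapes((2, 3), (3,))  # (2, 3), as in example 1 below
np.broadcast_shapes((3, 1), (3,))  # (3, 3), as in example 2 below
np.broadcast_shapes((3, 2), (3,))  # raises ValueError, as in example 3 below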

Broadcasting example 1 
Let’s look at adding a two-dimensional array to a one-dimensional array: 
In [2]: M = np.ones((2, 3))
In [3]: a = np.arange(3)

Let’s consider an operation on these two arrays. The shapes of the arrays are: 
M.shape = (2, 3)
a.shape = (3,)

We see by rule 1 that the array a has fewer dimensions, so we pad it on the left with ones: 
M.shape -> (2, 3)
a.shape -> (1, 3)

By rule 2, we now see that the first dimension disagrees, so we stretch this dimension to match: 
M.shape -> (2, 3)
a.shape -> (2, 3)

The shapes match, and we see that the final shape will be (2, 3): 
In [4]: M + a
Out[4]:
array([[1., 2., 3.],
       [1., 2., 3.]])

Broadcasting example 2 
Let’s take a look at an example where both arrays need to be broadcast: 
In [7]: a = np.arange(3).reshape((3, 1))
In [8]: b = np.arange(3)

Again, we’ll start by writing out the shape of the arrays: 
a.shape = (3, 1)
b.shape = (3,)

Rule 1 says we must pad the shape of b with ones: 
a.shape -> (3, 1)
b.shape -> (1, 3)

And rule 2 tells us that we upgrade each of these ones to match the corresponding 
size of the other array: 
a.shape -> (3, 3)
b.shape -> (3, 3)

Because the result matches, these shapes are compatible. We can see this here: 
In [9]: a + b
Out[9]:
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4]])
Broadcasting example 3 
Now let’s take a look at an example in which the two arrays are not compatible: 
In [12]: M = np.ones((3, 2))
In [13]: a = np.arange(3)

This is just a slightly different situation than in the first example: the matrix M is transposed. How does this affect the calculation? The shapes of the arrays are: 
M.shape = (3, 2)
a.shape = (3,)

Again, rule 1 tells us that we must pad the shape of a with ones: 
M.shape -> (3, 2)
a.shape -> (1, 3)

By rule 2, the first dimension of a is stretched to match that of M: 
M.shape -> (3, 2)
a.shape -> (3, 3)

Now we hit rule 3—the final shapes do not match, so these two arrays are incompatible, as we can observe by attempting this operation: 
In [14]: M + a
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14> in <module>()
----> 1 M + a

ValueError: operands could not be broadcast together with shapes (3,2) (3,)

Note the potential confusion here: you could imagine making a and M compatible by, say, padding a’s shape with ones on the right rather than the left. But this is not how the broadcasting rules work! That sort of flexibility might be useful in some cases, but it would lead to potential areas of ambiguity. If right-side padding is what you’d like, you can do this explicitly by reshaping the array: 
In [16]: a[:, np.newaxis].shape
Out[16]: (3, 1)

In [17]: M + a[:, np.newaxis]
Out[17]:
array([[1., 1.],
       [2., 2.],
       [3., 3.]])
Also note that while we’ve been focusing on the + operator here, these broadcasting rules apply to any binary ufunc. For example, here is the logaddexp(a, b) function, which computes log(exp(a) + exp(b)) with more precision than the naive approach: 
In [18]: np.logaddexp(M, a[:, np.newaxis])
Out[18]:
array([[1.31326169, 1.31326169],
       [1.69314718, 1.69314718],
       [2.31326169, 2.31326169]])
Broadcasting in Practice 
Broadcasting operations form the core of many examples we'll see throughout this book. We'll now take a look at a couple of simple examples of where they can be useful. 

Centering an array 
In the previous section, we saw that ufuncs allow a NumPy user to remove the need to explicitly write slow Python loops. Broadcasting extends this ability. One commonly seen example is centering an array of data. Imagine you have an array of 10 observations, each of which consists of 3 values. Using the standard convention (see “Data Representation in Scikit-Learn” on page 343), we’ll store this in a 10×3 array: 
In [18]: X = np.random.random((10, 3))

We can compute the mean of each feature using the mean aggregate across the first dimension: 
In [19]: Xmean = X.mean(0)

In [20]: Xmean
Out[20]: array([0.49139617, 0.50098786, 0.49403384])

And now we can center the X array by subtracting the mean (this is a broadcasting operation): 
In [21]: X_centered = X - Xmean

To double-check that we’ve done this correctly, we can check that the centered array has near zero mean: 
In [22]: X_centered.mean(0)
Out[22]: array([-4.44089210e-17, -5.55111512e-17, 0.00000000e+00])

To within machine precision, the mean is now zero. 

Plotting a two-dimensional function 
One place that broadcasting is very useful is in displaying images based on twodimensional functions. If we want to define a function z = f(x, y), broadcasting can be used to compute the function across the grid: 
