Sunday, November 4, 2018

[ Py DS ] Ch3 - Data Manipulation with Pandas (Part3)

Source From Here 
Combining Datasets: Concat and Append 
Some of the most interesting studies of data come from combining different data sources. These operations can involve anything from very straightforward concatenation of two different datasets, to more complicated database-style joins and merges that correctly handle any overlaps between the datasets. Series and DataFrames are built with this type of operation in mind, and Pandas includes functions and methods that make this sort of data wrangling fast and straightforward. Here we’ll take a look at simple concatenation of Series and DataFrames with the pd.concat function; later we’ll dive into more sophisticated in-memory merges and joins implemented in Pandas. 

We begin with the standard imports: 
In [1]: import pandas as pd
In [2]: import numpy as np
For convenience, we’ll define this function, which creates a DataFrame of a particular form that will be useful below: 
import numpy as np
import pandas as pd

def make_df(cols, ind):
    """Quickly make a DataFrame"""
    data = {c: [str(c) + str(i) for i in ind] for c in cols}
    return pd.DataFrame(data, ind)

# example DataFrame
make_df('ABC', range(3))
which is: 
In [4]: make_df('ABC', range(3))
Out[4]:
    A   B   C
0  A0  B0  C0
1  A1  B1  C1
2  A2  B2  C2
Recall: Concatenation of NumPy Arrays 
Concatenation of Series and DataFrame objects is very similar to concatenation of NumPy arrays, which can be done via the np.concatenate function as discussed in “The Basics of NumPy Arrays” on page 42. Recall that with it, you can combine the contents of two or more arrays into a single array: 
In [15]: x = [1, 2, 3]
In [16]: y = [4, 5, 6]
In [17]: z = [7, 8, 9]

In [18]: np.concatenate([x, y, z])
Out[18]: array([1, 2, 3, 4, 5, 6, 7, 8, 9])
The first argument is a list or tuple of arrays to concatenate. Additionally, it takes an axis keyword that allows you to specify the axis along which the result will be concatenated: 
In [19]: x = [[1, 2], [3, 4]]

In [20]: np.concatenate([x, x], axis=1)
Out[20]:
array([[1, 2, 1, 2],
       [3, 4, 3, 4]])

In [21]: np.concatenate([x, x], axis=0)
Out[21]:
array([[1, 2],
       [3, 4],
       [1, 2],
       [3, 4]])
Simple Concatenation with pd.concat 
Pandas has a function, pd.concat(), which has a similar syntax to np.concatenate but contains a number of options that we’ll discuss momentarily: 
r'''
Concatenate pandas objects along a particular axis with optional set logic along the other axes.

Can also add a layer of hierarchical indexing on the concatenation axis, which may be useful if the labels are the same (or overlapping) on the passed axis number.
'''
pandas.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, keys=None, levels=None, names=None, verify_integrity=False, sort=None, copy=True)
pd.concat() can be used for a simple concatenation of Series or DataFrame objects, just as np.concatenate can be used for simple concatenations of arrays: 
In [22]: ser1 = pd.Series(['A', 'B', 'C'], index=[1, 2, 3])
In [23]: ser2 = pd.Series(['D', 'E', 'F'], index=[4, 5, 6])

In [24]: pd.concat([ser1, ser2])
Out[24]:
1    A
2    B
3    C
4    D
5    E
6    F
dtype: object
It also works to concatenate higher-dimensional objects, such as DataFrames: 
In [25]: df1 = make_df('AB', [1, 2])
In [26]: df2 = make_df('AB', [3, 4])

In [27]: print(df1)
    A   B
1  A1  B1
2  A2  B2

In [28]: print(df2)
    A   B
3  A3  B3
4  A4  B4

In [29]: pd.concat([df1, df2])
Out[29]:
    A   B
1  A1  B1
2  A2  B2
3  A3  B3
4  A4  B4
By default, the concatenation takes place row-wise within the DataFrame (i.e., axis=0). Like np.concatenate, pd.concat allows specification of an axis along which concatenation will take place. Consider the following example: 
In [30]: df3 = make_df('AB', [0, 1])
In [31]: df4 = make_df('CD', [0, 1])

In [32]: print(df3)
    A   B
0  A0  B0
1  A1  B1

In [33]: print(df4)
    C   D
0  C0  D0
1  C1  D1

In [35]: pd.concat([df3, df4], axis=1)
Out[35]:
    A   B   C   D
0  A0  B0  C0  D0
1  A1  B1  C1  D1
Duplicate indices 
One important difference between np.concatenate and pd.concat is that Pandas concatenation preserves indices, even if the result will have duplicate indices! Consider this simple example: 
In [36]: x = make_df('AB', [0, 1])
In [37]: y = make_df('AB', [2, 3])

In [38]: y.index = x.index  # make duplicate indices

In [39]: print(x); print(y)
    A   B
0  A0  B0
1  A1  B1
    A   B
0  A2  B2
1  A3  B3

In [40]: pd.concat([x, y])
Out[40]:
    A   B
0  A0  B0
1  A1  B1
0  A2  B2
1  A3  B3
Notice the repeated indices in the result. While this is valid within DataFrames, the outcome is often undesirable. pd.concat() gives us a few ways to handle it. 

Catching the repeats as an error. If you’d like to simply verify that the indices in the result of pd.concat() do not overlap, you can specify the verify_integrity flag. With this set to True, the concatenation will raise an exception if there are duplicate indices. Here is an example, where for clarity we’ll catch and print the error message: 
In [41]: try:
    ...:     pd.concat([x, y], verify_integrity=True)
    ...: except ValueError as e:
    ...:     print("ValueError: {}".format(e))
    ...:
ValueError: Indexes have overlapping values: Int64Index([0, 1], dtype='int64')
Ignoring the index. Sometimes the index itself does not matter, and you would prefer it to simply be ignored. You can specify this option using the ignore_index flag. With this set to True, the concatenation will create a new integer index for the result:
In [42]: print(x); print(y); print(pd.concat([x, y], ignore_index=True))
    A   B
0  A0  B0
1  A1  B1
    A   B
0  A2  B2
1  A3  B3
    A   B
0  A0  B0
1  A1  B1
2  A2  B2
3  A3  B3
Adding MultiIndex keys. Another alternative is to use the keys option to specify a label for the data sources; the result will be a hierarchically indexed DataFrame containing the data:
In [43]: print(x); print(y); print(pd.concat([x, y], keys=['x', 'y']))
    A   B
0  A0  B0
1  A1  B1
    A   B
0  A2  B2
1  A3  B3
      A   B
x 0  A0  B0
  1  A1  B1
y 0  A2  B2
  1  A3  B3
The result is a multiply indexed DataFrame, and we can use the tools discussed in “Hierarchical Indexing” on page 128 to transform this data into the representation we’re interested in. 
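For example (a minimal sketch continuing with the x and y defined above), the source labels form the outer level of the index, so we can pull out the rows from one source with loc, or pivot the source label into the column axis with unstack():

# Select only the rows that came from x
pd.concat([x, y], keys=['x', 'y']).loc['x']

# Or move the source label into the column axis
pd.concat([x, y], keys=['x', 'y']).unstack(0)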

Concatenation with joins 
In the simple examples we just looked at, we were mainly concatenating DataFrames with shared column names. In practice, data from different sources might have different sets of column names, and pd.concat offers several options in this case. Consider the concatenation of the following two DataFrames, which have some (but not all!) columns in common: 
In [44]: df5 = make_df('ABC', [1, 2])
In [45]: df6 = make_df('BCD', [3, 4])

In [46]: print(df5); print(df6); print(pd.concat([df5, df6]))
    A   B   C
1  A1  B1  C1
2  A2  B2  C2
    B   C   D
3  B3  C3  D3
4  B4  C4  D4
C:\Users\johnlee\AppData\Local\Programs\Python\Python37\Scripts\ipython:1: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.

To accept the future behavior, pass 'sort=False'.

To retain the current behavior and silence the warning, pass 'sort=True'.

     A   B   C    D
1   A1  B1  C1  NaN
2   A2  B2  C2  NaN
3  NaN  B3  C3   D3
4  NaN  B4  C4   D4
By default, the entries for which no data is available are filled with NA values. To change this, we can specify one of several options for the join and join_axes parameters of the concatenate function. By default, the join is a union of the input columns (join='outer'), but we can change this to an intersection of the columns using join='inner':
In [47]: pd.concat([df5, df6], join='inner')
Out[47]:
    B   C
1  B1  C1
2  B2  C2
3  B3  C3
4  B4  C4
Another option is to directly specify the index of the remaining columns using the join_axes argument, which takes a list of index objects. Here we’ll specify that the returned columns should be the same as those of the first input:
In [48]: pd.concat([df5, df6], join_axes=[df5.columns])
Out[48]:
     A   B   C
1   A1  B1  C1
2   A2  B2  C2
3  NaN  B3  C3
4  NaN  B4  C4

In [49]: df5.columns
Out[49]: Index(['A', 'B', 'C'], dtype='object')
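Note that in later pandas releases the join_axes argument is no longer available (it was deprecated in 0.25 and removed in 1.0); a minimal sketch of the usual replacement is to reindex the columns of the result:

# Modern equivalent of join_axes=[df5.columns]
pd.concat([df5, df6]).reindex(columns=df5.columns)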
The combination of options of the pd.concat function allows a wide range of possible behaviors when you are joining two datasets; keep these in mind as you use these tools for your own data. 

The append() method 
Because direct array concatenation is so common, Series and DataFrame objects have an append method that can accomplish the same thing in fewer keystrokes. For example, rather than calling pd.concat([df1, df2]), you can simply call df1.append(df2):
In [50]: print(df1); print(df2); print(df1.append(df2))
    A   B
1  A1  B1
2  A2  B2
    A   B
3  A3  B3
4  A4  B4
    A   B
1  A1  B1
2  A2  B2
3  A3  B3
4  A4  B4
Keep in mind that unlike the append() and extend() methods of Python lists, the append() method in Pandas does not modify the original object—instead, it creates a new object with the combined data. It also is not a very efficient method, because it involves creation of a new index and data buffer. Thus, if you plan to do multiple append operations, it is generally better to build a list of DataFrames and pass them all at once to the concat() function. 
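A minimal sketch of that recommended pattern, reusing the make_df() helper from above (the list name pieces is just illustrative):

# Collect the pieces in a plain Python list...
pieces = []
for i in range(3):
    pieces.append(make_df('AB', range(i * 2, i * 2 + 2)))

# ...and concatenate them once at the end, avoiding repeated copies
result = pd.concat(pieces, ignore_index=True)

(In much later pandas releases, append() was in fact deprecated and then removed in favor of exactly this pattern.)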

In the next section, we’ll look at another more powerful approach to combining data from multiple sources, the database-style merges/joins implemented in pd.merge. For more information on concat(), append(), and related functionality, see the “Merge, Join, and Concatenate” section of the Pandas documentation. 

Combining Datasets: Merge and Join 
One essential feature offered by Pandas is its high-performance, in-memory join and merge operations. If you have ever worked with databases, you should be familiar with this type of data interaction. The main interface for this is the pd.merge function, and we’ll see a few examples of how this can work in practice. 

Relational Algebra 
The behavior implemented in pd.merge() is a subset of what is known as relational algebra, which is a formal set of rules for manipulating relational data, and forms the conceptual foundation of operations available in most databases. The strength of the relational algebra approach is that it proposes several primitive operations, which become the building blocks of more complicated operations on any dataset. With this lexicon of fundamental operations implemented efficiently in a database or other program, a wide range of fairly complicated composite operations can be performed. 

Pandas implements several of these fundamental building blocks in the pd.merge() function and the related join() method of Series and DataFrames. As we will see, these let you efficiently link data from different sources. 

Categories of Joins 
The pd.merge() function implements a number of types of joins: the one-to-one, many-to-one, and many-to-many joins. All three types of joins are accessed via an identical call to the pd.merge() interface; the type of join performed depends on the form of the input data. Here we will show simple examples of the three types of merges, and discuss detailed options further below. 

One-to-one joins 
Perhaps the simplest type of merge expression is the one-to-one join, which is in many ways very similar to the column-wise concatenation seen in “Combining Datasets: Concat and Append” on page 141. As a concrete example, consider the following two DataFrames, which contain information on several employees in a company: 
In [3]: df1 = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa', 'Sue'],
   ...:                     'group': ['Accounting', 'Engineering', 'Engineering', 'HR']})
In [4]: df2 = pd.DataFrame({'employee': ['Lisa', 'Bob', 'Jake', 'Sue'],
   ...:                     'hire_date': [2004, 2008, 2012, 2014]})

In [5]: print(df1); print(df2)
  employee        group
0      Bob   Accounting
1     Jake  Engineering
2     Lisa  Engineering
3      Sue           HR
  employee  hire_date
0     Lisa       2004
1      Bob       2008
2     Jake       2012
3      Sue       2014
To combine this information into a single DataFrame, we can use the pd.merge() function: 
In [6]: df3 = pd.merge(df1, df2)

In [7]: df3
Out[7]:
  employee        group  hire_date
0      Bob   Accounting       2008
1     Jake  Engineering       2012
2     Lisa  Engineering       2004
3      Sue           HR       2014
The pd.merge() function recognizes that each DataFrame has an “employee” column, and automatically joins using this column as a key. The result of the merge is a new DataFrame that combines the information from the two inputs. Notice that the order of entries in each column is not necessarily maintained: in this case, the order of the “employee” column differs between df1 and df2, and the pd.merge() function correctly accounts for this. Additionally, keep in mind that the merge in general discards the index, except in the special case of merges by index (see “The left_index and right_index keywords” on page 151). 
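If you do need to keep the original index through a merge, one possible sketch (not part of the book's example) is to reset it into an ordinary column first and restore it afterwards:

# Turn df1's index into a regular column so the merge carries it along
pd.merge(df1.reset_index(), df2).set_index('index')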

Many-to-one joins 
Many-to-one joins are joins in which one of the two key columns contains duplicate entries. For the many-to-one case, the resulting DataFrame will preserve those duplicate entries as appropriate. Consider the following example of a many-to-one join: 
In [8]: df4 = pd.DataFrame({'group': ['Accounting', 'Engineering', 'HR'],
   ...:                     'supervisor': ['Carly', 'Guido', 'Steve']})
In [9]: print(df3); print(df4)
  employee        group  hire_date
0      Bob   Accounting       2008
1     Jake  Engineering       2012
2     Lisa  Engineering       2004
3      Sue           HR       2014
         group supervisor
0   Accounting      Carly
1  Engineering      Guido
2           HR      Steve

In [10]: pd.merge(df3, df4)
Out[10]:
  employee        group  hire_date supervisor
0      Bob   Accounting       2008      Carly
1     Jake  Engineering       2012      Guido
2     Lisa  Engineering       2004      Guido
3      Sue           HR       2014      Steve
The resulting DataFrame has an additional column with the “supervisor” information, where the information is repeated in one or more locations as required by the inputs. 

Many-to-many joins 
Many-to-many joins are a bit confusing conceptually, but are nevertheless well defined. If the key column in both the left and right array contains duplicates, then the result is a many-to-many merge. This will be perhaps most clear with a concrete example. Consider the following, where we have a DataFrame showing one or more skills associated with a particular group. 

By performing a many-to-many join, we can recover the skills associated with any individual person: 
In [11]: df5 = pd.DataFrame({'group': ['Accounting', 'Accounting', 'Engineering',
   ...:                                'Engineering', 'HR', 'HR'],
   ...:                      'skills': ['math', 'spreadsheets', 'coding', 'linux',
   ...:                                 'spreadsheets', 'organization']})
In [12]: print(df1); print(df5); print(pd.merge(df1, df5))
  employee        group
0      Bob   Accounting
1     Jake  Engineering
2     Lisa  Engineering
3      Sue           HR
         group        skills
0   Accounting          math
1   Accounting  spreadsheets
2  Engineering        coding
3  Engineering         linux
4           HR  spreadsheets
5           HR  organization
  employee        group        skills
0      Bob   Accounting          math
1      Bob   Accounting  spreadsheets
2     Jake  Engineering        coding
3     Jake  Engineering         linux
4     Lisa  Engineering        coding
5     Lisa  Engineering         linux
6      Sue           HR  spreadsheets
7      Sue           HR  organization
These three types of joins can be used with other Pandas tools to implement a wide array of functionality. But in practice, datasets are rarely as clean as the one we’re working with here. In the following section, we’ll consider some of the options provided by pd.merge() that enable you to tune how the join operations work. 

Specification of the Merge Key 
We’ve already seen the default behavior of pd.merge(): it looks for one or more matching column names between the two inputs, and uses this as the key. However, often the column names will not match so nicely, and pd.merge() provides a variety of options for handling this. 

The on keyword 
Most simply, you can explicitly specify the name of the key column using the on keyword, which takes a column name or a list of column names: 
In [13]: print(df1); print(df2); print(pd.merge(df1, df2, on='employee'))
  employee        group
0      Bob   Accounting
1     Jake  Engineering
2     Lisa  Engineering
3      Sue           HR
  employee  hire_date
0     Lisa       2004
1      Bob       2008
2     Jake       2012
3      Sue       2014
  employee        group  hire_date
0      Bob   Accounting       2008
1     Jake  Engineering       2012
2     Lisa  Engineering       2004
3      Sue           HR       2014
This option works only if both the left and right DataFrames have the specified column name. 

The left_on and right_on keywords 
At times you may wish to merge two datasets with different column names; for example, we may have a dataset in which the employee name is labeled as “name” rather than “employee”. In this case, we can use the left_on and right_on keywords to specify the two column names: 
In [14]: df3 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'],
   ...:                      'salary': [70000, 80000, 120000, 90000]})
In [15]: pd.merge(df1, df3, left_on='employee', right_on='name')
Out[15]:
  employee        group  name  salary
0      Bob   Accounting   Bob   70000
1     Jake  Engineering  Jake   80000
2     Lisa  Engineering  Lisa  120000
3      Sue           HR   Sue   90000
The result has a redundant column that we can drop if desired—for example, by using the drop() method of DataFrames: 
In [16]: pd.merge(df1, df3, left_on='employee', right_on='name').drop('name', axis=1)
Out[16]:
  employee        group  salary
0      Bob   Accounting   70000
1     Jake  Engineering   80000
2     Lisa  Engineering  120000
3      Sue           HR   90000
The left_index and right_index keywords 
Sometimes, rather than merging on a column, you would instead like to merge on an index. For example, your data might look like this: 
In [17]: df1a = df1.set_index('employee')
In [18]: df2a = df2.set_index('employee')
In [19]: print(df1a); print(df2a)
                group
employee
Bob        Accounting
Jake      Engineering
Lisa      Engineering
Sue                HR
          hire_date
employee
Lisa           2004
Bob            2008
Jake           2012
Sue            2014
You can use the index as the key for merging by specifying the left_index and/or right_index flags in pd.merge():
In [21]: pd.merge(df1a, df2a, left_index=True, right_index=True)
Out[21]:
                group  hire_date
employee
Bob        Accounting       2008
Jake      Engineering       2012
Lisa      Engineering       2004
Sue                HR       2014
For convenience, DataFrames implement the join() method, which performs a merge that defaults to joining on indices: 
In [22]: df1a.join(df2a)
Out[22]:
                group  hire_date
employee
Bob        Accounting       2008
Jake      Engineering       2012
Lisa      Engineering       2004
Sue                HR       2014
If you’d like to mix indices and columns, you can combine left_index with right_on or left_on with right_index to get the desired behavior: 
In [23]: print(df1a); print(df3)
                group
employee
Bob        Accounting
Jake      Engineering
Lisa      Engineering
Sue                HR
   name  salary
0   Bob   70000
1  Jake   80000
2  Lisa  120000
3   Sue   90000

In [24]: pd.merge(df1a, df3, left_index=True, right_on='name')
Out[24]:
         group  name  salary
0   Accounting   Bob   70000
1  Engineering  Jake   80000
2  Engineering  Lisa  120000
3           HR   Sue   90000
All of these options also work with multiple indices and/or multiple columns; the interface for this behavior is very intuitive. For more information on this, see the “Merge, Join, and Concatenate” section of the Pandas documentation. 
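As a minimal sketch of the multi-column case (with hypothetical DataFrames dfa and dfb), on can simply be given a list, and rows are joined only where all listed keys match:

dfa = pd.DataFrame({'key1': ['a', 'a', 'b'],
                    'key2': [1, 2, 1],
                    'lval': [10, 20, 30]})
dfb = pd.DataFrame({'key1': ['a', 'b', 'b'],
                    'key2': [1, 1, 2],
                    'rval': [100, 200, 300]})

# Only the ('a', 1) and ('b', 1) key pairs appear in both inputs
pd.merge(dfa, dfb, on=['key1', 'key2'])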

Specifying Set Arithmetic for Joins 
In all the preceding examples we have glossed over one important consideration in performing a join: the type of set arithmetic used in the join. This comes up when a value appears in one key column but not the other. Consider this example: 
In [25]: df6 = pd.DataFrame({'name': ['Peter', 'Paul', 'Mary'],
   ...:                      'food': ['fish', 'beans', 'bread']},
   ...:                     columns=['name', 'food'])
In [26]: df7 = pd.DataFrame({'name': ['Mary', 'Joseph'],
   ...:                      'drink': ['wine', 'beer']},
   ...:                     columns=['name', 'drink'])
In [27]: print(df6); print(df7); print(pd.merge(df6, df7))
    name   food
0  Peter   fish
1   Paul  beans
2   Mary  bread
     name drink
0    Mary  wine
1  Joseph  beer
   name   food drink
0  Mary  bread  wine
Here we have merged two datasets that have only a single “name” entry in common: Mary. By default, the result contains the intersection of the two sets of inputs; this is what is known as an inner join. We can specify this explicitly using the how keyword, which defaults to 'inner':
In [28]: pd.merge(df6, df7, how='inner')
Out[28]:
   name   food drink
0  Mary  bread  wine
Other options for the how keyword are 'outer', 'left', and 'right'. An outer join returns a join over the union of the input columns, and fills in all missing values with NAs: 
In [29]: pd.merge(df6, df7, how='outer')
Out[29]:
     name   food drink
0   Peter   fish   NaN
1    Paul  beans   NaN
2    Mary  bread  wine
3  Joseph    NaN  beer
The left join and right join return joins over the left entries and right entries, respectively. For example:
In [30]: pd.merge(df6, df7, how='left')
Out[30]:
    name   food drink
0  Peter   fish   NaN
1   Paul  beans   NaN
2   Mary  bread  wine
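The corresponding right join (not shown above) keeps all rows from df7 instead:

pd.merge(df6, df7, how='right')
#      name   food drink
# 0    Mary  bread  wine
# 1  Joseph    NaN  beer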
Overlapping Column Names: The suffixes Keyword 
Finally, you may end up in a case where your two input DataFrames have conflicting column names. Consider this example: 
In [31]: df8 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'], 'rank': [1, 2, 3, 4]})
In [32]: df9 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'], 'rank': [3, 1, 4, 2]})
In [33]: pd.merge(df8, df9, on='name')
Out[33]:
   name  rank_x  rank_y
0   Bob       1       3
1  Jake       2       1
2  Lisa       3       4
3   Sue       4       2
Because the output would have two conflicting column names, the merge function automatically appends a suffix _x or _y to make the output columns unique. If these defaults are inappropriate, it is possible to specify a custom suffix using the suffixes keyword: 
In [34]: pd.merge(df8, df9, on='name', suffixes=['_L', '_R'])
Out[34]:
   name  rank_L  rank_R
0   Bob       1       3
1  Jake       2       1
2  Lisa       3       4
3   Sue       4       2
These suffixes work in any of the possible join patterns, and work also if there are multiple overlapping columns. For more information on these patterns, see “Aggregation and Grouping” on page 158, where we dive a bit deeper into relational algebra. Also see the “Merge, Join, and Concatenate” section of the Pandas documentation for further discussion of these topics. 
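A quick sketch of the multiple-overlap case (with hypothetical data): every overlapping column receives the same suffix pair:

dfl = pd.DataFrame({'name': ['Bob', 'Jake'], 'rank': [1, 2], 'score': [90, 80]})
dfr = pd.DataFrame({'name': ['Bob', 'Jake'], 'rank': [2, 1], 'score': [70, 60]})

# Both 'rank' and 'score' overlap, so both are suffixed:
# name, rank_L, score_L, rank_R, score_R
pd.merge(dfl, dfr, on='name', suffixes=['_L', '_R'])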

Example: US States Data 
Merge and join operations come up most often when one is combining data from different sources. Here we will consider an example of some data about US states and their populations. The data files can be found at http://github.com/jakevdp/data-USstates/: 
# Following are shell commands to download the data
curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-population.csv
curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-areas.csv
curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-abbrevs.csv
Let’s take a look at the three datasets, using the Pandas read_csv() function: 
In [35]: pop = pd.read_csv('state-population.csv')
In [36]: areas = pd.read_csv('state-areas.csv')
In [37]: abbrevs = pd.read_csv('state-abbrevs.csv')
In [38]: print(pop.head()); print(areas.head()); print(abbrevs.head())
  state/region     ages  year  population
0           AL  under18  2012   1117489.0
1           AL    total  2012   4817528.0
2           AL  under18  2010   1130966.0
3           AL    total  2010   4785570.0
4           AL  under18  2011   1125763.0
        state  area (sq. mi)
0     Alabama          52423
1      Alaska         656425
2     Arizona         114006
3    Arkansas          53182
4  California         163707
        state abbreviation
0     Alabama           AL
1      Alaska           AK
2     Arizona           AZ
3    Arkansas           AR
4  California           CA
Given this information, say we want to compute a relatively straightforward result: rank US states and territories by their 2010 population density. We clearly have the data here to find this result, but we’ll have to combine the datasets to get it. 

We’ll start with a many-to-one merge that will give us the full state name within the population DataFrame. We want to merge based on the state/region column of pop, and the abbreviation column of abbrevs. We’ll use how='outer' to make sure no data is thrown away due to mismatched labels. 
In [40]: merged = pd.merge(pop, abbrevs, how='outer', left_on='state/region', right_on='abbreviation')
In [41]: merged = merged.drop('abbreviation', axis=1)  # drop duplicate info
In [42]: merged.head()
Out[42]:
  state/region     ages  year  population    state
0           AL  under18  2012   1117489.0  Alabama
1           AL    total  2012   4817528.0  Alabama
2           AL  under18  2010   1130966.0  Alabama
3           AL    total  2010   4785570.0  Alabama
4           AL  under18  2011   1125763.0  Alabama
Let’s double-check whether there were any mismatches here, which we can do by looking for rows with nulls: 
In [44]: merged.isnull().any()
Out[44]:
state/region    False
ages            False
year            False
population       True
state            True
dtype: bool
Some of the population info is null; let’s figure out which these are! 
In [46]: merged[merged['population'].isnull()].head()
Out[46]:
     state/region     ages  year  population state
2448           PR  under18  1990         NaN   NaN
2449           PR    total  1990         NaN   NaN
2450           PR    total  1991         NaN   NaN
2451           PR  under18  1991         NaN   NaN
2452           PR    total  1993         NaN   NaN
It appears that all the null population values are from Puerto Rico prior to the year 2000; this is likely due to this data not being available from the original source. 

More importantly, we see also that some of the new state entries are also null, which means that there was no corresponding entry in the abbrevs key! Let’s figure out which regions lack this match: 
In [47]: merged.loc[merged['state'].isnull(), 'state/region'].unique()
Out[47]: array(['PR', 'USA'], dtype=object)
We can quickly infer the issue: our population data includes entries for Puerto Rico (PR) and the United States as a whole (USA), while these entries do not appear in the state abbreviation key. We can fix these quickly by filling in appropriate entries: 
In [48]: merged.loc[merged['state/region'] == 'PR', 'state'] = 'Puerto Rico'
In [49]: merged.loc[merged['state/region'] == 'USA', 'state'] = 'United States'
In [50]: merged.isnull().any()
Out[50]:
state/region    False
ages            False
year            False
population       True
state           False
dtype: bool
No more nulls in the state column: we’re all set! 

Now we can merge the result with the area data using a similar procedure. Examining our results, we will want to join on the state column in both: 
In [51]: final = pd.merge(merged, areas, on='state', how='left')
In [52]: final.head()
Out[52]:
  state/region     ages  year  population    state  area (sq. mi)
0           AL  under18  2012   1117489.0  Alabama        52423.0
1           AL    total  2012   4817528.0  Alabama        52423.0
2           AL  under18  2010   1130966.0  Alabama        52423.0
3           AL    total  2010   4785570.0  Alabama        52423.0
4           AL  under18  2011   1125763.0  Alabama        52423.0
Again, let’s check for nulls to see if there were any mismatches: 
In [53]: final.isnull().any()
Out[53]:
state/region     False
ages             False
year             False
population        True
state            False
area (sq. mi)     True
dtype: bool
There are nulls in the area column; we can take a look to see which regions were ignored here: 
In [56]: final['state'][final['area (sq. mi)'].isnull()].unique()
Out[56]: array(['United States'], dtype=object)
We see that our areas DataFrame does not contain the area of the United States as a whole. We could insert the appropriate value (using the sum of all state areas, for instance), but in this case we’ll just drop the null values because the population density of the entire United States is not relevant to our current discussion:
In [57]: final.dropna(inplace=True)

In [58]: final.head()
Out[58]:
  state/region     ages  year  population    state  area (sq. mi)
0           AL  under18  2012   1117489.0  Alabama        52423.0
1           AL    total  2012   4817528.0  Alabama        52423.0
2           AL  under18  2010   1130966.0  Alabama        52423.0
3           AL    total  2010   4785570.0  Alabama        52423.0
4           AL  under18  2011   1125763.0  Alabama        52423.0
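Had we instead wanted to keep the national rows, one rough sketch (an alternative to the dropna() above, not the approach taken here) would be to fill in the missing area before dropping anything:

# Fill the national area with the sum of the individual areas
usa_area = areas['area (sq. mi)'].sum()
final.loc[final['state'] == 'United States', 'area (sq. mi)'] = usa_area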
Now we have all the data we need. To answer the question of interest, let’s first select the portion of the data corresponding with the year 2010, and the total population. We’ll use the query() function to do this quickly (this requires the numexpr package to be installed; see “High-Performance Pandas: eval() and query()” on page 208):
In [61]: data2010 = final.query("year == 2010 & ages == 'total'")
In [62]: data2010.head()
Out[62]:
    state/region   ages  year  population       state  area (sq. mi)
3             AL  total  2010   4785570.0     Alabama        52423.0
91            AK  total  2010    713868.0      Alaska       656425.0
101           AZ  total  2010   6408790.0     Arizona       114006.0
189           AR  total  2010   2922280.0    Arkansas        53182.0
197           CA  total  2010  37333601.0  California       163707.0
Now let’s compute the population density and display it in order. We’ll start by reindexing our data on the state, and then compute the result: 
In [63]: data2010.set_index('state', inplace=True)
In [64]: data2010.head()
Out[64]:
           state/region   ages  year  population  area (sq. mi)
state
Alabama              AL  total  2010   4785570.0        52423.0
Alaska               AK  total  2010    713868.0       656425.0
Arizona              AZ  total  2010   6408790.0       114006.0
Arkansas             AR  total  2010   2922280.0        53182.0
California           CA  total  2010  37333601.0       163707.0

In [65]: density = data2010['population'] / data2010['area (sq. mi)']
In [66]: density.sort_values(ascending=False, inplace=True)
In [67]: density.head()
Out[67]:
state
District of Columbia    8898.897059
Puerto Rico             1058.665149
New Jersey              1009.253268
Rhode Island             681.339159
Connecticut              645.600649
dtype: float64
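A small caveat: because data2010 is carved out of final, the set_index(..., inplace=True) call above can trigger pandas’ SettingWithCopyWarning in some versions. A minimal sketch of the usual workaround is to take an explicit copy first:

# Taking a copy makes data2010 independent of final
data2010 = final.query("year == 2010 & ages == 'total'").copy()
data2010.set_index('state', inplace=True)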
The result is a ranking of US states plus Washington, DC, and Puerto Rico in order of their 2010 population density, in residents per square mile. We can see that by far the densest region in this dataset is Washington, DC (i.e., the District of Columbia); among states, the densest is New Jersey. 

We can also check the end of the list: 
In [68]: density.tail()
Out[68]:
state
South Dakota    10.583512
North Dakota     9.537565
Montana          6.736171
Wyoming          5.768079
Alaska           1.087509
dtype: float64
We see that the least dense state, by far, is Alaska, averaging slightly over one resident per square mile. 

This type of messy data merging is a common task when one is trying to answer questions using real-world data sources. I hope that this example has given you an idea of the ways you can combine tools we’ve covered in order to gain insight from your data!
