Tuesday, June 19, 2018

[ Python 文章收集 ] List Comprehensions and Generator Expressions

Source From Here 
Preface 
Do you know the difference between the following syntax? 
  1. [x for x in range(5)]  
  2. (x for x in range(5))  
  3. tuple(range(5))  
Let's check it 
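Before reading on, the types alone already hint at the answer. A minimal check in a Python 3 interpreter:

```python
# Inspect the types produced by the three expressions
a = [x for x in range(5)]   # square brackets -> list comprehension
b = (x for x in range(5))   # parentheses -> generator expression
c = tuple(range(5))         # explicit tuple constructor

print(type(a).__name__)  # list
print(type(b).__name__)  # generator
print(type(c).__name__)  # tuple
```

So only the first and third produce a materialized sequence; the second produces a lazy generator, as the rest of this article explains.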

5 Facts About Lists 
First off, a short review of lists (arrays in other languages): 
* A list is a data type that represents a collection of elements. A simple list looks like this - [0, 1, 2, 3, 4, 5]
* Lists accept all possible types of data, and combinations of data, as their components:
>>> a = 12
>>> b = "this is text"
>>> my_list = [0, b, ['element', 'another element'], (1, 2, 3), a]
>>> print(my_list)
[0, 'this is text', ['element', 'another element'], (1, 2, 3), 12]

* Lists can be indexed. You can get access to any individual element or group of elements using the following syntax:
>>> a = ['red', 'green', 'blue']
>>> print(a[0])
red 

* Unlike strings, lists are mutable in Python. This means you can replace, add or remove elements.
* You can create a list using a for loop and a range() function.
>>> my_list = []
>>> for x in range(10):
...     my_list.append(x * 2)
...
>>> print(my_list)
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
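The mutability point above can be sketched like this (the names are illustrative):

```python
colors = ['red', 'green', 'blue']
colors[0] = 'yellow'        # replace an element
colors.append('black')      # add an element
colors.remove('green')      # remove an element
print(colors)               # ['yellow', 'blue', 'black']
```

Trying the same assignments on a string raises a TypeError, which is exactly the immutability contrast drawn above.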

What is List Comprehension? 
Often seen as part of functional programming in Python, list comprehensions let you create lists with for-loop logic in less code. Look at the implementation of the previous example using a list comprehension: 
>>> comp_list = [x * 2 for x in range(10)] 
>>> print(comp_list)
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

The above example is oversimplified to convey the idea of the syntax. The same result could be achieved simply with list(range(0, 19, 2)). However, you can use a more complex expression in the first part of the comprehension, or add a condition that filters the list. Something like this: 
>>> comp_list = [x ** 2 for x in range(7) if x % 2 == 0]
>>> print(comp_list)
[0, 4, 16, 36]

Another available option is to use a list comprehension to combine several lists and create a list of lists. At first glance the syntax may seem complicated; it helps to think of the lists as an outer and an inner sequence. This shows the power of list comprehensions when you want to create a list of lists by combining two existing lists. 
>>> nums = [1, 2, 3, 4, 5]
>>> letters = ['A', 'B', 'C', 'D', 'E']
>>> nums_letters = [[n, l] for n in nums for l in letters]
>>> print(nums_letters)
[[1, 'A'], [1, 'B'], [1, 'C'], [1, 'D'], [1, 'E'], [2, 'A'], [2, 'B'], ...

Let’s try it with text, or, to be precise, a string object. 
>>> iter_string = "some text"
>>> comp_list = [x for x in iter_string if x != " "]
>>> print(comp_list)
['s', 'o', 'm', 'e', 't', 'e', 'x', 't']

Comprehensions are not limited to lists. You can write dict and set comprehensions as well: 
>>> dict_comp = {x:chr(65+x) for x in range(1, 11)}
>>> type(dict_comp)
<class 'dict'>
>>> print(dict_comp)
{1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F', 6: 'G', 7: 'H', 8: 'I', 9: 'J', 10: 'K'}

>>> set_comp = {x ** 3 for x in range(10) if x % 2 == 0}
>>> type(set_comp)
<class 'set'>
>>> print(set_comp)
{0, 8, 64, 512, 216}

Difference Between Iterable and Iterator 
It will be easier to understand the concept of generators if you first get the idea of iterables and iterators. An iterable is a "sequence" of data that you can iterate over using a loop. The simplest example of an iterable is a list of integers - [1, 2, 3, 4, 5, 6, 7]. However, it’s possible to iterate over other types of data like strings, dicts, tuples, sets, etc. 

Basically, any object that has an __iter__() method can be used as an iterable. You can check this with the hasattr() function in the interpreter: 
>>> hasattr(str, '__iter__')
True
>>> hasattr(bool, '__iter__')
False
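Equivalently, you can just call iter() on an object and see whether it raises TypeError; bool has no __iter__, so it is not iterable:

```python
# iter() succeeds on iterables and raises TypeError otherwise
print(iter("abc"))       # strings are iterable -> a str_iterator
try:
    iter(True)           # bool is not iterable
except TypeError:
    print("bool is not iterable")
```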

The iterator protocol is at work whenever you iterate over a sequence of data. For example, when you use a for loop, the following happens in the background: 
* First, iter() is called on the object to convert it to an iterator object.
* Then next() is called on the iterator object to get the next element of the sequence.
* A StopIteration exception is raised when there are no elements left.

For example: 
>>> simple_list = [1, 2, 3]
>>> my_iterator = iter(simple_list)
>>> print(my_iterator)
<list_iterator object at 0x...>
>>> next(my_iterator)
1
>>> next(my_iterator)
2
>>> next(my_iterator)
3
>>> next(my_iterator)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
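The three steps listed above are exactly what a for loop does under the hood; here is a manual sketch using a while loop:

```python
simple_list = [1, 2, 3]
it = iter(simple_list)          # step 1: get an iterator from the iterable
while True:
    try:
        item = next(it)         # step 2: pull the next element
    except StopIteration:       # step 3: the sequence is exhausted
        break
    print(item)
```

This prints 1, 2, 3 - the same as `for item in simple_list: print(item)`.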

Generator Expressions 
In Python, generators provide a convenient way to implement the iterator protocol. A generator is an iterable created using a function with a yield statement. The main feature of a generator is that it evaluates its elements on demand. When you call a normal function, it terminates as soon as it encounters a return statement. In a function with a yield statement, the state of the function is “saved” after each call and is picked up the next time you call the generator function. e.g.: 
  1. def my_gen():  
  2.     for x in range(5):  
  3.         yield x  
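Calling my_gen() returns a generator object; each next() call resumes the function body at the yield:

```python
def my_gen():
    for x in range(5):
        yield x

g = my_gen()
print(next(g))   # 0
print(next(g))   # 1  -- the loop state is preserved between calls
print(list(g))   # [2, 3, 4] -- list() drains whatever remains
```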
A generator expression allows creating a generator on the fly without the yield keyword. However, it doesn’t have the full power of a generator created with a yield function. The syntax and concept are similar to list comprehensions: 
>>> gen_exp = (x ** 2 for x in range(10) if x % 2 == 0) 
>>> for x in gen_exp:
...     print(x)
...
0
4
16
36
64

In terms of syntax, the only difference is that you use parentheses instead of square brackets. However, the type of data returned by list comprehensions and generator expressions differs: 
>>> list_comp = [x ** 2 for x in range(10) if x % 2 == 0]
>>> gen_exp = (x ** 2 for x in range(10) if x % 2 == 0)
>>> print(list_comp)
[0, 4, 16, 36, 64]
>>> print(gen_exp)
<generator object <genexpr> at 0x7f600131c410>

The main advantage of a generator over a list is that it takes much less memory. We can check how much memory each one consumes using the sys.getsizeof() function. 

Note: in Python 2 this example with range() can't actually show the size advantage, as range() there returns the whole list of elements in memory. In Python 3, however, the example works as intended, because range() returns a lazy range object. 
>>> from sys import getsizeof
>>> my_comp = [x * 5 for x in range(1000)] # a list object
>>> my_gen = (x * 5 for x in range(1000)) # a generator object
>>> getsizeof(my_comp)
9024 
>>> getsizeof(my_gen)
88

A generator yields one item at a time, so it is more memory-efficient than a list. When you iterate over a list, Python reserves memory for the whole list; a generator does not keep the whole sequence in memory and only “generates” the next element of the sequence on demand.
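One consequence of this on-demand evaluation: a generator can be consumed only once, while a list can be iterated over repeatedly:

```python
gen = (x * 5 for x in range(3))
print(list(gen))   # [0, 5, 10]
print(list(gen))   # []  -- the generator is already exhausted

lst = [x * 5 for x in range(3)]
print(lst, lst)    # the list can be reused as often as needed
```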

Friday, June 15, 2018

[ FP In Python ] Ch4. Higher-Order Functions

Preface 
In the last chapter we saw an iterator algebra that builds on the itertools module. In some ways, higher-order functions (often abbreviated as “HOFs”) provide similar building blocks to express complex concepts by combining simpler functions into new functions. In general, a higher-order function is simply a function that takes one or more functions as arguments and/or produces a function as a result. Many interesting abstractions are available here. They allow chaining and combining higher-order functions in a manner analogous to how we can combine functions in itertools to produce new iterables. 

A few useful higher-order functions are contained in the functools module, and a few others are built-ins. It is common to think of map(), filter(), and functools.reduce() as the most basic building blocks of higher-order functions, and most functional programming languages use these functions as their primitives (occasionally under other names). Almost as basic as map/filter/reduce as a building block is currying. In Python, currying is spelled partial(), and is contained in the functools module—this is a function that takes another function, along with zero or more arguments to pre-fill, and returns a function of fewer arguments that operates as the input function would when those arguments are passed to it. 

The built-in functions map() and filter() are equivalent to comprehensions— especially now that generator comprehensions are available—and most Python programmers find the comprehension versions more readable. For example, here are some (almost) equivalent pairs: 
  1. # Classic "FP-style"  
  2. transformed = map(transformation, iterator)  
  3. # Comprehension  
  4. transformed = (transformation(x) for x in iterator)  
  5. # Classic "FP-style"  
  6. filtered = filter(predicate, iterator)  
  7. # Comprehension  
  8. filtered = (x for x in iterator if predicate(x))  
The function functools.reduce() is very general, very powerful, and very subtle to use to its full power. It takes successive items of an iterable, and combines them in some way. The most common use case for reduce() is probably covered by the built-in sum(), which is a more compact spelling of: 
  1. import operator  
  2. from functools import reduce  
  3.   
  4. total = reduce(operator.add, it, 0)  
  5. # total = sum(it)  
It may or may not be obvious that map() and filter() are also special cases of reduce(). That is: 
>>> from functools import reduce
>>> add5 = lambda n: n+5
>>> reduce(lambda l, x: l+[add5(x)], range(10), [])
[5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
>>> # simpler: map(add5, range(10))
>>> isOdd = lambda n: n%2
>>> reduce(lambda l, x: l+[x] if isOdd(x) else l, range(10), [])
[1, 3, 5, 7, 9]
>>> # simpler: filter(isOdd, range(10))

These reduce() expressions are awkward, but they also illustrate how powerful the function is in its generality: anything that can be computed from a sequence of successive elements can (if awkwardly) be expressed as a reduction. 

There are a few common higher-order functions that are not among the “batteries included” with Python, but that are very easy to create as utilities (and are included with many third-party collections of functional programming tools). Different libraries—and other programming languages—may use different names for the utility functions I describe, but analogous capabilities are widespread (as are the names I choose). 

Utility Higher-Order Functions 
A handy utility is compose(). This is a function that takes a sequence of functions and returns a function that represents the application of each of these argument functions to a data argument: 
  1. def compose(*funcs):  
  2.     """Return a new function s.t.  
  3.     compose(f,g,...)(x) == f(g(...(x)))"""  
  4.     def inner(data, funcs=funcs):  
  5.         result = data  
  6.         for f in reversed(funcs):  
  7.             result = f(result)  
  8.         return result  
  9.     return inner  
Then you can test it this way: 
>>> times2 = lambda x: x*2
>>> minus3 = lambda x: x-3
>>> mod6 = lambda x: x%6
>>> f = compose(mod6, times2, minus3)
>>> all(f(i) == ((i-3)*2)%6 for i in range(10000))
True

For these one-line math operations (times2, minus3, etc.), we could have simply written the underlying math expression at least as easily; but if the composite calculations each involved branching, flow control, complex logic, etc., this would not be true. The built-in functions all() and any() are useful for asking whether a predicate holds of elements of an iterable. But it is also nice to be able to ask whether any/all of a collection of predicates hold for a particular data item in a composable way. We might implement these as: 
  1. all_pred = lambda item, *tests: all(p(item) for p in tests)  
  2. any_pred = lambda item, *tests: any(p(item) for p in tests)  
To show the use, let us make a few predicates: 
>>> import operator
>>> from functools import partial
>>> is_lt100 = partial(operator.ge, 100) # x <= 100?
>>> is_gt10 = partial(operator.le, 10) # x >= 10?
>>> from nums import is_prime # implemented elsewhere
>>> all_pred(71, is_lt100, is_gt10, is_prime)
True
>>> predicates = (is_lt100, is_gt10, is_prime)
>>> all_pred(107, *predicates)
False

If you are using Python 2.x, you may have to implement your own is_prime(). Below is one example implementation: 
  1. def is_prime(x):  
  2.     if x < 2:  
  3.         return False  
  4.     for n in range(2, int(x ** 0.5) + 1):  
  5.         if x % n == 0:  
  6.             return False  
  7.     return True  
The library toolz has what might be a more general version of this called juxt() that creates a function that calls several functions with the same arguments and returns a tuple of results. We could use that, for example, to do: 
>>> from toolz.functoolz import juxt
>>> juxt([is_lt100, is_gt10, is_prime])(71)
(True, True, True)

>>> all(juxt([is_lt100, is_gt10, is_prime])(71))
True

>>> juxt([is_lt100, is_gt10, is_prime])(107)
(False, True, True)

The utility higher-order functions shown here are just a small selection to illustrate composability. Look at a longer text on functional programming—or, for example, read the Haskell prelude—for many other ideas on useful utility higher-order functions. 

The operator Module 
As has been shown in a few of the examples, every operation that can be done with Python’s infix and prefix operators corresponds to a named function in the operator module. For places where you want to be able to pass a function performing the equivalent of some syntactic operation to some higher-order function, using the name from operator is faster and looks nicer than a corresponding lambda. For example: 
  1. # Compare ad hoc lambda with `operator` function  
  2. sum1 = reduce(lambda a, b: a+b, iterable, 0)  
  3. sum2 = reduce(operator.add, iterable, 0)  
  4. sum3 = sum(iterable) # The actual Pythonic way  
The functools Module 
The obvious place for Python to include higher-order functions is in the functools module, and indeed a few are in there. However, there are surprisingly few utility higher-order functions in that module. It has gained a few interesting ones over Python versions, but the core developers have a resistance to moving in the direction of a full functional programming language. On the other hand, as we have seen in a few examples above, many of the most useful higher-order functions only take a few lines (sometimes a single line) to write yourself. 

Apart from reduce(), which is discussed at the start of this chapter, the main facility in the module is partial(), which has also been mentioned. This operation is called “currying” (after Haskell Curry) in many languages. There are also some examples of using partial() discussed above. 
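As a small reminder of what partial() does, here is a self-contained sketch (the function names are illustrative):

```python
from functools import partial

def power(base, exp):
    return base ** exp

# Pre-fill the exp argument; square is now a function of base alone
square = partial(power, exp=2)
cube = partial(power, exp=3)

print(square(7))   # 49
print(cube(4))     # 64
```

Note that, strictly speaking, partial() performs partial application rather than full currying, but the book's usage follows the common Python convention.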

The remainder of the functools module is generally devoted to useful decorators, which is the topic of the next section. 

Decorators 
Although it is—by design—easy to forget, probably the most common use of higher-order functions in Python is as decorators. A decorator is just syntactic sugar for a higher-order function that takes a function as an argument and, if programmed correctly, returns a new function that is in some way an enhancement of the original function (or method, or class). Just to remind readers, these two snippets of code defining some_func and other_func are equivalent: 
  1. @enhanced  
  2. def some_func(*args):  
  3.     pass  
  4.   
  5. def other_func(*args):  
  6.     pass  
  7.   
  8. other_func = enhanced(other_func)  
Used with the decorator syntax, of course, the higher-order function is necessarily used at definition time for a function. For their intended purpose, this is usually when they are best applied. But the same decorator function can always, in principle, be used elsewhere in a program, for example in a more dynamic way (e.g., mapping a decorator function across a runtime-generated collection of other functions). That would be an unusual use case, however. 
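A minimal sketch of such an enhancing decorator (the name `enhanced` follows the snippet above; the announce-each-call behavior is purely illustrative):

```python
import functools

def enhanced(func):
    """Return a wrapper that announces each call before delegating."""
    @functools.wraps(func)              # preserve the original name/docstring
    def wrapper(*args, **kwargs):
        print("calling", func.__name__)
        return func(*args, **kwargs)
    return wrapper

@enhanced
def some_func(*args):
    return sum(args)

# some_func(1, 2, 3) prints "calling some_func" and returns 6
```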

Decorators are used in many places in the standard library and in common third-party libraries. In some ways they tie in with an idea that used to be called “aspect-oriented programming.” For example, the decorator function asyncio.coroutine is used to mark a function as a coroutine. Within functools the three important decorator functions are functools.lru_cache, functools.total_ordering, and functools.wraps. The first “memoizes” a function (i.e., it caches the arguments passed and returns stored values rather than performing new computation or I/O). The second makes it easier to write custom classes that want to use inequality operators. The last makes it easier to write new decorators. All of these are important and worthwhile purposes, but they are also more in the spirit of making the plumbing of Python programming easier in a general—almost syntactic—way rather than the composable higher-order functions this chapter focuses on. 
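The memoization provided by functools.lru_cache can be seen on a naive recursive Fibonacci (the function is an illustrative example, not from the book):

```python
from functools import lru_cache

@lru_cache(maxsize=None)     # cache every distinct argument seen
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(60))               # fast: each fib(k) is computed only once
print(fib.cache_info())      # shows the cache hits that made it fast
```

Without the decorator this call would take exponential time; with it, each subproblem is computed once and then served from the cache.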

Decorators in general are more useful when you want to poke into the guts of a function than when you want to treat it as a pluggable component in a flow or composition of functions; they are often used to mark the purpose or capabilities of a particular function. 

This report has given only a glimpse into some techniques for programming Python in a more functional style, and only some suggestions as to the advantages one often finds in aspiring in that direction. Programs that use functional programming are usually shorter than more traditional imperative ones, but much more importantly, they are also usually both more composable and more provably correct. A large class of difficult-to-debug errors in program logic is avoided by writing functions without side effects, and even more errors are avoided by writing small units of functionality whose operation can be understood and tested more reliably. 

A rich literature on functional programming as a general technique —often in particular languages which are not Python—is available and well respected. Studying one of many such classic books, some published by O’Reilly (including very nice video training on functional programming in Python), can give readers further insight into the nitty-gritty of functional programming techniques. Almost everything one might do in a more purely functional language can be done with very little adjustment in Python as well. 

Supplement 
List Comprehensions and Generator Expressions 
Python3 Tutorial - Decorators
