Friday, June 1, 2012

[Python Std Library] Data Compression/Archiving : zlib — Compression compatible with gzip


Source: here
Preface :
For applications that require data compression, the functions in this module allow compression and decompression, using the zlib library. The zlib library has its own home page at http://www.zlib.net. zlib’s functions have many options and often need to be used in a particular order. This documentation doesn’t attempt to cover all of the permutations; consult the zlib manual at http://www.zlib.net/manual.html for authoritative information.

For reading and writing .gz files see the gzip module.

Supported APIs :
The available exception and functions in this module are :
- exception zlib.error
Exception raised on compression and decompression errors.
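As a quick illustration (not from the original docs), feeding bytes that are not a valid zlib stream to decompress() raises this exception; the sample string below is arbitrary:
>>> import zlib
>>> try:
...     zlib.decompress('this is not a zlib stream')   # invalid input raises zlib.error
... except zlib.error:
...     print 'zlib.error raised'
...
zlib.error raised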

zlib.compress(string[, level])
Compresses the data in string, returning a string containing the compressed data. level is an integer from 1 to 9 controlling the level of compression; 1 is fastest and produces the least compression, 9 is slowest and produces the most. The default value is 6. Raises the error exception if any error occurs.
>>> import zlib
>>> data = 'HiJohn'
>>> cstr = zlib.compress(data)
>>> cstr
'x\x9c\xf3\xc8\xf4\xca\xcf\xc8\x03\x00\x07v\x02A'
>>> data2 = zlib.decompress(cstr)
>>> data2
'HiJohn'

zlib.compressobj([level])
Returns a compression object, to be used for compressing data streams that won’t fit into memory at once. level is an integer from 1 to 9 controlling the level of compression; 1 is fastest and produces the least compression, 9 is slowest and produces the most. The default value is 6.
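A minimal sketch of the streaming use case; the chunks list below is just a stand-in for pieces read one at a time from a large file or socket:
>>> import zlib
>>> chunks = ['chunk-one ', 'chunk-two ', 'chunk-three']  # stand-ins for data read piece by piece
>>> compressObj = zlib.compressobj(6)                     # level 6 matches the default
>>> compressed = ''.join(compressObj.compress(c) for c in chunks) + compressObj.flush()
>>> zlib.decompress(compressed)
'chunk-one chunk-two chunk-three'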

zlib.crc32(data[, value])
Computes a CRC (Cyclic Redundancy Check) checksum of data. If value is present, it is used as the starting value of the checksum; otherwise, a fixed default value is used. This allows computing a running checksum over the concatenation of several inputs.

Changed in version 2.6: The return value is in the range [-2**31, 2**31-1] regardless of platform.
Changed in version 3.0: The return value is unsigned and in the range [0, 2**32-1] regardless of platform.
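For example, checksumming two pieces with a running value gives the same result as checksumming their concatenation (the strings below are arbitrary):
>>> import zlib
>>> part1, part2 = 'Hello, ', 'World!'
>>> zlib.crc32(part1 + part2) == zlib.crc32(part2, zlib.crc32(part1))  # running checksum
True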

zlib.decompress(string[, wbits[, bufsize]])
Decompresses the data in string, returning a string containing the uncompressed data. The wbits parameter controls the size of the window buffer, and is discussed further below. If bufsize is given, it is used as the initial size of the output buffer. Raises the error exception if any error occurs.

The absolute value of wbits is the base two logarithm of the size of the history buffer (the “window size”) used when compressing data. Its absolute value should be between 8 and 15 for the most recent versions of the zlib library, larger values resulting in better compression at the expense of greater memory usage. When decompressing a stream, wbits must not be smaller than the size originally used to compress the stream; using too small a value will result in an exception. The default value is therefore the highest value, 15. When wbits is negative, the header and trailing checksum are suppressed and the data is treated as a raw deflate stream.

bufsize is the initial size of the buffer used to hold decompressed data. If more space is required, the buffer size will be increased as needed, so you don’t have to get this value exactly right; tuning it will only save a few calls to malloc(). The default size is 16384.
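A small sketch of how wbits must match the settings used at compression time; the compressobj() arguments (level, method, wbits) are standard, but the values chosen here are just for illustration:
>>> import zlib
>>> data = 'HiJohn' * 100
>>> co = zlib.compressobj(9, zlib.DEFLATED, -15)     # negative wbits -> raw stream, no header
>>> raw = co.compress(data) + co.flush()
>>> zlib.decompress(raw, -15) == data                # the same (negative) wbits must be used here
True
>>> zlib.decompress(zlib.compress(data)) == data     # default wbits = 15 expects the zlib header
True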

zlib.decompressobj([wbits])
Returns a decompression object, to be used for decompressing data streams that won’t fit into memory at once. The wbits parameter controls the size of the window buffer.
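A small sketch of feeding a compressed stream to a decompression object in fixed-size slices (the 64-byte slice size is arbitrary):
>>> import zlib
>>> original = 'HiJohn' * 1000
>>> compressed = zlib.compress(original)
>>> dcompObj = zlib.decompressobj()
>>> pieces = [dcompObj.decompress(compressed[i:i+64]) for i in range(0, len(compressed), 64)]
>>> ''.join(pieces) + dcompObj.flush() == original
True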


Compression objects support the following methods :
Compress.compress(string)
Compress string, returning a string containing compressed data for at least part of the data in string. This data should be concatenated to the output produced by any preceding calls to the compress() method. Some input may be kept in internal buffers for later processing.
>>> data = 'HiJohn'
>>> cstr = zlib.compress(data)
>>> compressObj = zlib.compressobj() # get a compression object
>>> cstr2 = compressObj.compress(data)
>>> cstr # the complete compressed data for data
'x\x9c\xf3\xc8\xf4\xca\xcf\xc8\x03\x00\x07v\x02A'
>>> cstr2 # only part of the compressed data; the rest is still in the compression object's internal buffer
'x\x9c'
>>> lastCstr = compressObj.flush() # flush out the remaining compressed data
>>> cstr2 + lastCstr # should equal cstr
'x\x9c\xf3\xc8\xf4\xca\xcf\xc8\x03\x00\x07v\x02A'

Compress.flush([mode])
All pending input is processed, and a string containing the remaining compressed output is returned. mode can be selected from the constants Z_SYNC_FLUSH, Z_FULL_FLUSH, or Z_FINISH, defaulting to Z_FINISH. Z_SYNC_FLUSH and Z_FULL_FLUSH allow compressing further strings of data, while Z_FINISH finishes the compressed stream and prevents compressing any more data.
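For instance, Z_SYNC_FLUSH keeps the stream open so more data can be compressed afterwards, while the final flush() (Z_FINISH) closes it; the two-part split below is arbitrary:
>>> import zlib
>>> compressObj = zlib.compressobj()
>>> part1 = compressObj.compress('Hi') + compressObj.flush(zlib.Z_SYNC_FLUSH)  # stream stays open
>>> part2 = compressObj.compress('John') + compressObj.flush()                 # Z_FINISH ends it
>>> zlib.decompress(part1 + part2)
'HiJohn'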

Compress.copy()
New in version 2.5.
Returns a copy of the compression object. This can be used to efficiently compress a set of data that share a common initial prefix.
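A rough sketch of the shared-prefix use case (the names prefix_out, a and b are made up for this example):
>>> import zlib
>>> co = zlib.compressobj()
>>> prefix_out = co.compress('common header ')   # compress the shared prefix once
>>> co2 = co.copy()                              # snapshot of the compressor state at this point
>>> a = prefix_out + co.compress('payload A') + co.flush()
>>> b = prefix_out + co2.compress('payload B') + co2.flush()
>>> zlib.decompress(a)
'common header payload A'
>>> zlib.decompress(b)
'common header payload B'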


Decompression objects support the following two attributes and methods :
Decompress.unused_data
A string which contains any bytes past the end of the compressed data...
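For example, any bytes appended after a complete compressed stream end up in unused_data (the trailing string below is arbitrary):
>>> import zlib
>>> dcompObj = zlib.decompressobj()
>>> dcompObj.decompress(zlib.compress('HiJohn') + 'TRAILING-BYTES')
'HiJohn'
>>> dcompObj.unused_data
'TRAILING-BYTES'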

Decompress.unconsumed_tail
A string that contains any data that was not consumed by the last decompress() call because it exceeded the limit for the uncompressed data buffer...

Decompress.decompress(string[, max_length])
Decompress string, returning a string containing the uncompressed data corresponding to at least part of the data in string. This data should be concatenated to the output produced by any preceding calls to the decompress() method. Some of the input data may be preserved in internal buffers for later processing.

If the optional parameter max_length is supplied then the return value will be no longer than max_length. This may mean that not all of the compressed input can be processed; and unconsumed data will be stored in the attribute unconsumed_tail. This string must be passed to a subsequent call to decompress() if decompression is to continue. If max_length is not supplied then the whole input is decompressed, and unconsumed_tail is an empty string.
>>> data = 'HiJohn'
>>> cstr = zlib.compress(data)
>>> dcompObj = zlib.decompressobj() # get a decompression object
>>> dcompObj.decompress(cstr, 2) # return at most 2 bytes of decompressed output
'Hi'
>>> dcompObj.unconsumed_tail # the not-yet-decompressed remainder is kept in unconsumed_tail
'\xcf\xc8\x03\x00\x07v\x02A'
>>> dcompObj.flush() # decompress the remaining compressed data
'John'

Decompress.flush([length])
All pending input is processed, and a string containing the remaining uncompressed output is returned. After calling flush(), the decompress() method cannot be called again; the only realistic action is to delete the object.

The optional parameter length sets the initial size of the output buffer.

Decompress.copy()
New in version 2.5.
Returns a copy of the decompression object. This can be used to save the state of the decompressor midway through the data stream in order to speed up random seeks into the stream at a future point.
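A minimal sketch of snapshotting the decompressor state midway through a stream (the 10-byte split point is arbitrary):
>>> import zlib
>>> compressed = zlib.compress('HiJohn' * 10)
>>> dcompObj = zlib.decompressobj()
>>> head = dcompObj.decompress(compressed[:10])        # consume the first part of the stream
>>> saved = dcompObj.copy()                            # remember the state at this point
>>> rest1 = dcompObj.decompress(compressed[10:])       # continue with the original object...
>>> rest2 = saved.decompress(compressed[10:])          # ...or replay the same tail from the snapshot
>>> head + rest1 == 'HiJohn' * 10 and rest1 == rest2
True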
