Codec.Compression.Zlib.Internal

Portability: portable (H98 + FFI)
Stability:   provisional
Maintainer:  duncan@haskell.org

Description

Pure stream-based interface to the lower-level zlib wrapper.

Synopsis
Compression
|
|
|
Compress a data stream.
There are no expected error conditions: all input data streams are valid. It
is possible for unexpected errors to occur, such as running out of memory or
finding the wrong version of the zlib C library; these are thrown as
exceptions.
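As an illustration, here is a minimal round-trip sketch. It uses the higher-level Codec.Compression.Zlib wrapper from the same package rather than this internal interface, which is an assumption about how the package is typically consumed.

  import qualified Codec.Compression.Zlib as Zlib
  import qualified Data.ByteString.Lazy.Char8 as L

  main :: IO ()
  main = do
    let original   = L.pack (concat (replicate 20 "a repetitive example payload "))
        compressed = Zlib.compress original
    -- Compression always succeeds; only resource or linkage problems raise exceptions.
    print (L.length original, L.length compressed)
    print (Zlib.decompress compressed == original)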
|
|
|
The full set of parameters for compression. The defaults are
defaultCompressParams.
The compressBufferSize is the size of the first output buffer, containing
the compressed data. If you know an approximate upper bound on the size of
the compressed data then setting this parameter can save memory. The default
compression output buffer size is 16k. If your estimate is wrong it does
not matter too much: the default buffer size will be used for the remaining
chunks.
| Constructors | |
|
|
|
The default set of parameters for compression. This is typically used with
the compressWith function with specific parameters overridden.
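For example, a caller that knows its payloads compress to well under 4k might shrink the initial buffer. This is only a sketch: compressBufferSize is documented above, while the use of record-update syntax on defaultCompressParams and the compressLevel field name are assumptions about the CompressParams record.

  import Codec.Compression.Zlib
  import qualified Data.ByteString.Lazy as L

  compressSmall :: L.ByteString -> L.ByteString
  compressSmall =
    compressWith defaultCompressParams
      { compressBufferSize = 4096            -- approximate upper bound on the output
      , compressLevel      = BestCompression -- assumed field name for the CompressionLevel
      }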
|
|
Decompression
|
|
|
Decompress a data stream.
It will throw an exception if any error is encountered in the input data. If
you need more control over error handling then use decompressWithErrors.
|
|
|
The full set of parameters for decompression. The defaults are
defaultDecompressParams.
The decompressBufferSize is the size of the first output buffer,
containing the uncompressed data. If you know an exact or approximate upper
bound on the size of the decompressed data then setting this parameter can
save memory. The default decompression output buffer size is 32k. If your
estimate is wrong it does not matter too much: the default buffer size will
be used for the remaining chunks.
One particular use case for setting the decompressBufferSize is if you
know the exact size of the decompressed data and want to produce a strict
Data.ByteString.ByteString. The compression and decompression functions
use lazy Data.ByteString.Lazy.ByteStrings, but if you set the
decompressBufferSize correctly then you can generate a lazy
Data.ByteString.Lazy.ByteString with exactly one chunk, which can be
converted to a strict Data.ByteString.ByteString in O(1) time using
Data.ByteString.concat . Data.ByteString.Lazy.toChunks.
| Constructors | |
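The single-chunk trick described above might look like the following sketch; decompressBufferSize is documented here, while decompressWith and the conversion pipeline are assumptions based on the surrounding API.

  import Codec.Compression.Zlib
  import qualified Data.ByteString      as S
  import qualified Data.ByteString.Lazy as L

  -- Decompress into a single chunk of exactly the known size, then
  -- convert to a strict ByteString in O(1).
  decompressStrict :: Int -> L.ByteString -> S.ByteString
  decompressStrict exactSize =
    S.concat . L.toChunks .
    decompressWith defaultDecompressParams { decompressBufferSize = exactSize }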
|
|
|
The default set of parameters for decompression. This is typically used with
the decompressWith function with specific parameters overridden.
|
|
The compression parameter types
|
|
|
The format used for compression or decompression. There are three
variations.
| Constructors | |
|
|
|
The gzip format uses a header with a checksum and some optional meta-data
about the compressed file. It is intended primarily for compressing
individual files but is also sometimes used for network protocols such as
HTTP. The format is described in detail in RFC #1952
http://www.ietf.org/rfc/rfc1952.txt
|
|
|
The zlib format uses a minimal header with a checksum but no other
meta-data. It is especially designed for use in network protocols. The
format is described in detail in RFC #1950
http://www.ietf.org/rfc/rfc1950.txt
|
|
|
The 'raw' format is just the compressed data stream without any
additional header, meta-data or data-integrity checksum. The format is
described in detail in RFC #1951 http://www.ietf.org/rfc/rfc1951.txt
|
|
|
This is not a format as such. It enables zlib or gzip decoding with
automatic header detection. This only makes sense for decompression.
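In practice, choosing among these formats usually means picking the corresponding high-level module from the same package; the module names in the sketch below are an assumption about the package layout, and the auto-detecting variant is not shown since, as noted, it applies only to decompression.

  import qualified Codec.Compression.GZip     as GZip  -- gzip format (RFC 1952)
  import qualified Codec.Compression.Zlib     as Zlib  -- zlib format (RFC 1950)
  import qualified Codec.Compression.Zlib.Raw as Raw   -- raw deflate stream (RFC 1951)
  import qualified Data.ByteString.Lazy as L

  -- Each format round-trips with its own compress/decompress pair.
  roundTrips :: L.ByteString -> [Bool]
  roundTrips bs =
    [ GZip.decompress (GZip.compress bs) == bs
    , Zlib.decompress (Zlib.compress bs) == bs
    , Raw.decompress  (Raw.compress  bs) == bs
    ]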
|
|
|
The compression level parameter controls the amount of compression. This
is a trade-off between the amount of compression and the time required to do
the compression.
Constructors:
  DefaultCompression
  NoCompression
  BestSpeed
  BestCompression
  CompressionLevel Int
|
|
|
|
The default compression level is 6 (that is, biased towards higher
compression at the expense of speed).
|
|
|
No compression, just a block copy.
|
|
|
The fastest compression method (less compression).
|
|
|
The slowest compression method (best compression).
|
|
|
A specific compression level between 0 and 9.
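As a rough illustration of the trade-off, the sketch below measures the compressed size at different levels. The constructors are those listed above; the compressLevel field name is an assumption about the CompressParams record.

  import Codec.Compression.Zlib
  import qualified Data.ByteString.Lazy as L
  import Data.Int (Int64)

  compressedSizeAt :: CompressionLevel -> L.ByteString -> Int64
  compressedSizeAt level =
    L.length . compressWith defaultCompressParams { compressLevel = level }

  -- e.g. map (\lvl -> compressedSizeAt lvl input)
  --          [NoCompression, BestSpeed, DefaultCompression, BestCompression]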
|
|
|
The compression method.
| Constructors | |
|
|
|
'Deflate' is the only method supported in this version of zlib.
Indeed it is likely to be the only method that ever will be supported.
|
|
|
This specifies the size of the compression window. Larger values of this
parameter result in better compression at the expense of higher memory
usage.
The compression window size is two raised to the power of the window bits.
The window bits must be in the range 8..15, which corresponds to
compression window sizes of 256 bytes to 32KB. The default is 15, which is
also the maximum size.
The total amount of memory used depends on the window bits and the
MemoryLevel. See MemoryLevel for the details.
Constructors:
  WindowBits Int
  DefaultWindowBits
|
|
|
|
The default WindowBits is 15 which is also the maximum size.
|
|
|
A specific compression window size, specified in bits, in the range 8..15.
|
|
|
The MemoryLevel parameter specifies how much memory should be allocated
for the internal compression state. It is a trade-off between memory usage,
compression ratio and compression speed. Using more memory allows faster
compression and a better compression ratio.
The total amount of memory used for compression depends on the WindowBits
and the MemoryLevel. For decompression it depends only on the
WindowBits. The totals are given by the functions:
compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
decompressTotal windowBits = 2^windowBits
For example, compression with the default windowBits = 15 and
memLevel = 8 uses 256KB, so a network server with 100 concurrent compressed
streams would use about 25MB. The memory per stream can be halved (at the
cost of somewhat degraded and slower compression) by reducing the
windowBits and memLevel by one.
Decompression takes less memory; the default windowBits = 15 corresponds
to just 32KB. A worked example of these formulas follows the constructor
descriptions below.
Constructors:
  DefaultMemoryLevel
  MinMemoryLevel
  MaxMemoryLevel
  MemoryLevel Int
|
|
|
|
The default memory level. (Equivalent to MemoryLevel 8.)
|
|
|
Use minimum memory. This is slow and reduces the compression ratio.
(Equivalent to MemoryLevel 1.)
|
|
|
Use maximum memory for optimal compression speed.
(Equivalent to MemoryLevel 9.)
|
|
|
A specific level in the range 1..9.
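The formulas above can be written out directly; the following is just the documentation's arithmetic made executable.

  -- Memory totals in bytes, exactly as given by the formulas above.
  compressTotal :: Int -> Int -> Int
  compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel

  decompressTotal :: Int -> Int
  decompressTotal windowBits = 2^windowBits

  -- compressTotal 15 8  == 262144  (256KB, the default quoted above)
  -- compressTotal 14 7  == 131072  (128KB, i.e. halved)
  -- decompressTotal 15  ==  32768  (32KB)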
|
|
data CompressionStrategy
|
The strategy parameter is used to tune the compression algorithm.
The strategy parameter only affects the compression ratio; it does not
affect the correctness of the compressed output, even if it is not set
appropriately.
Constructors:
  DefaultStrategy
  Filtered
  HuffmanOnly
|
|
|
|
Use this default compression strategy for normal data.
|
|
|
Use the filtered compression strategy for data produced by a filter (or
predictor). Filtered data consists mostly of small values with a somewhat
random distribution. In this case, the compression algorithm is tuned to
compress them better. The effect of this strategy is to force more Huffman
coding and less string matching; it is somewhat intermediate between
DefaultStrategy and HuffmanOnly.
|
|
|
Use the Huffman-only compression strategy to force Huffman encoding only
(no string match).
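For instance, a caller compressing the output of a delta filter might select the Filtered strategy. This is a sketch; the compressStrategy field name is an assumption about the CompressParams record.

  import Codec.Compression.Zlib
  import qualified Data.ByteString.Lazy as L

  compressFiltered :: L.ByteString -> L.ByteString
  compressFiltered =
    compressWith defaultCompressParams { compressStrategy = Filtered }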
|
|
Low-level API to get explicit error reports
|
|
|
Like decompress but returns a DecompressStream data structure that
contains an explicit representation of the error conditions that one may
encounter when decompressing.
Note that in addition to errors in the input data, it is possible for other
unexpected errors to occur, such as running out of memory or finding the
wrong version of the zlib C library; these are still thrown as exceptions
(because representing them as data would make this function impure).
|
|
|
A sequence of chunks of data produced from decompression.
The difference from a simple list is that it contains a representation of
errors as data rather than as exceptions. This allows you to handle error
conditions explicitly. A sketch of consuming this stream appears after the
error constructors below.
| Constructors | |
|
|
|
The possible error cases when decompressing a stream.
Constructors:
  TruncatedInput
    The compressed data stream ended prematurely. This may happen if the
    input data stream was truncated.
  DictionaryRequired
    It is possible to do zlib compression with a custom dictionary. This
    allows slightly higher compression ratios for short files. However, such
    compressed streams require the same dictionary when decompressing. This
    zlib binding does not currently support custom dictionaries. This error
    is for when we encounter a compressed stream that needs a dictionary.
  DataError
    If the compressed data stream is corrupted in any way then you will
    get this error, for example if the input data just isn't a compressed
    zlib data stream. In particular, if the data checksum turns out to be
    wrong then you will get all the decompressed data but this error at the
    end, instead of the normal successful StreamEnd.
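A sketch of consuming this stream explicitly follows. It assumes a shape common to contemporary versions of this module: decompressWithErrors takes the Format and DecompressParams before the input, and the stream constructors are StreamChunk (a strict chunk plus the rest of the stream), StreamEnd, and StreamError carrying one of the error cases above together with a message string. Only StreamEnd and the error cases are confirmed by this documentation; the rest is an assumption.

  import Codec.Compression.Zlib.Internal
  import qualified Data.ByteString.Lazy as L

  -- Walk the stream, collecting chunks until the end or the first error.
  toEither :: DecompressStream -> Either (DecompressError, String) L.ByteString
  toEither = go []
    where
      go chunks (StreamChunk c rest)  = go (c : chunks) rest
      go chunks StreamEnd             = Right (L.fromChunks (reverse chunks))
      go _      (StreamError err msg) = Left (err, msg)

  -- e.g. toEither (decompressWithErrors Zlib defaultDecompressParams input)
  -- (the Zlib constructor name for the Format argument is also an assumption)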
|
|
|
|