Returns a Pipe that deflates (compresses) its input elements using a java.util.zip.Deflater with the parameters level, nowrap, and strategy.
level: the compression level (0-9)
nowrap: if true, then use GZIP-compatible compression
bufferSize: size of the internal buffer that is used by the compressor; the default size is 32 KB
strategy: the compression strategy -- see java.util.zip.Deflater for details
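Since the pipe delegates to java.util.zip.Deflater, these parameters map directly onto the plain-Java API. A minimal sketch of one compression pass under these settings (the helper name and buffer handling are illustrative, not the pipe's actual internals):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflateSketch {
    // Compress `input` with an explicitly configured Deflater, mirroring the
    // pipe's parameters: level (0-9), nowrap, bufferSize, and strategy.
    static byte[] deflate(byte[] input, int level, boolean nowrap,
                          int bufferSize, int strategy) {
        Deflater deflater = new Deflater(level, nowrap);
        deflater.setStrategy(strategy);        // e.g. Deflater.DEFAULT_STRATEGY
        deflater.setInput(input);
        deflater.finish();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[bufferSize];  // the "internal buffer" from the docs
        while (!deflater.finished()) {
            int n = deflater.deflate(buffer);
            out.write(buffer, 0, n);
        }
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = "hello hello hello ".repeat(50).getBytes();
        byte[] compressed = deflate(data, 9, false, 32 * 1024, Deflater.DEFAULT_STRATEGY);
        System.out.println(data.length + " -> " + compressed.length + " bytes");
    }
}
```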
Returns a pipe that incrementally decompresses input according to the GZIP format. Any errors in decompression will be sequenced as exceptions into the output stream. The implementation of this pipe delegates directly to GZIPInputStream. Despite this, decompression is still handled in a streaming and async fashion without any thread blockage. Under the surface, this is handled by enqueueing chunks into a special type of byte array InputStream which throws exceptions when exhausted rather than blocking. These signal exceptions are caught by the pipe and treated as an async suspension. Thus, there are no issues with arbitrarily-framed data and chunk boundaries. Also note that there is almost no performance impact from these exceptions, due to the way that the JVM handles throw/catch.
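The trick described above can be sketched, very loosely, as an InputStream over a queue of chunks that throws a sentinel exception instead of blocking when it runs dry. All names below are illustrative; the library's actual internals differ:

```java
import java.io.InputStream;
import java.util.ArrayDeque;

// Illustrative sketch only: an InputStream over enqueued byte-array chunks
// that throws a signal exception when exhausted, rather than blocking.
class ChunkQueueInputStream extends InputStream {
    // Unchecked so it can serve as pure control flow; the pipe would catch
    // this and translate it into an async suspension.
    static class ExhaustedSignal extends RuntimeException {}

    private final ArrayDeque<byte[]> chunks = new ArrayDeque<>();
    private byte[] current = new byte[0];
    private int pos = 0;

    void enqueue(byte[] chunk) {
        chunks.addLast(chunk);
    }

    @Override
    public int read() {
        while (pos >= current.length) {
            byte[] next = chunks.pollFirst();
            if (next == null) throw new ExhaustedSignal(); // signal, don't block
            current = next;
            pos = 0;
        }
        return current[pos++] & 0xFF;
    }
}
```

When the decompressor reads from such a stream and the signal escapes, the caller can catch it, wait for the next chunk to arrive, enqueue it, and resume reading; no thread ever parks.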
The chunk size here is actually really important. If you set it to be too small, then there will be insufficient buffer space for GZIPInputStream to read the GZIP header preamble. This can result in repeated, non-progressing async suspensions. This case is caught internally and will be raised as an exception (NonProgressiveDecompressionException) within the output stream.
Under normal circumstances, you shouldn't have to worry about this. Just, uh, don't set the buffer size to something tiny. Matching the largest chunk size of the input stream, or roughly 8 KB (whichever is larger), is a good rule of thumb.
The bounding size of the input buffer. This should roughly match the size of the largest chunk in the input stream. The chunk size of the output stream will be double this value.
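Stripped of the async machinery, the decompression itself is just GZIPInputStream. A minimal blocking sketch of the same delegation (helper names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GunzipSketch {
    // Decompress GZIP bytes by delegating to GZIPInputStream, reading in
    // bufferSize-sized chunks (the same buffer the caveats above are about).
    static byte[] gunzip(byte[] gzipped, int bufferSize) {
        try (GZIPInputStream in =
                 new GZIPInputStream(new ByteArrayInputStream(gzipped), bufferSize)) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[bufferSize];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Tiny compression helper so the sketch round-trips on its own.
    static byte[] gzip(byte[] input) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
                gz.write(input);
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```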
Returns a pipe that incrementally compresses input into the GZIP format by delegating to java.util.zip.GZIPOutputStream. Output is compatible with the GNU gunzip utility, as well as really anything else that understands GZIP. Note, however, that the GZIP format is not "stable" in the sense that all compressors will produce identical output given identical input. Part of the header seeding is arbitrary and chosen by the compression implementation. For this reason, the exact bytes produced by this pipe will differ in insignificant ways from the exact bytes produced by a tool like GNU gzip.
The buffer size used to page data from the OutputStream back into chunks; this will be the chunk size of the output stream. Set it equal to the size of the largest chunk in the input stream: a smaller value will degrade performance by roughly 50-75%.
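A minimal blocking sketch of the same delegation to GZIPOutputStream (helper name illustrative). The two-byte GZIP magic at the front is fixed by the format, while later header fields (e.g. the modification time and OS byte) are implementation-chosen, which is part of why byte-for-byte output differs between tools:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {
    // Compress into the GZIP format by delegating to GZIPOutputStream,
    // the same class the pipe wraps.
    static byte[] gzip(byte[] input) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
                gz.write(input);
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] bytes = gzip("same input, possibly different bytes".getBytes());
        // Every GZIP stream starts with the magic bytes 0x1f 0x8b; the header
        // fields after that may vary by implementation.
        System.out.printf("%02x %02x ...%n", bytes[0], bytes[1]);
    }
}
```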
Returns a Pipe that inflates (decompresses) its input elements using a java.util.zip.Inflater with the parameter nowrap.
nowrap: if true, then support GZIP-compatible decompression
bufferSize: size of the internal buffer that is used by the decompressor; the default size is 32 KB
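As with deflate, the parameters map straight onto java.util.zip.Inflater. A round-trip sketch showing that nowrap must agree between compressor and decompressor (the helper name is illustrative; note the dummy-byte quirk documented on Inflater for the nowrap case):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflateSketch {
    // Deflate and then inflate `input`; nowrap must match on both sides,
    // otherwise the zlib header check fails.
    static byte[] roundTrip(byte[] input, boolean nowrap) {
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, nowrap);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        byte[] buffer = new byte[32 * 1024];   // 32 KB, the documented default
        while (!deflater.finished()) {
            compressed.write(buffer, 0, deflater.deflate(buffer));
        }
        deflater.end();
        if (nowrap) {
            compressed.write(0);  // Inflater javadoc: nowrap requires an extra dummy byte
        }

        Inflater inflater = new Inflater(nowrap);
        inflater.setInput(compressed.toByteArray());
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        try {
            while (!inflater.finished()) {
                restored.write(buffer, 0, inflater.inflate(buffer));
            }
        } catch (DataFormatException e) {
            throw new IllegalStateException(e);
        }
        inflater.end();
        return restored.toByteArray();
    }
}
```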
Provides utilities for compressing/decompressing byte streams.