Update the libclamav library to version 1.0.0

2023-01-14 18:28:39 +08:00
parent b879ee0b2e
commit 45fe15f472
8531 changed files with 1222046 additions and 177272 deletions

clamav/libclamav_rust/.cargo/vendor/png/.cargo-checksum.json vendored Normal file

@@ -0,0 +1 @@
{"files":{"CHANGES.md":"7ebcc8710c3013c5eb8872dab64f202137b365a578dffc4bf928e1fe2bb98df3","Cargo.lock":"809d5d1476e1b004b3761361c31020b7eaca64efca5bf5b6b7643238cb7bb903","Cargo.toml":"fdcb43e90cf610502dd095274eafb84404863f7b6867dc7e31ecad63a13f6429","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"eaf40297c75da471f7cda1f3458e8d91b4b2ec866e609527a13acfa93b638652","README.md":"a87e2bc972068409cff0665241349c4eb72291b33d12ed97a09f97e9a2560655","benches/README.md":"0c60c3d497abdf6c032863aa47da41bc6bb4f5ff696d45dec0e6eb33459b14b0","benches/decoder.rs":"478bdea15bc0c73f45ade2434647a079793cacd224d97bf3aed9e3a00bca7fe2","examples/png-generate.rs":"4034c2d3b85aa6ee7a06ca9439c3d2c08f6b9c22db0faafe58fba1ac5aa3dcce","examples/pngcheck.rs":"c7fb601003ef14d00dc0be41c45e0cbc6dcc495e95711feeb825fb50cb778300","examples/show.rs":"f9809fb12726dad2d6a47779a62603ac1baacf0f0d7eec06836cb8217fabc0c8","src/chunk.rs":"eff04345e9af621ce51a9141f35657262ee1a891e724332a80ac40eec90a2c45","src/common.rs":"33308ae3c9e343d7cb529f8f12618e78368fda5483c7da1df324d2d8948fb663","src/decoder/mod.rs":"391a09037cebdd3d9a19ed007b4f1a418890ba76309b007cde23b06a18448674","src/decoder/stream.rs":"3812f9e3a6726e3809f6c27f90d6f76223ee64dd78a37f91af87347b80222df9","src/decoder/zlib.rs":"9e1c190f58b234d445f7afb3de091fc7b6cb4beaf8e317993adb44c1033e6ec1","src/encoder.rs":"4208655099b93a11740cbe91119b510138e05422900b15a3d239f271b6e00071","src/filter.rs":"8f84cd237f47617672786a3165a7962fcdc4d2f4eea7f751c6b11663baa1b011","src/lib.rs":"03fa1b4453c6640b2057b330808a288f01aea7b6e9b0f6fdbde4c0c8dcdc639a","src/srgb.rs":"da1609902064016853410633926d316b5289d4bbe1fa469b21f116c1c1b2c18e","src/text_metadata.rs":"e531277c3f1205239d21825aa3dacb8d828e3fa4c257c94af799b0c04d38a8d5","src/traits.rs":"79d357244e493f5174ca11873b0d5c443fd4a5e6e1f7c6df400a1767c5ad05b2","src/utils.rs":"482c30377d3d3a79a62fd436b1b6792ad5574b2d912c1d681981fc0e0f04b6ca"},"package":"5d708eaf860a19b19ce538740d2b4bdeeb8337fa53f7738455e706623ad5c638"}

clamav/libclamav_rust/.cargo/vendor/png/CHANGES.md vendored Normal file

@@ -0,0 +1,148 @@
## Unreleased
## 0.17.7
* Fixed handling of broken tRNS chunks.
* Updated to miniz_oxide 0.6.
## 0.17.6
* Added `Decoder::read_header_info` to query the information contained in the
PNG header (the basic decoding flow is sketched after this changelog).
* Switched to using the flate2 crate for encoding.
## 0.17.5
* Fixed a regression, introduced by chunk validation, that made the decoder
sensitive to the order of `gAMA`, `cHRM`, and `sRGB` chunks.
## 0.17.4
* Added `{Decoder,StreamDecoder}::set_ignore_text_chunk` to disable decoding of
ancillary text chunks during the decoding process (these chunks are decoded by default).
* Added duplicate chunk checks. The decoder now enforces that standard chunks
such as palette, gamma, … occur at most once as specified.
* Added `#[forbid(unsafe_code)]` again. This may come at a minor performance
cost when decoding ASCII text for now.
* Fixed a bug where decoding of large chunks (>32 kB) could produce an incorrect
result or fail the image decoding. As new chunk types were decoded, this
introduced regressions relative to previous versions.
## 0.17.3
* Fixed a bug where `Writer::finish` would not drop the underlying writer. This
would fail to flush and leak memory when using a buffered file writer.
* Calling `Writer::finish` will now eagerly flush the underlying writer,
returning any error that this operation may result in.
* Errors in inflate are now diagnosed with more details.
* The color and depth combination is now checked in stream decoder.
## 0.17.2
* Added support for encoding and decoding tEXt/zTXt/iTXt chunks.
* Added `Encoder::validate_sequence` to enable validation of the written frame
sequence, that is, whether the number of written images is consistent with the
animation state.
* Validation is now off by default. The basis of the new validation had been
introduced in 0.17 but this fixes some cases where this validation was too
aggressive compared to previous versions.
* Added `Writer::finish` to fully check the write of the end of an image
instead of silently ignoring potential errors in `Drop`.
* The `Writer::write_chunk` method now validates that the computed chunk length
does not overflow the limit set by PNG.
* Fix an issue where the library would panic or even abort the process when
`flush` or `write` of an underlying writer panicked, or in some other uses of
`StreamWriter`.
## 0.17.1
* Fix panic in adaptive filter method `sum_buffer`
## 0.17.0
* Increased MSRV to 1.46.0
* Rework output info usage
* Implement APNG encoding
* Improve ergonomics of encoder set_palette and set_trns methods
* Make Info struct non-exhaustive
* Make encoder a core feature
* Default Transformations to Identity
* Add Adaptive filtering method for encoding
* Fix SCREAM_CASE on ColorType variants
* Forbid unsafe code
## 0.16.7
* Added `Encoder::set_trns` to register a transparency table to be written.
## 0.16.6
* Fixed silent integer overflows in buffer size calculation, resulting in
panics from assertions and out-of-bounds accesses when actually decoding.
This improves the stability of 32-bit and 16-bit targets and makes decoding
run as stably as on 64-bit targets.
* Reject invalid color/depth combinations. Some would lead to mismatched output
buffer size and panics during decoding.
* Add `Clone` impl for `Info` struct.
## 0.16.5
* Decoding of APNG subframes is now officially supported and specified. Note
that dispose ops and positioning in the image need to be done by the caller.
* Added encoding of indexed data.
* Switched to `miniz_oxide` for decompressing image data, with 30%-50% speedup
in common cases and up to 200% in special ones.
* Fixed the decoder to accept only images with consecutive IDAT chunks, ruling out data loss.
## 0.16.4
* The fdAT frames are no longer inspected when the main image is read. This
would previously be the case for non-interlaced images and could lead to an
incorrect failure, e.g. an error of the form `"invalid filter method"`.
* Fixed validation of the last IDAT chunk's checksum, which was sometimes ignored.
* Prevent encoding color/bit-depth combinations forbidden by the specification.
* The fixes for APNG/fdAT enable further implementation. The _next_ release is
expected to officially support APNG.
## 0.16.3
* Fix encoding with filtering methods Up, Avg, Paeth
* Optimize decoding throughput by up to +30%
## 0.16.2
* Added method constructing an owned stream encoder.
## 0.16.1
* Addressed files bloating the packed crate
## 0.16.0
* Fix a bug compressing images with deflate
* Address use of deprecated error interfaces
## 0.15.3
* Fix panic while trying to encode empty images. Such images are no longer
accepted and produce an error when `write_header` is called before any data
has been written. The specification does not permit empty images.
## 0.15.2
* Fix `EXPAND` transformation to leave bit depths above 8 unchanged
## 0.15.1
* Fix encoding writing invalid chunks. Images written can be corrected: see
https://github.com/image-rs/image/issues/1074 for a recovery.
* Fix a panic in bit unpacking with checked arithmetic (e.g. in debug builds)
* Added better fuzzer integration
* Update `term`, `rand` dev-dependency
* Note: The `show` example program requires a newer compiler than 1.34.2 on
some targets due to depending on `glium`. This is not considered a breaking
bug.
## 0.15
Beginning of the changelog
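
For orientation, many of the entries above refer to the same basic decoding flow. Below is a minimal sketch of that 0.17 API, mirroring the usage in `benches/decoder.rs` and `examples/show.rs` further down; the file path is a placeholder.

```rust
use std::fs::File;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // `read_info` parses the header chunks and returns a `Reader`.
    let decoder = png::Decoder::new(File::open("tests/pngsuite/basi0g01.png")?);
    let mut reader = decoder.read_info()?;

    // Size the output buffer from the decoder's own calculation, then decode
    // the next (here: the only) frame into it.
    let mut buf = vec![0; reader.output_buffer_size()];
    let info = reader.next_frame(&mut buf)?;
    println!("{}x{}, {:?}", info.width, info.height, info.color_type);
    Ok(())
}
```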

clamav/libclamav_rust/.cargo/vendor/png/Cargo.lock (1713 lines) generated vendored Normal file

File diff suppressed because it is too large

clamav/libclamav_rust/.cargo/vendor/png/Cargo.toml vendored Normal file

@@ -0,0 +1,71 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2018"
name = "png"
version = "0.17.7"
authors = ["The image-rs Developers"]
include = [
"/LICENSE-MIT",
"/LICENSE-APACHE",
"/README.md",
"/CHANGES.md",
"/src/",
"/examples/",
"/benches/",
]
description = "PNG decoding and encoding library in pure Rust"
readme = "README.md"
categories = ["multimedia::images"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/image-rs/image-png.git"
[[bench]]
name = "decoder"
path = "benches/decoder.rs"
harness = false
[dependencies.bitflags]
version = "1.0"
[dependencies.crc32fast]
version = "1.2.0"
[dependencies.flate2]
version = "1.0"
[dependencies.miniz_oxide]
version = "0.6.0"
[dev-dependencies.criterion]
version = "0.3.1"
[dev-dependencies.getopts]
version = "0.2.14"
[dev-dependencies.glium]
version = "0.31"
features = ["glutin"]
default-features = false
[dev-dependencies.glob]
version = "0.3"
[dev-dependencies.rand]
version = "0.8.4"
[dev-dependencies.term]
version = "0.7"
[features]
benchmarks = []
unstable = []

clamav/libclamav_rust/.cargo/vendor/png/LICENSE-APACHE vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

clamav/libclamav_rust/.cargo/vendor/png/LICENSE-MIT vendored Normal file

@@ -0,0 +1,25 @@
Copyright (c) 2015 nwin
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

clamav/libclamav_rust/.cargo/vendor/png/README.md vendored Normal file

@@ -0,0 +1,39 @@
# PNG Decoder/Encoder
[![Build Status](https://github.com/image-rs/image-png/workflows/Rust%20CI/badge.svg)](https://github.com/image-rs/image-png/actions)
[![Documentation](https://docs.rs/png/badge.svg)](https://docs.rs/png)
[![Crates.io](https://img.shields.io/crates/v/png.svg)](https://crates.io/crates/png)
![Lines of Code](https://tokei.rs/b1/github/image-rs/image-png)
[![License](https://img.shields.io/crates/l/png.svg)](https://github.com/image-rs/image-png)
[![fuzzit](https://app.fuzzit.dev/badge?org_id=image-rs)](https://app.fuzzit.dev/orgs/image-rs/dashboard)
PNG decoder/encoder in pure Rust.
It contains all features required to handle the entirety of [the PngSuite by
Willem van Schaik][PngSuite].
[PngSuite]: http://www.schaik.com/pngsuite2011/pngsuite.html
## pngcheck
The `pngcheck` utility is a small demonstration binary that checks and prints
metadata for every `.png` image passed as an argument. You can run it (for
example on the test directories) with
```bash
cargo run --release --example pngcheck ./tests/pngsuite/*
```
## License
Licensed under either of
* Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.

clamav/libclamav_rust/.cargo/vendor/png/benches/README.md vendored Normal file

@@ -0,0 +1,6 @@
# Getting started with benchmarking
To run the benchmarks you need a nightly Rust toolchain. Then launch them with

    rustup run nightly cargo bench --features=benchmarks

clamav/libclamav_rust/.cargo/vendor/png/benches/decoder.rs vendored Normal file

@@ -0,0 +1,40 @@
use std::fs;
use criterion::{criterion_group, criterion_main, Criterion, Throughput};
use png::Decoder;
fn load_all(c: &mut Criterion) {
for file in fs::read_dir("tests/benches/").unwrap() {
if let Ok(entry) = file {
match entry.path().extension() {
Some(st) if st == "png" => {}
_ => continue,
}
let data = fs::read(entry.path()).unwrap();
bench_file(c, data, entry.file_name().into_string().unwrap());
}
}
}
criterion_group!(benches, load_all);
criterion_main!(benches);
fn bench_file(c: &mut Criterion, data: Vec<u8>, name: String) {
let mut group = c.benchmark_group("decode");
group.sample_size(20);
let decoder = Decoder::new(&*data);
let mut reader = decoder.read_info().unwrap();
let mut image = vec![0; reader.output_buffer_size()];
let info = reader.next_frame(&mut image).unwrap();
group.throughput(Throughput::Bytes(info.buffer_size() as u64));
group.bench_with_input(name, &data, |b, data| {
b.iter(|| {
let decoder = Decoder::new(data.as_slice());
let mut decoder = decoder.read_info().unwrap();
decoder.next_frame(&mut image).unwrap();
})
});
}

clamav/libclamav_rust/.cargo/vendor/png/examples/png-generate.rs vendored Normal file

@@ -0,0 +1,56 @@
// For reading and opening files
use png;
use png::text_metadata::{ITXtChunk, ZTXtChunk};
use std::env;
use std::fs::File;
use std::io::BufWriter;
fn main() {
let path = env::args()
.nth(1)
.expect("Expected a filename to output to.");
let file = File::create(path).unwrap();
let ref mut w = BufWriter::new(file);
let mut encoder = png::Encoder::new(w, 2, 1); // Width is 2 pixels and height is 1.
encoder.set_color(png::ColorType::Rgba);
encoder.set_depth(png::BitDepth::Eight);
// Adding text chunks to the header
encoder
.add_text_chunk(
"Testing tEXt".to_string(),
"This is a tEXt chunk that will appear before the IDAT chunks.".to_string(),
)
.unwrap();
encoder
.add_ztxt_chunk(
"Testing zTXt".to_string(),
"This is a zTXt chunk that is compressed in the png file.".to_string(),
)
.unwrap();
encoder
.add_itxt_chunk(
"Testing iTXt".to_string(),
"iTXt chunks support all of UTF8. Example: हिंदी.".to_string(),
)
.unwrap();
let mut writer = encoder.write_header().unwrap();
let data = [255, 0, 0, 255, 0, 0, 0, 255]; // An array containing a RGBA sequence. First pixel is red and second pixel is black.
writer.write_image_data(&data).unwrap(); // Save
// We can add a tEXt/zTXt/iTXt at any point before the encoder is dropped from scope. These chunks will be at the end of the png file.
let tail_ztxt_chunk = ZTXtChunk::new(
"Comment".to_string(),
"A zTXt chunk after the image data.".to_string(),
);
writer.write_text_chunk(&tail_ztxt_chunk).unwrap();
// The fields of the text chunk are public, so they can be mutated before being written to the file.
let mut tail_itxt_chunk = ITXtChunk::new("Author".to_string(), "सायंतन खान".to_string());
tail_itxt_chunk.compressed = true;
tail_itxt_chunk.language_tag = "hi".to_string();
tail_itxt_chunk.translated_keyword = "लेखक".to_string();
writer.write_text_chunk(&tail_itxt_chunk).unwrap();
}
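
The text chunks written above can be read back through the decoder's `Info` struct, much as `examples/pngcheck.rs` below does with a `StreamingDecoder`. A minimal sketch, assuming a file produced by this example (the file name is a placeholder; chunks placed after the image data are only parsed once the decoder has read that far):

```rust
use std::fs::File;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let decoder = png::Decoder::new(File::open("out.png")?);
    let mut reader = decoder.read_info()?;

    // Decode the image data so trailing text chunks can be encountered too.
    let mut buf = vec![0; reader.output_buffer_size()];
    reader.next_frame(&mut buf)?;

    for text_chunk in &reader.info().uncompressed_latin1_text {
        println!("{}: {}", text_chunk.keyword, text_chunk.text);
    }
    for ztxt_chunk in &reader.info().compressed_latin1_text {
        // zTXt text stays compressed until explicitly decompressed.
        let mut chunk = ztxt_chunk.clone();
        chunk.decompress_text()?;
        println!("{:#?}", chunk);
    }
    Ok(())
}
```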

clamav/libclamav_rust/.cargo/vendor/png/examples/pngcheck.rs vendored Normal file

@@ -0,0 +1,391 @@
#![allow(non_upper_case_globals)]
extern crate getopts;
extern crate glob;
extern crate png;
use std::env;
use std::fs::File;
use std::io;
use std::io::prelude::*;
use std::path::Path;
use getopts::{Matches, Options, ParsingStyle};
use term::{color, Attr};
fn parse_args() -> Matches {
let args: Vec<String> = env::args().collect();
let mut opts = Options::new();
opts.optflag("c", "", "colorize output (for ANSI terminals)")
.optflag("q", "", "test quietly (output only errors)")
.optflag(
"t",
"",
"print contents of tEXt/zTXt/iTXt chunks (can be used with -q)",
)
.optflag("v", "", "test verbosely (print most chunk data)")
.parsing_style(ParsingStyle::StopAtFirstFree);
if args.len() > 1 {
match opts.parse(&args[1..]) {
Ok(matches) => return matches,
Err(err) => println!("{}", err),
}
}
println!(
"{}",
opts.usage("Usage: pngcheck [-cpt] [file ...]")
);
std::process::exit(0);
}
#[derive(Clone, Copy)]
struct Config {
quiet: bool,
verbose: bool,
color: bool,
text: bool,
}
fn display_interlaced(i: bool) -> &'static str {
if i {
"interlaced"
} else {
"non-interlaced"
}
}
fn display_image_type(bits: u8, color: png::ColorType) -> String {
use png::ColorType::*;
format!(
"{}-bit {}",
bits,
match color {
Grayscale => "grayscale",
Rgb => "RGB",
Indexed => "palette",
GrayscaleAlpha => "grayscale+alpha",
Rgba => "RGB+alpha",
}
)
}
// channels after expansion of tRNS
fn final_channels(c: png::ColorType, trns: bool) -> u8 {
use png::ColorType::*;
match c {
Grayscale => 1 + if trns { 1 } else { 0 },
Rgb => 3,
Indexed => 3 + if trns { 1 } else { 0 },
GrayscaleAlpha => 2,
Rgba => 4,
}
}
fn check_image<P: AsRef<Path>>(c: Config, fname: P) -> io::Result<()> {
// TODO improve performance by reusing allocations from the decoder
use png::Decoded::*;
let mut t = term::stdout().ok_or(io::Error::new(
io::ErrorKind::Other,
"could not open terminal",
))?;
let data = &mut vec![0; 10 * 1024][..];
let mut reader = io::BufReader::new(File::open(&fname)?);
let fname = fname.as_ref().to_string_lossy();
let n = reader.read(data)?;
let mut buf = &data[..n];
let mut pos = 0;
let mut decoder = png::StreamingDecoder::new();
// Image data
let mut width = 0;
let mut height = 0;
let mut color = png::ColorType::Grayscale;
let mut bits = 0;
let mut trns = false;
let mut interlaced = false;
let mut compressed_size = 0;
let mut n_chunks = 0;
let mut have_idat = false;
macro_rules! c_ratio(
// TODO add palette entries to compressed_size
() => ({
compressed_size as f32/(
height as u64 *
(width as u64 * final_channels(color, trns) as u64 * bits as u64 + 7)>>3
) as f32
});
);
let display_error = |err| -> Result<_, io::Error> {
let mut t = term::stdout().ok_or(io::Error::new(
io::ErrorKind::Other,
"could not open terminal",
))?;
if c.verbose {
if c.color {
print!(": ");
t.fg(color::RED)?;
writeln!(t, "{}", err)?;
t.attr(Attr::Bold)?;
write!(t, "ERRORS DETECTED")?;
t.reset()?;
} else {
println!(": {}", err);
print!("ERRORS DETECTED")
}
println!(" in {}", fname);
} else {
if !c.quiet {
if c.color {
t.fg(color::RED)?;
t.attr(Attr::Bold)?;
write!(t, "ERROR")?;
t.reset()?;
write!(t, ": ")?;
t.fg(color::YELLOW)?;
writeln!(t, "{}", fname)?;
t.reset()?;
} else {
println!("ERROR: {}", fname)
}
}
print!("{}: ", fname);
if c.color {
t.fg(color::RED)?;
writeln!(t, "{}", err)?;
t.reset()?;
} else {
println!("{}", err);
}
}
Ok(())
};
if c.verbose {
print!("File: ");
if c.color {
t.attr(Attr::Bold)?;
write!(t, "{}", fname)?;
t.reset()?;
} else {
print!("{}", fname);
}
print!(" ({}) bytes", data.len())
}
loop {
if buf.len() == 0 {
// circumvent borrow checker
assert!(!data.is_empty());
let n = reader.read(data)?;
// EOF
if n == 0 {
println!("ERROR: premature end of file {}", fname);
break;
}
buf = &data[..n];
}
match decoder.update(buf, &mut Vec::new()) {
Ok((_, ImageEnd)) => {
if !have_idat {
// This isn't beautiful. But it works.
display_error(png::DecodingError::IoError(io::Error::new(
io::ErrorKind::InvalidData,
"IDAT chunk missing",
)))?;
break;
}
if !c.verbose && !c.quiet {
if c.color {
t.fg(color::GREEN)?;
t.attr(Attr::Bold)?;
write!(t, "OK")?;
t.reset()?;
write!(t, ": ")?;
t.fg(color::YELLOW)?;
write!(t, "{}", fname)?;
t.reset()?;
} else {
print!("OK: {}", fname)
}
println!(
" ({}x{}, {}{}, {}, {:.1}%)",
width,
height,
display_image_type(bits, color),
(if trns { "+trns" } else { "" }),
display_interlaced(interlaced),
100.0 * (1.0 - c_ratio!())
)
} else if !c.quiet {
println!("");
if c.color {
t.fg(color::GREEN)?;
t.attr(Attr::Bold)?;
write!(t, "No errors detected ")?;
t.reset()?;
} else {
print!("No errors detected ");
}
println!(
"in {} ({} chunks, {:.1}% compression)",
fname,
n_chunks,
100.0 * (1.0 - c_ratio!()),
)
}
break;
}
Ok((n, res)) => {
buf = &buf[n..];
pos += n;
match res {
Header(w, h, b, c, i) => {
width = w;
height = h;
bits = b as u8;
color = c;
interlaced = i;
}
ChunkBegin(len, type_str) => {
use png::chunk;
n_chunks += 1;
if c.verbose {
let chunk = type_str;
println!("");
print!(" chunk ");
if c.color {
t.fg(color::YELLOW)?;
write!(t, "{:?}", chunk)?;
t.reset()?;
} else {
print!("{:?}", chunk)
}
print!(
" at offset {:#07x}, length {}",
pos - 4, // subtract chunk name length
len
)
}
match type_str {
chunk::IDAT => {
have_idat = true;
compressed_size += len
}
chunk::tRNS => {
trns = true;
}
_ => (),
}
}
ImageData => {
//println!("got {} bytes of image data", data.len())
}
ChunkComplete(_, type_str) if c.verbose => {
use png::chunk::*;
match type_str {
IHDR => {
println!("");
print!(
" {} x {} image, {}{}, {}",
width,
height,
display_image_type(bits, color),
(if trns { "+trns" } else { "" }),
display_interlaced(interlaced),
);
}
_ => (),
}
}
AnimationControl(actl) => {
println!("");
print!(" {} frames, {} plays", actl.num_frames, actl.num_plays,);
}
FrameControl(fctl) => {
println!("");
println!(
" sequence #{}, {} x {} pixels @ ({}, {})",
fctl.sequence_number,
fctl.width,
fctl.height,
fctl.x_offset,
fctl.y_offset,
/*fctl.delay_num,
fctl.delay_den,
fctl.dispose_op,
fctl.blend_op,*/
);
print!(
" {}/{} s delay, dispose: {}, blend: {}",
fctl.delay_num,
if fctl.delay_den == 0 {
100
} else {
fctl.delay_den
},
fctl.dispose_op,
fctl.blend_op,
);
}
_ => (),
}
//println!("{} {:?}", n, res)
}
Err(err) => {
let _ = display_error(err);
break;
}
}
}
if c.text {
println!("Parsed tEXt chunks:");
for text_chunk in &decoder.info().unwrap().uncompressed_latin1_text {
println!("{:#?}", text_chunk);
}
println!("Parsed zTXt chunks:");
for text_chunk in &decoder.info().unwrap().compressed_latin1_text {
let mut cloned_text_chunk = text_chunk.clone();
cloned_text_chunk.decompress_text()?;
println!("{:#?}", cloned_text_chunk);
}
println!("Parsed iTXt chunks:");
for text_chunk in &decoder.info().unwrap().utf8_text {
let mut cloned_text_chunk = text_chunk.clone();
cloned_text_chunk.decompress_text()?;
println!("{:#?}", cloned_text_chunk);
}
}
Ok(())
}
fn main() {
let m = parse_args();
let config = Config {
quiet: m.opt_present("q"),
verbose: m.opt_present("v"),
color: m.opt_present("c"),
text: m.opt_present("t"),
};
for file in m.free {
let result = if file.contains("*") {
glob::glob(&file)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err))
.and_then(|mut glob| {
glob.try_for_each(|entry| {
entry
.map_err(|err| io::Error::new(io::ErrorKind::Other, err))
.and_then(|file| check_image(config, file))
})
})
} else {
check_image(config, &file)
};
result.unwrap_or_else(|err| {
println!("{}: {}", file, err);
std::process::exit(1)
});
}
}

clamav/libclamav_rust/.cargo/vendor/png/examples/show.rs vendored Normal file

@@ -0,0 +1,201 @@
use glium::{
backend::glutin::Display,
glutin::{
self, dpi,
event::{ElementState, Event, KeyboardInput, VirtualKeyCode, WindowEvent},
event_loop::ControlFlow,
},
texture::{ClientFormat, RawImage2d},
BlitTarget, Rect, Surface,
};
use std::{borrow::Cow, env, fs::File, io, path};
/// Load the image using `png`
fn load_image(path: &path::PathBuf) -> io::Result<RawImage2d<'static, u8>> {
use png::ColorType::*;
let mut decoder = png::Decoder::new(File::open(path)?);
decoder.set_transformations(png::Transformations::normalize_to_color8());
let mut reader = decoder.read_info()?;
let mut img_data = vec![0; reader.output_buffer_size()];
let info = reader.next_frame(&mut img_data)?;
let (data, format) = match info.color_type {
Rgb => (img_data, ClientFormat::U8U8U8),
Rgba => (img_data, ClientFormat::U8U8U8U8),
Grayscale => (
{
let mut vec = Vec::with_capacity(img_data.len() * 3);
for g in img_data {
vec.extend([g, g, g].iter().cloned())
}
vec
},
ClientFormat::U8U8U8,
),
GrayscaleAlpha => (
{
let mut vec = Vec::with_capacity(img_data.len() * 3);
for ga in img_data.chunks(2) {
let g = ga[0];
let a = ga[1];
vec.extend([g, g, g, a].iter().cloned())
}
vec
},
ClientFormat::U8U8U8U8,
),
_ => unreachable!("uncovered color type"),
};
Ok(RawImage2d {
data: Cow::Owned(data),
width: info.width,
height: info.height,
format: format,
})
}
fn main_loop(files: Vec<path::PathBuf>) -> io::Result<()> {
let mut files = files.into_iter();
let image = load_image(&files.next().unwrap())?;
let event_loop = glutin::event_loop::EventLoop::new();
let window_builder = glutin::window::WindowBuilder::new().with_title("Show Example");
let context_builder = glutin::ContextBuilder::new().with_vsync(true);
let display = glium::Display::new(window_builder, context_builder, &event_loop)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err))?;
resize_window(&display, &image);
let mut texture = glium::Texture2d::new(&display, image).unwrap();
draw(&display, &texture);
event_loop.run(move |event, _, control_flow| match event {
Event::WindowEvent {
event: WindowEvent::CloseRequested,
..
} => exit(control_flow),
Event::WindowEvent {
event:
WindowEvent::KeyboardInput {
input:
KeyboardInput {
state: ElementState::Pressed,
virtual_keycode: code,
..
},
..
},
..
} => match code {
Some(VirtualKeyCode::Escape) => exit(control_flow),
Some(VirtualKeyCode::Right) => match &files.next() {
Some(path) => {
match load_image(path) {
Ok(image) => {
resize_window(&display, &image);
texture = glium::Texture2d::new(&display, image).unwrap();
draw(&display, &texture);
}
Err(err) => {
println!("Error: {}", err);
exit(control_flow);
}
};
}
None => exit(control_flow),
},
_ => {}
},
Event::RedrawRequested(_) => draw(&display, &texture),
_ => {}
});
}
fn draw(display: &glium::Display, texture: &glium::Texture2d) {
let frame = display.draw();
fill_v_flipped(
&texture.as_surface(),
&frame,
glium::uniforms::MagnifySamplerFilter::Linear,
);
frame.finish().unwrap();
}
fn exit(control_flow: &mut ControlFlow) {
*control_flow = ControlFlow::Exit;
}
fn fill_v_flipped<S1, S2>(src: &S1, target: &S2, filter: glium::uniforms::MagnifySamplerFilter)
where
S1: Surface,
S2: Surface,
{
let src_dim = src.get_dimensions();
let src_rect = Rect {
left: 0,
bottom: 0,
width: src_dim.0 as u32,
height: src_dim.1 as u32,
};
let target_dim = target.get_dimensions();
let target_rect = BlitTarget {
left: 0,
bottom: target_dim.1,
width: target_dim.0 as i32,
height: -(target_dim.1 as i32),
};
src.blit_color(&src_rect, target, &target_rect, filter);
}
fn resize_window(display: &Display, image: &RawImage2d<'static, u8>) {
let mut width = image.width;
let mut height = image.height;
// Scale tiny images up so the window remains usable.
if width < 50 && height < 50 {
width *= 10;
height *= 10;
}
display
.gl_window()
.window()
.set_inner_size(dpi::LogicalSize::new(f64::from(width), f64::from(height)));
}
fn main() {
let args: Vec<String> = env::args().collect();
if args.len() < 2 {
println!("Usage: show files [...]");
} else {
let mut files = vec![];
for file in args.iter().skip(1) {
match if file.contains("*") {
(|| -> io::Result<_> {
for entry in glob::glob(&file)
.map_err(|err| io::Error::new(io::ErrorKind::Other, err.msg))?
{
files.push(
entry
.map_err(|_| io::Error::new(io::ErrorKind::Other, "glob error"))?,
)
}
Ok(())
})()
} else {
files.push(path::PathBuf::from(file));
Ok(())
} {
Ok(_) => (),
Err(err) => {
println!("{}: {}", file, err);
break;
}
}
}
// "tests/pngsuite/pngsuite.png"
match main_loop(files) {
Ok(_) => (),
Err(err) => println!("Error: {}", err),
}
}
}

clamav/libclamav_rust/.cargo/vendor/png/src/chunk.rs vendored Normal file

@@ -0,0 +1,98 @@
//! Chunk types and functions
#![allow(dead_code)]
#![allow(non_upper_case_globals)]
use core::fmt;
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct ChunkType(pub [u8; 4]);
// -- Critical chunks --
/// Image header
pub const IHDR: ChunkType = ChunkType(*b"IHDR");
/// Palette
pub const PLTE: ChunkType = ChunkType(*b"PLTE");
/// Image data
pub const IDAT: ChunkType = ChunkType(*b"IDAT");
/// Image trailer
pub const IEND: ChunkType = ChunkType(*b"IEND");
// -- Ancillary chunks --
/// Transparency
pub const tRNS: ChunkType = ChunkType(*b"tRNS");
/// Background colour
pub const bKGD: ChunkType = ChunkType(*b"bKGD");
/// Image last-modification time
pub const tIME: ChunkType = ChunkType(*b"tIME");
/// Physical pixel dimensions
pub const pHYs: ChunkType = ChunkType(*b"pHYs");
/// Source system's pixel chromaticities
pub const cHRM: ChunkType = ChunkType(*b"cHRM");
/// Source system's gamma value
pub const gAMA: ChunkType = ChunkType(*b"gAMA");
/// sRGB color space chunk
pub const sRGB: ChunkType = ChunkType(*b"sRGB");
/// ICC profile chunk
pub const iCCP: ChunkType = ChunkType(*b"iCCP");
/// Latin-1 uncompressed textual data
pub const tEXt: ChunkType = ChunkType(*b"tEXt");
/// Latin-1 compressed textual data
pub const zTXt: ChunkType = ChunkType(*b"zTXt");
/// UTF-8 textual data
pub const iTXt: ChunkType = ChunkType(*b"iTXt");
// -- Extension chunks --
/// Animation control
pub const acTL: ChunkType = ChunkType(*b"acTL");
/// Frame control
pub const fcTL: ChunkType = ChunkType(*b"fcTL");
/// Frame data
pub const fdAT: ChunkType = ChunkType(*b"fdAT");
// -- Chunk type determination --
/// Returns true if the chunk is critical.
pub fn is_critical(ChunkType(type_): ChunkType) -> bool {
type_[0] & 32 == 0
}
/// Returns true if the chunk is private.
pub fn is_private(ChunkType(type_): ChunkType) -> bool {
type_[1] & 32 != 0
}
/// Checks whether the reserved bit of the chunk name is set.
/// If it is set the chunk name is invalid.
pub fn reserved_set(ChunkType(type_): ChunkType) -> bool {
type_[2] & 32 != 0
}
/// Returns true if the chunk is safe to copy if unknown.
pub fn safe_to_copy(ChunkType(type_): ChunkType) -> bool {
type_[3] & 32 != 0
}
impl fmt::Debug for ChunkType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
struct DebugType([u8; 4]);
impl fmt::Debug for DebugType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
for &c in &self.0[..] {
write!(f, "{}", char::from(c).escape_debug())?;
}
Ok(())
}
}
f.debug_struct("ChunkType")
.field("type", &DebugType(self.0))
.field("critical", &is_critical(*self))
.field("private", &is_private(*self))
.field("reserved", &reserved_set(*self))
.field("safecopy", &safe_to_copy(*self))
.finish()
}
}
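
The property functions above test bit 5, the ASCII case bit, of each byte in the chunk name, so the case of each letter encodes the chunk's properties. A small sketch using the crate's public `png::chunk` module:

```rust
use png::chunk;

fn main() {
    assert!(chunk::is_critical(chunk::IHDR)); // uppercase 'I': critical
    assert!(!chunk::is_critical(chunk::tRNS)); // lowercase 't': ancillary
    assert!(!chunk::is_private(chunk::tRNS)); // uppercase 'R': public
    assert!(chunk::safe_to_copy(chunk::tEXt)); // lowercase trailing 't': safe to copy
    println!("{:?}", chunk::acTL);
}
```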

clamav/libclamav_rust/.cargo/vendor/png/src/common.rs vendored Normal file

@@ -0,0 +1,797 @@
//! Common types shared between the encoder and decoder
use crate::text_metadata::{EncodableTextChunk, ITXtChunk, TEXtChunk, ZTXtChunk};
use crate::{chunk, encoder};
use io::Write;
use std::{borrow::Cow, convert::TryFrom, fmt, io};
/// Describes how a pixel is encoded.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum ColorType {
/// 1 grayscale sample.
Grayscale = 0,
/// 1 red sample, 1 green sample, 1 blue sample.
Rgb = 2,
/// 1 sample for the palette index.
Indexed = 3,
/// 1 grayscale sample, then 1 alpha sample.
GrayscaleAlpha = 4,
/// 1 red sample, 1 green sample, 1 blue sample, and finally, 1 alpha sample.
Rgba = 6,
}
impl ColorType {
/// Returns the number of samples used per pixel encoded in this way.
pub fn samples(self) -> usize {
self.samples_u8().into()
}
pub(crate) fn samples_u8(self) -> u8 {
use self::ColorType::*;
match self {
Grayscale | Indexed => 1,
Rgb => 3,
GrayscaleAlpha => 2,
Rgba => 4,
}
}
/// u8 -> Self. Temporary solution until Rust provides a canonical one.
pub fn from_u8(n: u8) -> Option<ColorType> {
match n {
0 => Some(ColorType::Grayscale),
2 => Some(ColorType::Rgb),
3 => Some(ColorType::Indexed),
4 => Some(ColorType::GrayscaleAlpha),
6 => Some(ColorType::Rgba),
_ => None,
}
}
pub(crate) fn checked_raw_row_length(self, depth: BitDepth, width: u32) -> Option<usize> {
// No overflow can occur in 64 bits, we multiply 32-bit with 5 more bits.
let bits = u64::from(width) * u64::from(self.samples_u8()) * u64::from(depth.into_u8());
TryFrom::try_from(1 + (bits + 7) / 8).ok()
}
pub(crate) fn raw_row_length_from_width(self, depth: BitDepth, width: u32) -> usize {
let samples = width as usize * self.samples();
1 + match depth {
BitDepth::Sixteen => samples * 2,
BitDepth::Eight => samples,
subbyte => {
let samples_per_byte = 8 / subbyte as usize;
let whole = samples / samples_per_byte;
let fract = usize::from(samples % samples_per_byte > 0);
whole + fract
}
}
}
pub(crate) fn is_combination_invalid(self, bit_depth: BitDepth) -> bool {
// Section 11.2.2 of the PNG standard disallows several combinations
// of bit depth and color type
((bit_depth == BitDepth::One || bit_depth == BitDepth::Two || bit_depth == BitDepth::Four)
&& (self == ColorType::Rgb
|| self == ColorType::GrayscaleAlpha
|| self == ColorType::Rgba))
|| (bit_depth == BitDepth::Sixteen && self == ColorType::Indexed)
}
}
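// Worked example of the row-length arithmetic above: an 8-bit RGB row of
// width 5 takes 1 (filter byte) + 5 * 3 (samples) = 16 bytes, while a 1-bit
// grayscale row of width 10 takes 1 + ceil(10 / 8) = 1 + 2 = 3 bytes.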
/// Bit depth of the PNG file.
/// Specifies the number of bits per sample.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum BitDepth {
One = 1,
Two = 2,
Four = 4,
Eight = 8,
Sixteen = 16,
}
/// Internal count of bytes per pixel.
/// This is used for filtering which never uses sub-byte units. This essentially reduces the number
/// of possible byte chunk lengths to a very small set of values appropriate to be defined as an
/// enum.
#[derive(Debug, Clone, Copy)]
#[repr(u8)]
pub(crate) enum BytesPerPixel {
One = 1,
Two = 2,
Three = 3,
Four = 4,
Six = 6,
Eight = 8,
}
impl BitDepth {
/// u8 -> Self. Temporary solution until Rust provides a canonical one.
pub fn from_u8(n: u8) -> Option<BitDepth> {
match n {
1 => Some(BitDepth::One),
2 => Some(BitDepth::Two),
4 => Some(BitDepth::Four),
8 => Some(BitDepth::Eight),
16 => Some(BitDepth::Sixteen),
_ => None,
}
}
pub(crate) fn into_u8(self) -> u8 {
self as u8
}
}
/// Pixel dimensions information
#[derive(Clone, Copy, Debug)]
pub struct PixelDimensions {
/// Pixels per unit, X axis
pub xppu: u32,
/// Pixels per unit, Y axis
pub yppu: u32,
/// Either *Meter* or *Unspecified*
pub unit: Unit,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
/// Physical unit of the pixel dimensions
pub enum Unit {
Unspecified = 0,
Meter = 1,
}
impl Unit {
/// u8 -> Self. Temporary solution until Rust provides a canonical one.
pub fn from_u8(n: u8) -> Option<Unit> {
match n {
0 => Some(Unit::Unspecified),
1 => Some(Unit::Meter),
_ => None,
}
}
}
/// How to reset buffer of an animated png (APNG) at the end of a frame.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum DisposeOp {
/// Leave the buffer unchanged.
None = 0,
/// Clear buffer with the background color.
Background = 1,
/// Reset the buffer to the state before the current frame.
Previous = 2,
}
impl DisposeOp {
/// u8 -> Self. Using enum_primitive or transmute is probably the right thing but this will do for now.
pub fn from_u8(n: u8) -> Option<DisposeOp> {
match n {
0 => Some(DisposeOp::None),
1 => Some(DisposeOp::Background),
2 => Some(DisposeOp::Previous),
_ => None,
}
}
}
impl fmt::Display for DisposeOp {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let name = match *self {
DisposeOp::None => "DISPOSE_OP_NONE",
DisposeOp::Background => "DISPOSE_OP_BACKGROUND",
DisposeOp::Previous => "DISPOSE_OP_PREVIOUS",
};
write!(f, "{}", name)
}
}
/// How pixels are written into the buffer.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum BlendOp {
/// Pixels overwrite the value at their position.
Source = 0,
/// The new pixels are blended into the current state based on alpha.
Over = 1,
}
impl BlendOp {
/// u8 -> Self. Using enum_primitive or transmute is probably the right thing but this will do for now.
pub fn from_u8(n: u8) -> Option<BlendOp> {
match n {
0 => Some(BlendOp::Source),
1 => Some(BlendOp::Over),
_ => None,
}
}
}
impl fmt::Display for BlendOp {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let name = match *self {
BlendOp::Source => "BLEND_OP_SOURCE",
BlendOp::Over => "BLEND_OP_OVER",
};
write!(f, "{}", name)
}
}
/// Frame control information
#[derive(Clone, Copy, Debug)]
pub struct FrameControl {
/// Sequence number of the animation chunk, starting from 0
pub sequence_number: u32,
/// Width of the following frame
pub width: u32,
/// Height of the following frame
pub height: u32,
/// X position at which to render the following frame
pub x_offset: u32,
/// Y position at which to render the following frame
pub y_offset: u32,
/// Frame delay fraction numerator
pub delay_num: u16,
/// Frame delay fraction denominator
pub delay_den: u16,
/// Type of frame area disposal to be done after rendering this frame
pub dispose_op: DisposeOp,
/// Type of frame area rendering for this frame
pub blend_op: BlendOp,
}
impl Default for FrameControl {
fn default() -> FrameControl {
FrameControl {
sequence_number: 0,
width: 0,
height: 0,
x_offset: 0,
y_offset: 0,
delay_num: 1,
delay_den: 30,
dispose_op: DisposeOp::None,
blend_op: BlendOp::Source,
}
}
}
impl FrameControl {
pub fn set_seq_num(&mut self, s: u32) {
self.sequence_number = s;
}
pub fn inc_seq_num(&mut self, i: u32) {
self.sequence_number += i;
}
pub fn encode<W: Write>(self, w: &mut W) -> encoder::Result<()> {
let mut data = [0u8; 26];
data[..4].copy_from_slice(&self.sequence_number.to_be_bytes());
data[4..8].copy_from_slice(&self.width.to_be_bytes());
data[8..12].copy_from_slice(&self.height.to_be_bytes());
data[12..16].copy_from_slice(&self.x_offset.to_be_bytes());
data[16..20].copy_from_slice(&self.y_offset.to_be_bytes());
data[20..22].copy_from_slice(&self.delay_num.to_be_bytes());
data[22..24].copy_from_slice(&self.delay_den.to_be_bytes());
data[24] = self.dispose_op as u8;
data[25] = self.blend_op as u8;
encoder::write_chunk(w, chunk::fcTL, &data)
}
}
/// Animation control information
#[derive(Clone, Copy, Debug)]
pub struct AnimationControl {
/// Number of frames
pub num_frames: u32,
/// Number of times to loop this APNG. 0 indicates infinite looping.
pub num_plays: u32,
}
impl AnimationControl {
pub fn encode<W: Write>(self, w: &mut W) -> encoder::Result<()> {
let mut data = [0; 8];
data[..4].copy_from_slice(&self.num_frames.to_be_bytes());
data[4..].copy_from_slice(&self.num_plays.to_be_bytes());
encoder::write_chunk(w, chunk::acTL, &data)
}
}
/// The type and strength of applied compression.
#[derive(Debug, Clone, Copy)]
pub enum Compression {
/// Default level
Default,
/// Fast minimal compression
Fast,
/// Higher compression level
///
/// Best in this context isn't actually the highest possible level
/// the encoder can do, but is meant to emulate the `Best` setting in the `Flate2`
/// library.
Best,
#[deprecated(
since = "0.17.6",
note = "use one of the other compression levels instead, such as 'fast'"
)]
Huffman,
#[deprecated(
since = "0.17.6",
note = "use one of the other compression levels instead, such as 'fast'"
)]
Rle,
}
impl Default for Compression {
fn default() -> Self {
Self::Default
}
}
/// An unsigned integer scaled version of a floating point value,
/// equivalent to an integer quotient with fixed denominator (100_000).
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct ScaledFloat(u32);
impl ScaledFloat {
const SCALING: f32 = 100_000.0;
/// Gets whether the value is within the clamped range of this type.
pub fn in_range(value: f32) -> bool {
value >= 0.0 && (value * Self::SCALING).floor() <= std::u32::MAX as f32
}
/// Gets whether the value can be exactly converted in round-trip.
#[allow(clippy::float_cmp)] // Stupid tool, the exact float compare is _the entire point_.
pub fn exact(value: f32) -> bool {
let there = Self::forward(value);
let back = Self::reverse(there);
value == back
}
fn forward(value: f32) -> u32 {
(value.max(0.0) * Self::SCALING).floor() as u32
}
fn reverse(encoded: u32) -> f32 {
encoded as f32 / Self::SCALING
}
/// Slightly inaccurate scaling and quantization.
/// Clamps the value into the representable range if it is negative or too large.
pub fn new(value: f32) -> Self {
Self(Self::forward(value))
}
/// Fully accurate construction from a value scaled as per specification.
pub fn from_scaled(val: u32) -> Self {
Self(val)
}
/// Get the accurate encoded value.
pub fn into_scaled(self) -> u32 {
self.0
}
/// Get the unscaled value as a floating point.
pub fn into_value(self) -> f32 {
Self::reverse(self.0) as f32
}
pub(crate) fn encode_gama<W: Write>(self, w: &mut W) -> encoder::Result<()> {
encoder::write_chunk(w, chunk::gAMA, &self.into_scaled().to_be_bytes())
}
}
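// Worked example of the fixed-point scaling above: a source gamma of 1/2.2
// (~0.4545454) is encoded by forward() as floor(0.4545454 * 100_000) = 45454,
// and reverse(45454) yields 0.45454, accurate to the 1e-5 step the PNG
// specification uses for gAMA and cHRM values.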
/// Chromaticities of the color space primaries
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct SourceChromaticities {
pub white: (ScaledFloat, ScaledFloat),
pub red: (ScaledFloat, ScaledFloat),
pub green: (ScaledFloat, ScaledFloat),
pub blue: (ScaledFloat, ScaledFloat),
}
impl SourceChromaticities {
pub fn new(white: (f32, f32), red: (f32, f32), green: (f32, f32), blue: (f32, f32)) -> Self {
SourceChromaticities {
white: (ScaledFloat::new(white.0), ScaledFloat::new(white.1)),
red: (ScaledFloat::new(red.0), ScaledFloat::new(red.1)),
green: (ScaledFloat::new(green.0), ScaledFloat::new(green.1)),
blue: (ScaledFloat::new(blue.0), ScaledFloat::new(blue.1)),
}
}
#[rustfmt::skip]
pub fn to_be_bytes(self) -> [u8; 32] {
let white_x = self.white.0.into_scaled().to_be_bytes();
let white_y = self.white.1.into_scaled().to_be_bytes();
let red_x = self.red.0.into_scaled().to_be_bytes();
let red_y = self.red.1.into_scaled().to_be_bytes();
let green_x = self.green.0.into_scaled().to_be_bytes();
let green_y = self.green.1.into_scaled().to_be_bytes();
let blue_x = self.blue.0.into_scaled().to_be_bytes();
let blue_y = self.blue.1.into_scaled().to_be_bytes();
[
white_x[0], white_x[1], white_x[2], white_x[3],
white_y[0], white_y[1], white_y[2], white_y[3],
red_x[0], red_x[1], red_x[2], red_x[3],
red_y[0], red_y[1], red_y[2], red_y[3],
green_x[0], green_x[1], green_x[2], green_x[3],
green_y[0], green_y[1], green_y[2], green_y[3],
blue_x[0], blue_x[1], blue_x[2], blue_x[3],
blue_y[0], blue_y[1], blue_y[2], blue_y[3],
]
}
pub fn encode<W: Write>(self, w: &mut W) -> encoder::Result<()> {
encoder::write_chunk(w, chunk::cHRM, &self.to_be_bytes())
}
}
/// The rendering intent for an sRGB image.
///
/// Presence of this data also indicates that the image conforms to the sRGB color space.
#[repr(u8)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum SrgbRenderingIntent {
/// For images preferring good adaptation to the output device gamut at the expense of colorimetric accuracy, such as photographs.
Perceptual = 0,
/// For images requiring colour appearance matching (relative to the output device white point), such as logos.
RelativeColorimetric = 1,
/// For images preferring preservation of saturation at the expense of hue and lightness, such as charts and graphs.
Saturation = 2,
/// For images requiring preservation of absolute colorimetry, such as previews of images destined for a different output device (proofs).
AbsoluteColorimetric = 3,
}
impl SrgbRenderingIntent {
pub(crate) fn into_raw(self) -> u8 {
self as u8
}
pub(crate) fn from_raw(raw: u8) -> Option<Self> {
match raw {
0 => Some(SrgbRenderingIntent::Perceptual),
1 => Some(SrgbRenderingIntent::RelativeColorimetric),
2 => Some(SrgbRenderingIntent::Saturation),
3 => Some(SrgbRenderingIntent::AbsoluteColorimetric),
_ => None,
}
}
pub fn encode<W: Write>(self, w: &mut W) -> encoder::Result<()> {
encoder::write_chunk(w, chunk::sRGB, &[self.into_raw()])
}
}
/// PNG info struct
#[derive(Clone, Debug)]
#[non_exhaustive]
pub struct Info<'a> {
pub width: u32,
pub height: u32,
pub bit_depth: BitDepth,
/// How colors are stored in the image.
pub color_type: ColorType,
pub interlaced: bool,
/// The image's `tRNS` chunk, if present; contains the alpha channel of the image's palette, 1 byte per entry.
pub trns: Option<Cow<'a, [u8]>>,
pub pixel_dims: Option<PixelDimensions>,
/// The image's `PLTE` chunk, if present; contains the RGB channels (in that order) of the image's palette, 3 bytes per entry (1 per channel).
pub palette: Option<Cow<'a, [u8]>>,
/// The contents of the image's gAMA chunk, if present.
/// Prefer `source_gamma` to also get the derived replacement gamma from sRGB chunks.
pub gama_chunk: Option<ScaledFloat>,
/// The contents of the image's `cHRM` chunk, if present.
/// Prefer `source_chromaticities` to also get the derived replacements from sRGB chunks.
pub chrm_chunk: Option<SourceChromaticities>,
pub frame_control: Option<FrameControl>,
pub animation_control: Option<AnimationControl>,
pub compression: Compression,
/// Gamma of the source system.
/// Set by both `gAMA` as well as to a replacement by `sRGB` chunk.
pub source_gamma: Option<ScaledFloat>,
/// Chromaticities of the source system.
/// Set by both `cHRM` as well as to a replacement by `sRGB` chunk.
pub source_chromaticities: Option<SourceChromaticities>,
/// The rendering intent of an SRGB image.
///
/// Presence of this value also indicates that the image conforms to the SRGB color space.
pub srgb: Option<SrgbRenderingIntent>,
/// The ICC profile for the image.
pub icc_profile: Option<Cow<'a, [u8]>>,
/// tEXt field
pub uncompressed_latin1_text: Vec<TEXtChunk>,
/// zTXt field
pub compressed_latin1_text: Vec<ZTXtChunk>,
/// iTXt field
pub utf8_text: Vec<ITXtChunk>,
}
impl Default for Info<'_> {
fn default() -> Info<'static> {
Info {
width: 0,
height: 0,
bit_depth: BitDepth::Eight,
color_type: ColorType::Grayscale,
interlaced: false,
palette: None,
trns: None,
gama_chunk: None,
chrm_chunk: None,
pixel_dims: None,
frame_control: None,
animation_control: None,
// Default to `deflate::Compression::Fast` and `filter::FilterType::Sub`
// to maintain backward compatible output.
compression: Compression::Fast,
source_gamma: None,
source_chromaticities: None,
srgb: None,
icc_profile: None,
uncompressed_latin1_text: Vec::new(),
compressed_latin1_text: Vec::new(),
utf8_text: Vec::new(),
}
}
}
impl Info<'_> {
/// A utility constructor for a default info with width and height.
pub fn with_size(width: u32, height: u32) -> Self {
Info {
width,
height,
..Default::default()
}
}
/// Size of the image, width then height.
pub fn size(&self) -> (u32, u32) {
(self.width, self.height)
}
/// Returns true if the image is an APNG image.
pub fn is_animated(&self) -> bool {
self.frame_control.is_some() && self.animation_control.is_some()
}
/// Returns the animation control information of the image.
pub fn animation_control(&self) -> Option<&AnimationControl> {
self.animation_control.as_ref()
}
/// Returns the frame control information of the current frame
pub fn frame_control(&self) -> Option<&FrameControl> {
self.frame_control.as_ref()
}
/// Returns the number of bits per pixel.
pub fn bits_per_pixel(&self) -> usize {
self.color_type.samples() * self.bit_depth as usize
}
/// Returns the number of bytes per pixel.
pub fn bytes_per_pixel(&self) -> usize {
// If adjusting this for expansion or other transformation passes, remember to keep the old
// implementation for bpp_in_prediction, which is internal to the png specification.
self.color_type.samples() * ((self.bit_depth as usize + 7) >> 3)
}
/// Return the number of bytes for this pixel used in prediction.
///
/// Some filters use prediction over the raw bytes of a scanline. Where a previous pixel is
/// required for such forms, the specification instead references previous bytes. That is, for
/// a gray pixel of bit depth 2, the pixel used in prediction is actually 4 pixels prior. This
/// has the consequence that the number of possible values is rather small. To make this fact
/// more obvious in the type system and the optimizer we use an explicit enum here.
pub(crate) fn bpp_in_prediction(&self) -> BytesPerPixel {
match self.bytes_per_pixel() {
1 => BytesPerPixel::One,
2 => BytesPerPixel::Two,
3 => BytesPerPixel::Three,
4 => BytesPerPixel::Four,
6 => BytesPerPixel::Six, // Only rgb×16bit
8 => BytesPerPixel::Eight, // Only rgba×16bit
_ => unreachable!("Not a possible byte rounded pixel width"),
}
}
/// Returns the number of bytes needed for one deinterlaced image.
pub fn raw_bytes(&self) -> usize {
self.height as usize * self.raw_row_length()
}
/// Returns the number of bytes needed for one deinterlaced row.
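///
/// A sketch, assuming each raw row is counted together with its leading filter-type byte:
///
/// ```
/// let info = png::Info::with_size(10, 10); // 8-bit grayscale by default
/// assert_eq!(info.raw_row_length(), 1 + 10);
/// ```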
pub fn raw_row_length(&self) -> usize {
self.raw_row_length_from_width(self.width)
}
pub(crate) fn checked_raw_row_length(&self) -> Option<usize> {
self.color_type
.checked_raw_row_length(self.bit_depth, self.width)
}
/// Returns the number of bytes needed for one deinterlaced row of width `width`.
pub fn raw_row_length_from_width(&self, width: u32) -> usize {
self.color_type
.raw_row_length_from_width(self.bit_depth, width)
}
/// Encode this header to the writer.
///
/// Note that this does _not_ include the PNG signature: it starts with the IHDR chunk and then
/// includes other chunks that were added to the header.
pub fn encode<W: Write>(&self, mut w: W) -> encoder::Result<()> {
// Encode the IHDR chunk
let mut data = [0; 13];
data[..4].copy_from_slice(&self.width.to_be_bytes());
data[4..8].copy_from_slice(&self.height.to_be_bytes());
data[8] = self.bit_depth as u8;
data[9] = self.color_type as u8;
data[12] = self.interlaced as u8;
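// data[10] (compression method) and data[11] (filter method) are left at 0,
// the only values defined by the PNG specification.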
encoder::write_chunk(&mut w, chunk::IHDR, &data)?;
if let Some(p) = &self.palette {
encoder::write_chunk(&mut w, chunk::PLTE, p)?;
};
if let Some(t) = &self.trns {
encoder::write_chunk(&mut w, chunk::tRNS, t)?;
}
// If specified, the sRGB information overrides the source gamma and chromaticities.
if let Some(srgb) = &self.srgb {
let gamma = crate::srgb::substitute_gamma();
let chromaticities = crate::srgb::substitute_chromaticities();
srgb.encode(&mut w)?;
gamma.encode_gama(&mut w)?;
chromaticities.encode(&mut w)?;
} else {
if let Some(gma) = self.source_gamma {
gma.encode_gama(&mut w)?
}
if let Some(chrms) = self.source_chromaticities {
chrms.encode(&mut w)?;
}
}
if let Some(actl) = self.animation_control {
actl.encode(&mut w)?;
}
for text_chunk in &self.uncompressed_latin1_text {
text_chunk.encode(&mut w)?;
}
for text_chunk in &self.compressed_latin1_text {
text_chunk.encode(&mut w)?;
}
for text_chunk in &self.utf8_text {
text_chunk.encode(&mut w)?;
}
Ok(())
}
}
impl BytesPerPixel {
pub(crate) fn into_usize(self) -> usize {
self as usize
}
}
bitflags! {
/// Output transformations
///
/// Many flags from libpng are not yet supported. A PR discussing/adding them would be nice.
///
#[doc = "
```c
/// Discard the alpha channel
const STRIP_ALPHA = 0x0002; // read only
/// Expand 1, 2 and 4-bit samples to bytes
const PACKING = 0x0004; // read and write
/// Change order of packed pixels to LSB first
const PACKSWAP = 0x0008; // read and write
/// Invert monochrome images
const INVERT_MONO = 0x0020; // read and write
/// Normalize pixels to the sBIT depth
const SHIFT = 0x0040; // read and write
/// Flip RGB to BGR; RGBA to BGRA
const BGR = 0x0080; // read and write
/// Flip RGBA to ARGB or GA to AG
const SWAP_ALPHA = 0x0100; // read and write
/// Byte-swap 16-bit samples
const SWAP_ENDIAN = 0x0200; // read and write
/// Change alpha from opacity to transparency
const INVERT_ALPHA = 0x0400; // read and write
const STRIP_FILLER = 0x0800; // write only
const STRIP_FILLER_BEFORE = 0x0800; // write only
const STRIP_FILLER_AFTER = 0x1000; // write only
const GRAY_TO_RGB = 0x2000; // read only
const EXPAND_16 = 0x4000; // read only
/// Similar to STRIP_16 but, in libpng, gamma-aware? Not entirely sure: the
/// documentation says it is more accurate but doesn't say precisely how.
const SCALE_16 = 0x8000; // read only
```
"]
pub struct Transformations: u32 {
/// No transformation
const IDENTITY = 0x0000; // read and write
/// Strip 16-bit samples to 8 bits
const STRIP_16 = 0x0001; // read only
/// Expand paletted images to RGB; expand grayscale images of
/// less than 8-bit depth to 8-bit depth; and expand tRNS chunks
/// to alpha channels.
const EXPAND = 0x0010; // read only
}
}
impl Transformations {
/// Transform every input to 8bit grayscale or color.
///
/// This sets `EXPAND` and `STRIP_16` which is similar to the default transformation used by
/// this library prior to `0.17`.
pub fn normalize_to_color8() -> Transformations {
Transformations::EXPAND | Transformations::STRIP_16
}
}
/// Instantiate the default transformations, the identity transform.
impl Default for Transformations {
fn default() -> Transformations {
Transformations::IDENTITY
}
}
#[derive(Debug)]
pub struct ParameterError {
inner: ParameterErrorKind,
}
#[derive(Debug)]
pub(crate) enum ParameterErrorKind {
/// A provided buffer must have the exact size to hold the image data. Where the buffer can
/// be allocated by the caller, they must ensure that it has a minimum size as hinted previously.
/// Even though the size is calculated from image data, this still counts as a parameter error
/// because the caller must react to a value produced by this library, which may have been
/// subjected to limits.
ImageBufferSize { expected: usize, actual: usize },
/// A bit like returning `None` from an iterator.
/// We use it to differentiate between failing to seek to the next image in a sequence and the
/// absence of a next image. This is an error of the caller because they should have checked
/// the number of images by inspecting the header data returned when opening the image. This
/// library will perform the checks necessary to ensure that data was accurate or error with a
/// format error otherwise.
PolledAfterEndOfImage,
}
impl From<ParameterErrorKind> for ParameterError {
fn from(inner: ParameterErrorKind) -> Self {
ParameterError { inner }
}
}
impl fmt::Display for ParameterError {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
use ParameterErrorKind::*;
match self.inner {
ImageBufferSize { expected, actual } => {
write!(fmt, "wrong data size, expected {} got {}", expected, actual)
}
PolledAfterEndOfImage => write!(fmt, "End of image has been reached"),
}
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,204 @@
use super::{stream::FormatErrorInner, DecodingError, CHUNCK_BUFFER_SIZE};
use miniz_oxide::inflate::core::{decompress, inflate_flags, DecompressorOxide};
use miniz_oxide::inflate::TINFLStatus;
/// Ergonomics wrapper around `miniz_oxide::inflate::stream` for zlib compressed data.
pub(super) struct ZlibStream {
/// Current decoding state.
state: Box<DecompressorOxide>,
/// If there has been a call to decompress already.
started: bool,
/// A buffer of compressed data.
/// We use this for a progress guarantee. The data in the input stream is chunked as given by
/// the underlying stream buffer, and we will not read any more data until the current buffer
/// has been fully consumed. The zlib decompressor can not always consume all of the data when
/// it is in the middle of the stream: it processes full symbols, and the last bytes may need
/// special treatment. The exact reason isn't as important as the fact that the interface does
/// not promise full consumption. Now, the complication is that the _current_ chunking
/// information of PNG alone is not enough to determine this, as indeed the compressed stream
/// is the concatenation of all consecutive `IDAT`/`fdAT` chunks. We would need to inspect the
/// next chunk header.
///
/// Thus, there needs to be a buffer that allows fully clearing a chunk so that the next chunk
/// type can be inspected.
in_buffer: Vec<u8>,
/// The logical start of the `in_buffer`.
in_pos: usize,
/// Remaining buffered decoded bytes.
/// The decoder sometimes wants to inspect some already finished bytes for further decoding. So
/// we keep a total of 32KB of decoded data available as long as more data may be appended.
out_buffer: Vec<u8>,
/// The cursor position in the output stream as a buffer index.
out_pos: usize,
}
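// A minimal usage sketch of this internal API (`idat_chunks` is a hypothetical
// iterator over the payloads of consecutive IDAT chunks; error handling elided):
//
//     let mut zlib = ZlibStream::new();
//     let mut image_data = Vec::new();
//     for chunk in idat_chunks {
//         let mut consumed = 0;
//         while consumed < chunk.len() {
//             consumed += zlib.decompress(&chunk[consumed..], &mut image_data)?;
//         }
//     }
//     zlib.finish_compressed_chunks(&mut image_data)?;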
impl ZlibStream {
pub(crate) fn new() -> Self {
ZlibStream {
state: Box::default(),
started: false,
in_buffer: Vec::with_capacity(CHUNCK_BUFFER_SIZE),
in_pos: 0,
out_buffer: vec![0; 2 * CHUNCK_BUFFER_SIZE],
out_pos: 0,
}
}
pub(crate) fn reset(&mut self) {
self.started = false;
self.in_buffer.clear();
self.out_buffer.clear();
self.out_pos = 0;
*self.state = DecompressorOxide::default();
}
/// Fill the decoded buffer as far as possible from `data`.
/// On success returns the number of consumed input bytes.
pub(crate) fn decompress(
&mut self,
data: &[u8],
image_data: &mut Vec<u8>,
) -> Result<usize, DecodingError> {
const BASE_FLAGS: u32 = inflate_flags::TINFL_FLAG_PARSE_ZLIB_HEADER
| inflate_flags::TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF
| inflate_flags::TINFL_FLAG_HAS_MORE_INPUT;
self.prepare_vec_for_appending();
let (status, mut in_consumed, out_consumed) = {
let in_data = if self.in_buffer.is_empty() {
data
} else {
&self.in_buffer[self.in_pos..]
};
decompress(
&mut self.state,
in_data,
self.out_buffer.as_mut_slice(),
self.out_pos,
BASE_FLAGS,
)
};
if !self.in_buffer.is_empty() {
self.in_pos += in_consumed;
}
if self.in_buffer.len() == self.in_pos {
self.in_buffer.clear();
self.in_pos = 0;
}
if in_consumed == 0 {
self.in_buffer.extend_from_slice(data);
in_consumed = data.len();
}
self.started = true;
self.out_pos += out_consumed;
self.transfer_finished_data(image_data);
match status {
TINFLStatus::Done | TINFLStatus::HasMoreOutput | TINFLStatus::NeedsMoreInput => {
Ok(in_consumed)
}
err => Err(DecodingError::Format(
FormatErrorInner::CorruptFlateStream { err }.into(),
)),
}
}
/// Called after all consecutive IDAT chunks were handled.
///
/// The compressed stream can be split on arbitrary byte boundaries. This enables some cleanup
/// within the decompressor and flushing additional data which may have been kept back in case
/// more data were passed to it.
pub(crate) fn finish_compressed_chunks(
&mut self,
image_data: &mut Vec<u8>,
) -> Result<(), DecodingError> {
const BASE_FLAGS: u32 = inflate_flags::TINFL_FLAG_PARSE_ZLIB_HEADER
| inflate_flags::TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF;
if !self.started {
return Ok(());
}
let tail = self.in_buffer.split_off(0);
let tail = &tail[self.in_pos..];
let mut start = 0;
loop {
self.prepare_vec_for_appending();
let (status, in_consumed, out_consumed) = {
// TODO: we may be able to avoid the indirection through the buffer here.
// First append all buffered data and then create a cursor on the image_data
// instead.
decompress(
&mut self.state,
&tail[start..],
self.out_buffer.as_mut_slice(),
self.out_pos,
BASE_FLAGS,
)
};
start += in_consumed;
self.out_pos += out_consumed;
match status {
TINFLStatus::Done => {
self.out_buffer.truncate(self.out_pos);
image_data.append(&mut self.out_buffer);
return Ok(());
}
TINFLStatus::HasMoreOutput => {
let transferred = self.transfer_finished_data(image_data);
assert!(
transferred > 0 || in_consumed > 0 || out_consumed > 0,
"No more forward progress made in stream decoding."
);
}
err => {
return Err(DecodingError::Format(
FormatErrorInner::CorruptFlateStream { err }.into(),
));
}
}
}
}
/// Resize the vector to allow allocation of more data.
fn prepare_vec_for_appending(&mut self) {
if self.out_buffer.len().saturating_sub(self.out_pos) >= CHUNCK_BUFFER_SIZE {
return;
}
let buffered_len = self.decoding_size(self.out_buffer.len());
debug_assert!(self.out_buffer.len() <= buffered_len);
self.out_buffer.resize(buffered_len, 0u8);
}
fn decoding_size(&self, len: usize) -> usize {
// Allocate one more chunk size than currently or double the length while ensuring that the
// allocation is valid and that any cursor within it will be valid.
len
// This keeps the buffer size a power-of-two, required by miniz_oxide.
.saturating_add(CHUNCK_BUFFER_SIZE.max(len))
// Ensure all buffer indices are valid cursor positions.
// Note: both cut off and zero extension give correct results.
.min(u64::max_value() as usize)
// Ensure the allocation request is valid.
// TODO: maximum allocation limits?
.min(isize::max_value() as usize)
}
fn transfer_finished_data(&mut self, image_data: &mut Vec<u8>) -> usize {
let safe = self.out_pos.saturating_sub(CHUNCK_BUFFER_SIZE);
// TODO: allocation limits.
image_data.extend(self.out_buffer.drain(..safe));
self.out_pos -= safe;
safe
}
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,411 @@
use crate::common::BytesPerPixel;
/// The byte level filter applied to scanlines to prepare them for compression.
///
/// Compression in general benefits from repetitive data. The filter is a content-aware method of
/// compressing the range of occurring byte values to help the compression algorithm. Note that
/// this does not operate on pixels but on raw bytes of a scanline.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum FilterType {
NoFilter = 0,
Sub = 1,
Up = 2,
Avg = 3,
Paeth = 4,
}
impl Default for FilterType {
fn default() -> Self {
FilterType::Sub
}
}
impl FilterType {
/// u8 -> Self. Temporary solution until Rust provides a canonical one.
pub fn from_u8(n: u8) -> Option<FilterType> {
match n {
0 => Some(FilterType::NoFilter),
1 => Some(FilterType::Sub),
2 => Some(FilterType::Up),
3 => Some(FilterType::Avg),
4 => Some(FilterType::Paeth),
_ => None,
}
}
}
/// The filtering method for preprocessing scanline data before compression.
///
/// Adaptive filtering performs additional computation in an attempt to maximize
/// the compression of the data. [`NonAdaptive`] filtering is the default.
///
/// [`NonAdaptive`]: enum.AdaptiveFilterType.html#variant.NonAdaptive
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum AdaptiveFilterType {
Adaptive,
NonAdaptive,
}
impl Default for AdaptiveFilterType {
fn default() -> Self {
AdaptiveFilterType::NonAdaptive
}
}
fn filter_paeth(a: u8, b: u8, c: u8) -> u8 {
let ia = i16::from(a);
let ib = i16::from(b);
let ic = i16::from(c);
let p = ia + ib - ic;
let pa = (p - ia).abs();
let pb = (p - ib).abs();
let pc = (p - ic).abs();
if pa <= pb && pa <= pc {
a
} else if pb <= pc {
b
} else {
c
}
}
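// Worked example for the predictor above: a = 10, b = 20, c = 15 gives
// p = 10 + 20 - 15 = 15, so pa = 5, pb = 5, pc = 0. Neither `pa <= pb && pa <= pc`
// nor `pb <= pc` holds, so the predictor picks c = 15.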
pub(crate) fn unfilter(
filter: FilterType,
tbpp: BytesPerPixel,
previous: &[u8],
current: &mut [u8],
) -> std::result::Result<(), &'static str> {
use self::FilterType::*;
let bpp = tbpp.into_usize();
let len = current.len();
fn require_length(slice: &[u8], length: usize) -> Result<&[u8], &'static str> {
match slice.get(..length) {
None => Err("Filtering failed: not enough data in previous row"),
Some(slice) => Ok(slice),
}
}
match filter {
NoFilter => Ok(()),
Sub => {
let current = &mut current[..len];
for i in bpp..len {
current[i] = current[i].wrapping_add(current[i - bpp]);
}
Ok(())
}
Up => {
let current = &mut current[..len];
let previous = require_length(previous, len)?;
for i in 0..len {
current[i] = current[i].wrapping_add(previous[i]);
}
Ok(())
}
Avg => {
let current = &mut current[..len];
let previous = require_length(previous, len)?;
if bpp > len {
return Err("Filtering failed: bytes per pixel is greater than length of row");
}
for i in 0..bpp {
current[i] = current[i].wrapping_add(previous[i] / 2);
}
macro_rules! avg_tail {
($name:ident, $bpp:expr) => {
fn $name(current: &mut [u8], previous: &[u8]) {
let len = current.len();
let current = &mut current[..len];
let previous = &previous[..len];
let mut current = current.chunks_exact_mut($bpp);
let mut previous = previous.chunks_exact($bpp);
let mut lprevious = current.next().unwrap();
let _ = previous.next();
while let Some(pprevious) = previous.next() {
let pcurrent = current.next().unwrap();
for i in 0..$bpp {
let lprev = lprevious[i];
let pprev = pprevious[i];
pcurrent[i] = pcurrent[i].wrapping_add(
((u16::from(lprev) + u16::from(pprev)) / 2) as u8,
);
}
lprevious = pcurrent;
}
}
};
}
avg_tail!(avg_tail_8, 8);
avg_tail!(avg_tail_6, 6);
avg_tail!(avg_tail_4, 4);
avg_tail!(avg_tail_3, 3);
avg_tail!(avg_tail_2, 2);
avg_tail!(avg_tail_1, 1);
match tbpp {
BytesPerPixel::Eight => avg_tail_8(current, previous),
BytesPerPixel::Six => avg_tail_6(current, previous),
BytesPerPixel::Four => avg_tail_4(current, previous),
BytesPerPixel::Three => avg_tail_3(current, previous),
BytesPerPixel::Two => avg_tail_2(current, previous),
BytesPerPixel::One => avg_tail_1(current, previous),
}
Ok(())
}
Paeth => {
let current = &mut current[..len];
let previous = require_length(previous, len)?;
if bpp > len {
return Err("Filtering failed: bytes per pixel is greater than length of row");
}
for i in 0..bpp {
current[i] = current[i].wrapping_add(filter_paeth(0, previous[i], 0));
}
let mut current = current.chunks_exact_mut(bpp);
let mut previous = previous.chunks_exact(bpp);
let mut lprevious = current.next().unwrap();
let mut lpprevious = previous.next().unwrap();
for pprevious in previous {
let pcurrent = current.next().unwrap();
for i in 0..bpp {
pcurrent[i] = pcurrent[i].wrapping_add(filter_paeth(
lprevious[i],
pprevious[i],
lpprevious[i],
));
}
lprevious = pcurrent;
lpprevious = pprevious;
}
Ok(())
}
}
}
fn filter_internal(
method: FilterType,
bpp: usize,
len: usize,
previous: &[u8],
current: &mut [u8],
) -> FilterType {
use self::FilterType::*;
match method {
NoFilter => NoFilter,
Sub => {
for i in (bpp..len).rev() {
current[i] = current[i].wrapping_sub(current[i - bpp]);
}
Sub
}
Up => {
for i in 0..len {
current[i] = current[i].wrapping_sub(previous[i]);
}
Up
}
Avg => {
for i in (bpp..len).rev() {
current[i] = current[i].wrapping_sub(
((u16::from(current[i - bpp]) + u16::from(previous[i])) / 2) as u8,
);
}
for i in 0..bpp {
current[i] = current[i].wrapping_sub(previous[i] / 2);
}
Avg
}
Paeth => {
for i in (bpp..len).rev() {
current[i] = current[i].wrapping_sub(filter_paeth(
current[i - bpp],
previous[i],
previous[i - bpp],
));
}
for i in 0..bpp {
current[i] = current[i].wrapping_sub(filter_paeth(0, previous[i], 0));
}
Paeth
}
}
}
pub(crate) fn filter(
method: FilterType,
adaptive: AdaptiveFilterType,
bpp: BytesPerPixel,
previous: &[u8],
current: &mut [u8],
) -> FilterType {
use FilterType::*;
let bpp = bpp.into_usize();
let len = current.len();
match adaptive {
AdaptiveFilterType::NonAdaptive => filter_internal(method, bpp, len, previous, current),
AdaptiveFilterType::Adaptive => {
// Filter the current buffer with each filter type. Sum the absolute
// values of each filtered buffer treating the bytes as signed
// integers. Choose the filter with the smallest sum.
let mut filtered_buffer = vec![0; len];
filtered_buffer.copy_from_slice(current);
let mut scratch = vec![0; len];
// Initialize min_sum with the NoFilter buffer sum
let mut min_sum: usize = sum_buffer(&filtered_buffer);
let mut filter_choice = FilterType::NoFilter;
for &filter in [Sub, Up, Avg, Paeth].iter() {
scratch.copy_from_slice(current);
filter_internal(filter, bpp, len, previous, &mut scratch);
let sum = sum_buffer(&scratch);
if sum < min_sum {
min_sum = sum;
filter_choice = filter;
core::mem::swap(&mut filtered_buffer, &mut scratch);
}
}
current.copy_from_slice(&filtered_buffer);
filter_choice
}
}
}
// Helper function for Adaptive filter buffer summation
fn sum_buffer(buf: &[u8]) -> usize {
buf.iter().fold(0, |acc, &x| {
acc.saturating_add(i16::from(x as i8).abs() as usize)
})
}
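// Example: the byte 0x02 contributes 2, and 0xFE (-2 as i8) also contributes 2,
// so deviations in either direction are penalized equally.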
#[cfg(test)]
mod test {
use super::{filter, unfilter, AdaptiveFilterType, BytesPerPixel, FilterType};
use core::iter;
#[test]
fn roundtrip() {
// A multiple of 8, 6, 4, 3, 2, 1
const LEN: u8 = 240;
let previous: Vec<_> = iter::repeat(1).take(LEN.into()).collect();
let mut current: Vec<_> = (0..LEN).collect();
let expected = current.clone();
let adaptive = AdaptiveFilterType::NonAdaptive;
let mut roundtrip = |kind, bpp: BytesPerPixel| {
filter(kind, adaptive, bpp, &previous, &mut current);
unfilter(kind, bpp, &previous, &mut current).expect("Unfilter worked");
assert_eq!(
current, expected,
"Filtering {:?} with {:?} does not roundtrip",
bpp, kind
);
};
let filters = [
FilterType::NoFilter,
FilterType::Sub,
FilterType::Up,
FilterType::Avg,
FilterType::Paeth,
];
let bpps = [
BytesPerPixel::One,
BytesPerPixel::Two,
BytesPerPixel::Three,
BytesPerPixel::Four,
BytesPerPixel::Six,
BytesPerPixel::Eight,
];
for &filter in filters.iter() {
for &bpp in bpps.iter() {
roundtrip(filter, bpp);
}
}
}
#[test]
fn roundtrip_ascending_previous_line() {
// A multiple of 8, 6, 4, 3, 2, 1
const LEN: u8 = 240;
let previous: Vec<_> = (0..LEN).collect();
let mut current: Vec<_> = (0..LEN).collect();
let expected = current.clone();
let adaptive = AdaptiveFilterType::NonAdaptive;
let mut roundtrip = |kind, bpp: BytesPerPixel| {
filter(kind, adaptive, bpp, &previous, &mut current);
unfilter(kind, bpp, &previous, &mut current).expect("Unfilter worked");
assert_eq!(
current, expected,
"Filtering {:?} with {:?} does not roundtrip",
bpp, kind
);
};
let filters = [
FilterType::NoFilter,
FilterType::Sub,
FilterType::Up,
FilterType::Avg,
FilterType::Paeth,
];
let bpps = [
BytesPerPixel::One,
BytesPerPixel::Two,
BytesPerPixel::Three,
BytesPerPixel::Four,
BytesPerPixel::Six,
BytesPerPixel::Eight,
];
for &filter in filters.iter() {
for &bpp in bpps.iter() {
roundtrip(filter, bpp);
}
}
}
#[test]
// This tests that converting u8 to i8 doesn't overflow when taking the
// absolute value for adaptive filtering: -128_i8.abs() will panic in debug
// or produce garbage in release mode. The sum of 0..=255u8 should equal the
// sum of the absolute values of -128_i8..=127, or abs(-128..=0) + 1..=127.
fn sum_buffer_test() {
let sum = (0..=128).sum::<usize>() + (1..=127).sum::<usize>();
let buf: Vec<u8> = (0_u8..=255).collect();
assert_eq!(sum, crate::filter::sum_buffer(&buf));
}
}

View File

@@ -0,0 +1,81 @@
//! # PNG encoder and decoder
//!
//! This crate contains a PNG encoder and decoder. It supports reading of single lines or whole frames.
//!
//! ## The decoder
//!
//! The most important types for decoding purposes are [`Decoder`](struct.Decoder.html) and
//! [`Reader`](struct.Reader.html). They both wrap a `std::io::Read`.
//! `Decoder` serves as a builder for `Reader`. Calling `Decoder::read_info` reads from the `Read` until the
//! image data is reached.
//!
//! ### Using the decoder
//! ```
//! use std::fs::File;
//! // The decoder is a builder for Reader and can be used to set various decoding options
//! // via `Transformations`. The default output transformation is `Transformations::IDENTITY`.
//! let decoder = png::Decoder::new(File::open("tests/pngsuite/basi0g01.png").unwrap());
//! let mut reader = decoder.read_info().unwrap();
//! // Allocate the output buffer.
//! let mut buf = vec![0; reader.output_buffer_size()];
//! // Read the next frame. An APNG might contain multiple frames.
//! let info = reader.next_frame(&mut buf).unwrap();
//! // Grab the bytes of the image.
//! let bytes = &buf[..info.buffer_size()];
//! // Inspect more details of the last read frame.
//! let in_animation = reader.info().frame_control.is_some();
//! ```
//!
//! ## Encoder
//! ### Using the encoder
//!
//! ```no_run
//! // For reading and opening files
//! use std::path::Path;
//! use std::fs::File;
//! use std::io::BufWriter;
//!
//! let path = Path::new(r"/path/to/image.png");
//! let file = File::create(path).unwrap();
//! let ref mut w = BufWriter::new(file);
//!
//! let mut encoder = png::Encoder::new(w, 2, 1); // Width is 2 pixels and height is 1.
//! encoder.set_color(png::ColorType::Rgba);
//! encoder.set_depth(png::BitDepth::Eight);
//! encoder.set_source_gamma(png::ScaledFloat::from_scaled(45455)); // 1.0 / 2.2, scaled by 100000
//! encoder.set_source_gamma(png::ScaledFloat::new(1.0 / 2.2)); // 1.0 / 2.2, unscaled, but rounded
//! let source_chromaticities = png::SourceChromaticities::new( // Using unscaled instantiation here
//! (0.31270, 0.32900),
//! (0.64000, 0.33000),
//! (0.30000, 0.60000),
//! (0.15000, 0.06000)
//! );
//! encoder.set_source_chromaticities(source_chromaticities);
//! let mut writer = encoder.write_header().unwrap();
//!
//! let data = [255, 0, 0, 255, 0, 0, 0, 255]; // An array containing an RGBA sequence. First pixel is red and second pixel is black.
//! writer.write_image_data(&data).unwrap(); // Save
//! ```
//!
#![forbid(unsafe_code)]
#[macro_use]
extern crate bitflags;
pub mod chunk;
mod common;
mod decoder;
mod encoder;
mod filter;
mod srgb;
pub mod text_metadata;
mod traits;
mod utils;
pub use crate::{
common::*,
decoder::{Decoded, Decoder, DecodingError, Limits, OutputInfo, Reader, StreamingDecoder},
encoder::{Encoder, EncodingError, StreamWriter, Writer},
filter::{AdaptiveFilterType, FilterType},
};

View File

@@ -0,0 +1,30 @@
use crate::{ScaledFloat, SourceChromaticities};
/// Get the gamma that should be substituted for images conforming to the sRGB color space.
pub fn substitute_gamma() -> ScaledFloat {
// Value taken from https://www.w3.org/TR/2003/REC-PNG-20031110/#11sRGB
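// 45455 / 100_000 = 0.45455 ≈ 1 / 2.2, the gamma value the spec prescribes for sRGB images.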
ScaledFloat::from_scaled(45455)
}
/// Get the chromaticities that should be substituted for images conforming to the sRGB color space.
pub fn substitute_chromaticities() -> SourceChromaticities {
// Values taken from https://www.w3.org/TR/2003/REC-PNG-20031110/#11sRGB
SourceChromaticities {
white: (
ScaledFloat::from_scaled(31270),
ScaledFloat::from_scaled(32900),
),
red: (
ScaledFloat::from_scaled(64000),
ScaledFloat::from_scaled(33000),
),
green: (
ScaledFloat::from_scaled(30000),
ScaledFloat::from_scaled(60000),
),
blue: (
ScaledFloat::from_scaled(15000),
ScaledFloat::from_scaled(6000),
),
}
}

View File

@@ -0,0 +1,586 @@
//! # Text chunks (tEXt/zTXt/iTXt) structs and functions
//!
//! The [PNG spec](https://www.w3.org/TR/2003/REC-PNG-20031110/#11textinfo) optionally allows for
//! embedded text chunks in the file. They may appear either before or after the image data
//! chunks. There are three kinds of text chunks.
//! - `tEXt`: This has a `keyword` and `text` field, and is ISO 8859-1 encoded.
//! - `zTXt`: This is semantically the same as `tEXt`, i.e. it has the same fields and
//! encoding, but the `text` field is compressed before being written into the PNG file.
//! - `iTXt`: This chunk allows for its `text` field to be any valid UTF-8, and supports
//! compression of the text field as well.
//!
//! The `ISO 8859-1` encoding technically doesn't allow any control characters
//! to be used, but in practice these values are encountered anyway. This can
//! either be the extended `ISO-8859-1` encoding with control characters or the
//! `Windows-1252` encoding. This crate assumes the `ISO-8859-1` encoding is
//! used.
//!
//! ## Reading text chunks
//!
//! As a PNG is decoded, any text chunk encountered is appended to the
//! [`Info`](`crate::common::Info`) struct, in the `uncompressed_latin1_text`,
//! `compressed_latin1_text`, and the `utf8_text` fields depending on whether the encountered
//! chunk is `tEXt`, `zTXt`, or `iTXt`.
//!
//! ```
//! use std::fs::File;
//! use std::iter::FromIterator;
//! use std::path::PathBuf;
//!
//! // Opening a png file that has a zTXt chunk
//! let decoder = png::Decoder::new(
//! File::open(PathBuf::from_iter([
//! "tests",
//! "text_chunk_examples",
//! "ztxt_example.png",
//! ]))
//! .unwrap(),
//! );
//! let mut reader = decoder.read_info().unwrap();
//! // If the text chunk is before the image data frames, `reader.info()` already contains the text.
//! for text_chunk in &reader.info().compressed_latin1_text {
//! println!("{:?}", text_chunk.keyword); // Prints the keyword
//! println!("{:#?}", text_chunk); // Prints out the text chunk.
//! // To get the uncompressed text, use the `get_text` method.
//! println!("{}", text_chunk.get_text().unwrap());
//! }
//! ```
//!
//! ## Writing text chunks
//!
//! There are two ways to write text chunks: the first is to add the appropriate text structs directly to the encoder header before the header is written to file;
//! the second is to use the `write_text_chunk` method, which can add a text chunk at any point in the stream.
//!
//! ```
//! # use png::text_metadata::{ITXtChunk, ZTXtChunk};
//! # use std::env;
//! # use std::fs::File;
//! # use std::io::BufWriter;
//! # use std::iter::FromIterator;
//! # use std::path::PathBuf;
//! # let file = File::create(PathBuf::from_iter(["target", "text_chunk.png"])).unwrap();
//! # let ref mut w = BufWriter::new(file);
//! let mut encoder = png::Encoder::new(w, 2, 1); // Width is 2 pixels and height is 1.
//! encoder.set_color(png::ColorType::Rgba);
//! encoder.set_depth(png::BitDepth::Eight);
//! // Adding text chunks to the header
//! encoder
//! .add_text_chunk(
//! "Testing tEXt".to_string(),
//! "This is a tEXt chunk that will appear before the IDAT chunks.".to_string(),
//! )
//! .unwrap();
//! encoder
//! .add_ztxt_chunk(
//! "Testing zTXt".to_string(),
//! "This is a zTXt chunk that is compressed in the png file.".to_string(),
//! )
//! .unwrap();
//! encoder
//! .add_itxt_chunk(
//! "Testing iTXt".to_string(),
//! "iTXt chunks support all of UTF8. Example: हिंदी.".to_string(),
//! )
//! .unwrap();
//!
//! let mut writer = encoder.write_header().unwrap();
//!
//! let data = [255, 0, 0, 255, 0, 0, 0, 255]; // An array containing an RGBA sequence. First pixel is red and second pixel is black.
//! writer.write_image_data(&data).unwrap(); // Save
//!
//! // We can add a tEXt/zTXt/iTXt at any point before the encoder is dropped from scope. These chunks will be at the end of the png file.
//! let tail_ztxt_chunk = ZTXtChunk::new("Comment".to_string(), "A zTXt chunk after the image data.".to_string());
//! writer.write_text_chunk(&tail_ztxt_chunk).unwrap();
//!
//! // The fields of the text chunk are public, so they can be mutated before being written to the file.
//! let mut tail_itxt_chunk = ITXtChunk::new("Author".to_string(), "सायंतन खान".to_string());
//! tail_itxt_chunk.compressed = true;
//! tail_itxt_chunk.language_tag = "hi".to_string();
//! tail_itxt_chunk.translated_keyword = "लेखक".to_string();
//! writer.write_text_chunk(&tail_itxt_chunk).unwrap();
//! ```
#![warn(missing_docs)]
use crate::{chunk, encoder, DecodingError, EncodingError};
use flate2::write::ZlibEncoder;
use flate2::Compression;
use miniz_oxide::inflate::{decompress_to_vec_zlib, decompress_to_vec_zlib_with_limit};
use std::{convert::TryFrom, io::Write};
/// Default decompression limit for compressed text chunks.
pub const DECOMPRESSION_LIMIT: usize = 2097152; // 2 MiB
/// Text encoding errors that are wrapped by the standard EncodingError type
#[derive(Debug, Clone, Copy)]
pub(crate) enum TextEncodingError {
/// Unrepresentable characters in string
Unrepresentable,
/// Keyword longer than 79 bytes or empty
InvalidKeywordSize,
/// Error encountered while compressing text
CompressionError,
}
/// Text decoding error that is wrapped by the standard DecodingError type
#[derive(Debug, Clone, Copy)]
pub(crate) enum TextDecodingError {
/// Unrepresentable characters in string
Unrepresentable,
/// Keyword longer than 79 bytes or empty
InvalidKeywordSize,
/// Missing null separator
MissingNullSeparator,
/// Compressed text cannot be uncompressed
InflationError,
/// Needs more space to decompress
OutOfDecompressionSpace,
/// Using an unspecified value for the compression method
InvalidCompressionMethod,
/// Using a byte that is not 0 or 255 as compression flag in iTXt chunk
InvalidCompressionFlag,
/// Missing the compression flag
MissingCompressionFlag,
}
/// A generalized text chunk trait
pub trait EncodableTextChunk {
/// Encode text chunk as `Vec<u8>` to a `Write`
fn encode<W: Write>(&self, w: &mut W) -> Result<(), EncodingError>;
}
/// Struct representing a tEXt chunk
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct TEXtChunk {
/// Keyword field of the tEXt chunk. Needs to be between 1-79 bytes when encoded as Latin-1.
pub keyword: String,
/// Text field of tEXt chunk. Can be at most 2GB.
pub text: String,
}
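// ISO 8859-1 (Latin-1) bytes map one-to-one onto the first 256 Unicode scalar
// values, so casting each byte to `char` decodes it exactly.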
fn decode_iso_8859_1(text: &[u8]) -> String {
text.iter().map(|&b| b as char).collect()
}
fn encode_iso_8859_1(text: &str) -> Result<Vec<u8>, TextEncodingError> {
encode_iso_8859_1_iter(text).collect()
}
fn encode_iso_8859_1_into(buf: &mut Vec<u8>, text: &str) -> Result<(), TextEncodingError> {
for b in encode_iso_8859_1_iter(text) {
buf.push(b?);
}
Ok(())
}
fn encode_iso_8859_1_iter(text: &str) -> impl Iterator<Item = Result<u8, TextEncodingError>> + '_ {
text.chars()
.map(|c| u8::try_from(c as u32).map_err(|_| TextEncodingError::Unrepresentable))
}
fn decode_ascii(text: &[u8]) -> Result<&str, TextDecodingError> {
if text.is_ascii() {
// `from_utf8` cannot panic because we've already checked that `text` is ASCII-7.
// And this is the only safe way to get an ASCII-7 string from `&[u8]`.
Ok(std::str::from_utf8(text).expect("unreachable"))
} else {
Err(TextDecodingError::Unrepresentable)
}
}
impl TEXtChunk {
/// Constructs a new TEXtChunk.
/// Not sure whether it should take &str or String.
pub fn new(keyword: impl Into<String>, text: impl Into<String>) -> Self {
Self {
keyword: keyword.into(),
text: text.into(),
}
}
/// Decodes a slice of bytes to a String using Latin-1 decoding.
/// The decoder runs in strict mode, and any decoding errors are passed along to the caller.
pub(crate) fn decode(
keyword_slice: &[u8],
text_slice: &[u8],
) -> Result<Self, TextDecodingError> {
if keyword_slice.is_empty() || keyword_slice.len() > 79 {
return Err(TextDecodingError::InvalidKeywordSize);
}
Ok(Self {
keyword: decode_iso_8859_1(keyword_slice),
text: decode_iso_8859_1(text_slice),
})
}
}
impl EncodableTextChunk for TEXtChunk {
/// Encodes TEXtChunk to a Writer. The keyword and text are separated by a byte of zeroes.
fn encode<W: Write>(&self, w: &mut W) -> Result<(), EncodingError> {
let mut data = encode_iso_8859_1(&self.keyword)?;
if data.is_empty() || data.len() > 79 {
return Err(TextEncodingError::InvalidKeywordSize.into());
}
data.push(0);
encode_iso_8859_1_into(&mut data, &self.text)?;
encoder::write_chunk(w, chunk::tEXt, &data)
}
}
/// Struct representing a zTXt chunk
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ZTXtChunk {
/// Keyword field of the tEXt chunk. Needs to be between 1-79 bytes when encoded as Latin-1.
pub keyword: String,
/// Text field of zTXt chunk. It is compressed by default, but can be uncompressed if necessary.
text: OptCompressed,
}
/// Private enum encoding the compressed and uncompressed states of zTXt/iTXt text field.
#[derive(Clone, Debug, PartialEq, Eq)]
enum OptCompressed {
/// Compressed version of text field. Can be at most 2GB.
Compressed(Vec<u8>),
/// Uncompressed text field.
Uncompressed(String),
}
impl ZTXtChunk {
/// Creates a new ZTXt chunk.
pub fn new(keyword: impl Into<String>, text: impl Into<String>) -> Self {
Self {
keyword: keyword.into(),
text: OptCompressed::Uncompressed(text.into()),
}
}
pub(crate) fn decode(
keyword_slice: &[u8],
compression_method: u8,
text_slice: &[u8],
) -> Result<Self, TextDecodingError> {
if keyword_slice.is_empty() || keyword_slice.len() > 79 {
return Err(TextDecodingError::InvalidKeywordSize);
}
if compression_method != 0 {
return Err(TextDecodingError::InvalidCompressionMethod);
}
Ok(Self {
keyword: decode_iso_8859_1(keyword_slice),
text: OptCompressed::Compressed(text_slice.to_vec()),
})
}
/// Decompresses the inner text, mutating its own state. Can only handle decompressed text up to `DECOMPRESSION_LIMIT` bytes.
pub fn decompress_text(&mut self) -> Result<(), DecodingError> {
self.decompress_text_with_limit(DECOMPRESSION_LIMIT)
}
/// Decompresses the inner text, mutating its own state. Can only handle decompressed text up to `limit` bytes.
pub fn decompress_text_with_limit(&mut self, limit: usize) -> Result<(), DecodingError> {
match &self.text {
OptCompressed::Compressed(v) => {
let uncompressed_raw = match decompress_to_vec_zlib_with_limit(&v[..], limit) {
Ok(s) => s,
Err(err) if err.status == miniz_oxide::inflate::TINFLStatus::HasMoreOutput => {
return Err(DecodingError::from(
TextDecodingError::OutOfDecompressionSpace,
));
}
Err(_) => {
return Err(DecodingError::from(TextDecodingError::InflationError));
}
};
self.text = OptCompressed::Uncompressed(decode_iso_8859_1(&uncompressed_raw));
}
OptCompressed::Uncompressed(_) => {}
};
Ok(())
}
/// Decompresses the inner text, and returns it as a `String`.
/// If decompression uses more than 2 MiB, first call `decompress_text_with_limit`, and then this method.
pub fn get_text(&self) -> Result<String, DecodingError> {
match &self.text {
OptCompressed::Compressed(v) => {
let uncompressed_raw = decompress_to_vec_zlib(&v[..])
.map_err(|_| DecodingError::from(TextDecodingError::InflationError))?;
Ok(decode_iso_8859_1(&uncompressed_raw))
}
OptCompressed::Uncompressed(s) => Ok(s.clone()),
}
}
/// Compresses the inner text, mutating its own state.
pub fn compress_text(&mut self) -> Result<(), EncodingError> {
match &self.text {
OptCompressed::Uncompressed(s) => {
let uncompressed_raw = encode_iso_8859_1(s)?;
let mut encoder = ZlibEncoder::new(Vec::new(), Compression::fast());
encoder
.write_all(&uncompressed_raw)
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?;
self.text = OptCompressed::Compressed(
encoder
.finish()
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?,
);
}
OptCompressed::Compressed(_) => {}
}
Ok(())
}
}
impl EncodableTextChunk for ZTXtChunk {
fn encode<W: Write>(&self, w: &mut W) -> Result<(), EncodingError> {
let mut data = encode_iso_8859_1(&self.keyword)?;
if data.is_empty() || data.len() > 79 {
return Err(TextEncodingError::InvalidKeywordSize.into());
}
// Null separator
data.push(0);
// Compression method: the only valid value is 0, as of 2021.
data.push(0);
match &self.text {
OptCompressed::Compressed(v) => {
data.extend_from_slice(&v[..]);
}
OptCompressed::Uncompressed(s) => {
// This code may have a bug. Check for correctness.
let uncompressed_raw = encode_iso_8859_1(s)?;
let mut encoder = ZlibEncoder::new(data, Compression::fast());
encoder
.write_all(&uncompressed_raw)
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?;
data = encoder
.finish()
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?;
}
};
encoder::write_chunk(w, chunk::zTXt, &data)
}
}
/// Struct encoding an iTXt chunk
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ITXtChunk {
/// The keyword field. This needs to be between 1-79 bytes when encoded as Latin-1.
pub keyword: String,
/// Indicates whether the text will be (or was) compressed in the PNG.
pub compressed: bool,
/// A hyphen-separated list of languages that the keyword is translated to. This is ASCII-7 encoded.
pub language_tag: String,
/// Translated keyword. This is UTF-8 encoded.
pub translated_keyword: String,
/// Text field of iTXt chunk. It is compressed by default, but can be uncompressed if necessary.
text: OptCompressed,
}
impl ITXtChunk {
/// Constructs a new iTXt chunk. Leaves all but keyword and text to default values.
pub fn new(keyword: impl Into<String>, text: impl Into<String>) -> Self {
Self {
keyword: keyword.into(),
compressed: false,
language_tag: "".to_string(),
translated_keyword: "".to_string(),
text: OptCompressed::Uncompressed(text.into()),
}
}
pub(crate) fn decode(
keyword_slice: &[u8],
compression_flag: u8,
compression_method: u8,
language_tag_slice: &[u8],
translated_keyword_slice: &[u8],
text_slice: &[u8],
) -> Result<Self, TextDecodingError> {
if keyword_slice.is_empty() || keyword_slice.len() > 79 {
return Err(TextDecodingError::InvalidKeywordSize);
}
let keyword = decode_iso_8859_1(keyword_slice);
let compressed = match compression_flag {
0 => false,
1 => true,
_ => return Err(TextDecodingError::InvalidCompressionFlag),
};
if compressed && compression_method != 0 {
return Err(TextDecodingError::InvalidCompressionMethod);
}
let language_tag = decode_ascii(language_tag_slice)?.to_owned();
let translated_keyword = std::str::from_utf8(translated_keyword_slice)
.map_err(|_| TextDecodingError::Unrepresentable)?
.to_string();
let text = if compressed {
OptCompressed::Compressed(text_slice.to_vec())
} else {
OptCompressed::Uncompressed(
String::from_utf8(text_slice.to_vec())
.map_err(|_| TextDecodingError::Unrepresentable)?,
)
};
Ok(Self {
keyword,
compressed,
language_tag,
translated_keyword,
text,
})
}
/// Decompresses the inner text, mutating its own state. Can only handle decompressed text up to `DECOMPRESSION_LIMIT` bytes.
pub fn decompress_text(&mut self) -> Result<(), DecodingError> {
self.decompress_text_with_limit(DECOMPRESSION_LIMIT)
}
/// Decompresses the inner text, mutating its own state. Can only handle decompressed text up to `limit` bytes.
pub fn decompress_text_with_limit(&mut self, limit: usize) -> Result<(), DecodingError> {
match &self.text {
OptCompressed::Compressed(v) => {
let uncompressed_raw = match decompress_to_vec_zlib_with_limit(&v[..], limit) {
Ok(s) => s,
Err(err) if err.status == miniz_oxide::inflate::TINFLStatus::HasMoreOutput => {
return Err(DecodingError::from(
TextDecodingError::OutOfDecompressionSpace,
));
}
Err(_) => {
return Err(DecodingError::from(TextDecodingError::InflationError));
}
};
self.text = OptCompressed::Uncompressed(
String::from_utf8(uncompressed_raw)
.map_err(|_| TextDecodingError::Unrepresentable)?,
);
}
OptCompressed::Uncompressed(_) => {}
};
Ok(())
}
/// Decompresses the inner text, and returns it as a `String`.
/// If decompression takes more than 2 MiB, try `decompress_text_with_limit` followed by this method.
pub fn get_text(&self) -> Result<String, DecodingError> {
match &self.text {
OptCompressed::Compressed(v) => {
let uncompressed_raw = decompress_to_vec_zlib(&v[..])
.map_err(|_| DecodingError::from(TextDecodingError::InflationError))?;
String::from_utf8(uncompressed_raw)
.map_err(|_| TextDecodingError::Unrepresentable.into())
}
OptCompressed::Uncompressed(s) => Ok(s.clone()),
}
}
/// Compresses the inner text, mutating its own state.
pub fn compress_text(&mut self) -> Result<(), EncodingError> {
match &self.text {
OptCompressed::Uncompressed(s) => {
let uncompressed_raw = s.as_bytes();
let mut encoder = ZlibEncoder::new(Vec::new(), Compression::fast());
encoder
.write_all(uncompressed_raw)
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?;
self.text = OptCompressed::Compressed(
encoder
.finish()
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?,
);
}
OptCompressed::Compressed(_) => {}
}
Ok(())
}
}
impl EncodableTextChunk for ITXtChunk {
fn encode<W: Write>(&self, w: &mut W) -> Result<(), EncodingError> {
// Keyword
let mut data = encode_iso_8859_1(&self.keyword)?;
if data.is_empty() || data.len() > 79 {
return Err(TextEncodingError::InvalidKeywordSize.into());
}
// Null separator
data.push(0);
// Compression flag
if self.compressed {
data.push(1);
} else {
data.push(0);
}
// Compression method
data.push(0);
// Language tag
if !self.language_tag.is_ascii() {
return Err(EncodingError::from(TextEncodingError::Unrepresentable));
}
data.extend(self.language_tag.as_bytes());
// Null separator
data.push(0);
// Translated keyword
data.extend_from_slice(self.translated_keyword.as_bytes());
// Null separator
data.push(0);
// Text
if self.compressed {
match &self.text {
OptCompressed::Compressed(v) => {
data.extend_from_slice(&v[..]);
}
OptCompressed::Uncompressed(s) => {
let uncompressed_raw = s.as_bytes();
let mut encoder = ZlibEncoder::new(data, Compression::fast());
encoder
.write_all(uncompressed_raw)
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?;
data = encoder
.finish()
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?;
}
}
} else {
match &self.text {
OptCompressed::Compressed(v) => {
let uncompressed_raw = decompress_to_vec_zlib(&v[..])
.map_err(|_| EncodingError::from(TextEncodingError::CompressionError))?;
data.extend_from_slice(&uncompressed_raw[..]);
}
OptCompressed::Uncompressed(s) => {
data.extend_from_slice(s.as_bytes());
}
}
}
encoder::write_chunk(w, chunk::iTXt, &data)
}
}

View File

@@ -0,0 +1,43 @@
use std::io;
macro_rules! read_bytes_ext {
($output_type:ty) => {
impl<W: io::Read + ?Sized> ReadBytesExt<$output_type> for W {
#[inline]
fn read_be(&mut self) -> io::Result<$output_type> {
let mut bytes = [0u8; std::mem::size_of::<$output_type>()];
self.read_exact(&mut bytes)?;
Ok(<$output_type>::from_be_bytes(bytes))
}
}
};
}
macro_rules! write_bytes_ext {
($input_type:ty) => {
impl<W: io::Write + ?Sized> WriteBytesExt<$input_type> for W {
#[inline]
fn write_be(&mut self, n: $input_type) -> io::Result<()> {
self.write_all(&n.to_be_bytes())
}
}
};
}
/// Read extension to read big endian data
pub trait ReadBytesExt<T>: io::Read {
/// Read `T` from a bytes stream. Most significant byte first.
fn read_be(&mut self) -> io::Result<T>;
}
/// Write extension to write big endian data
pub trait WriteBytesExt<T>: io::Write {
/// Writes `T` to a bytes stream. Most significant byte first.
fn write_be(&mut self, _: T) -> io::Result<()>;
}
read_bytes_ext!(u8);
read_bytes_ext!(u16);
read_bytes_ext!(u32);
write_bytes_ext!(u32);
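// Usage sketch: `let length: u32 = reader.read_be()?;` reads four big-endian bytes
// (e.g. a PNG chunk length), and `writer.write_be(length)?;` writes them back out.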

View File

@@ -0,0 +1,469 @@
//! Utility functions
use std::iter::{repeat, StepBy};
use std::ops::Range;
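// Expands packed sub-byte samples in place, back to front. For example, with
// `channels = 1` and `bit_depth = 2`, the byte 0b11_10_01_00 yields the samples
// 3, 2, 1, 0 (leftmost pixel in the high-order bits), each handed to `func`
// together with its `channels`-wide output slot.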
#[inline(always)]
pub fn unpack_bits<F>(buf: &mut [u8], channels: usize, bit_depth: u8, func: F)
where
F: Fn(u8, &mut [u8]),
{
// Return early if empty. This enables subtracting `channels` later without overflow.
if buf.len() < channels {
return;
}
let bits = buf.len() / channels * bit_depth as usize;
let extra_bits = bits % 8;
let entries = bits / 8
+ match extra_bits {
0 => 0,
_ => 1,
};
let skip = match extra_bits {
0 => 0,
n => (8 - n) / bit_depth as usize,
};
let mask = ((1u16 << bit_depth) - 1) as u8;
let i = (0..entries)
.rev() // reverse iterator
.flat_map(|idx|
// this has to be reversed too
(0..8).step_by(bit_depth.into())
.zip(repeat(idx)))
.skip(skip);
let j = (0..=buf.len() - channels).rev().step_by(channels);
for ((shift, i), j) in i.zip(j) {
let pixel = (buf[i] & (mask << shift)) >> shift;
func(pixel, &mut buf[j..(j + channels)])
}
}
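// Expands a `channels`-channel row in place to `channels + 1` channels, appending
// one alpha byte per pixel: 0 where the pixel matches the tRNS key, 0xFF otherwise.
// Working back to front lets input and output share the same buffer.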
pub fn expand_trns_line(buf: &mut [u8], trns: &[u8], channels: usize) {
// Return early if empty. This enables subtracting `channels` later without overflow.
if buf.len() < (channels + 1) {
return;
}
let i = (0..=buf.len() / (channels + 1) * channels - channels)
.rev()
.step_by(channels);
let j = (0..=buf.len() - (channels + 1)).rev().step_by(channels + 1);
for (i, j) in i.zip(j) {
let i_pixel = i;
let j_chunk = j;
if &buf[i_pixel..i_pixel + channels] == trns {
buf[j_chunk + channels] = 0
} else {
buf[j_chunk + channels] = 0xFF
}
for k in (0..channels).rev() {
buf[j_chunk + k] = buf[i_pixel + k];
}
}
}
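// 16-bit variant of `expand_trns_line`: each pixel is `2 * channels` bytes wide
// and the appended alpha sample is two bytes (0x0000 or 0xFFFF).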
pub fn expand_trns_line16(buf: &mut [u8], trns: &[u8], channels: usize) {
let c2 = 2 * channels;
// Return early if empty. This enables subtracting `channels` later without overflow.
if buf.len() < (c2 + 2) {
return;
}
let i = (0..=buf.len() / (c2 + 2) * c2 - c2).rev().step_by(c2);
let j = (0..=buf.len() - (c2 + 2)).rev().step_by(c2 + 2);
for (i, j) in i.zip(j) {
let i_pixel = i;
let j_chunk = j;
if &buf[i_pixel..i_pixel + c2] == trns {
buf[j_chunk + c2] = 0;
buf[j_chunk + c2 + 1] = 0
} else {
buf[j_chunk + c2] = 0xFF;
buf[j_chunk + c2 + 1] = 0xFF
}
for k in (0..c2).rev() {
buf[j_chunk + k] = buf[i_pixel + k];
}
}
}
/// This iterator iterates over the different passes of an Adam7-encoded PNG
/// image.
/// The pattern (per 8×8 block) is:
/// 1 6 4 6 2 6 4 6
/// 7 7 7 7 7 7 7 7
/// 5 6 5 6 5 6 5 6
/// 7 7 7 7 7 7 7 7
/// 3 6 4 6 3 6 4 6
/// 7 7 7 7 7 7 7 7
/// 5 6 5 6 5 6 5 6
/// 7 7 7 7 7 7 7 7
///
#[derive(Clone)]
pub(crate) struct Adam7Iterator {
line: u32,
lines: u32,
line_width: u32,
current_pass: u8,
width: u32,
height: u32,
}
impl Adam7Iterator {
pub fn new(width: u32, height: u32) -> Adam7Iterator {
let mut this = Adam7Iterator {
line: 0,
lines: 0,
line_width: 0,
current_pass: 1,
width,
height,
};
this.init_pass();
this
}
/// Calculates the bounds of the current pass
fn init_pass(&mut self) {
let w = f64::from(self.width);
let h = f64::from(self.height);
let (line_width, lines) = match self.current_pass {
1 => (w / 8.0, h / 8.0),
2 => ((w - 4.0) / 8.0, h / 8.0),
3 => (w / 4.0, (h - 4.0) / 8.0),
4 => ((w - 2.0) / 4.0, h / 4.0),
5 => (w / 2.0, (h - 2.0) / 4.0),
6 => ((w - 1.0) / 2.0, h / 2.0),
7 => (w, (h - 1.0) / 2.0),
_ => unreachable!(),
};
self.line_width = line_width.ceil() as u32;
self.lines = lines.ceil() as u32;
self.line = 0;
}
/// The current pass number.
pub fn current_pass(&self) -> u8 {
self.current_pass
}
}
/// Iterates over the (passes, lines, widths)
impl Iterator for Adam7Iterator {
type Item = (u8, u32, u32);
fn next(&mut self) -> Option<Self::Item> {
if self.line < self.lines && self.line_width > 0 {
let this_line = self.line;
self.line += 1;
Some((self.current_pass, this_line, self.line_width))
} else if self.current_pass < 7 {
self.current_pass += 1;
self.init_pass();
self.next()
} else {
None
}
}
}
fn subbyte_pixels(scanline: &[u8], bits_pp: usize) -> impl Iterator<Item = u8> + '_ {
(0..scanline.len() * 8)
.step_by(bits_pp)
.map(move |bit_idx| {
let byte_idx = bit_idx / 8;
// sub-byte samples start in the high-order bits
let rem = 8 - bit_idx % 8 - bits_pp;
match bits_pp {
// evenly divides bytes
1 => (scanline[byte_idx] >> rem) & 1,
2 => (scanline[byte_idx] >> rem) & 3,
4 => (scanline[byte_idx] >> rem) & 15,
_ => unreachable!(),
}
})
}
/// Given pass, image width, and line number, produce an iterator of bit positions of pixels to copy
/// from the input scanline to the image buffer.
fn expand_adam7_bits(
pass: u8,
width: usize,
line_no: usize,
bits_pp: usize,
) -> StepBy<Range<usize>> {
let (line_mul, line_off, samp_mul, samp_off) = match pass {
1 => (8, 0, 8, 0),
2 => (8, 0, 8, 4),
3 => (8, 4, 4, 0),
4 => (4, 0, 4, 2),
5 => (4, 2, 2, 0),
6 => (2, 0, 2, 1),
7 => (2, 1, 1, 0),
_ => panic!("Adam7 pass out of range: {}", pass),
};
// the equivalent line number in progressive scan
let prog_line = line_mul * line_no + line_off;
// line width is rounded up to the next byte
let line_width = (width * bits_pp + 7) & !7;
let line_start = prog_line * line_width;
let start = line_start + (samp_off * bits_pp);
let stop = line_start + (width * bits_pp);
(start..stop).step_by(bits_pp * samp_mul)
}
/// Expands an Adam 7 pass
pub fn expand_pass(
img: &mut [u8],
width: u32,
scanline: &[u8],
pass: u8,
line_no: u32,
bits_pp: u8,
) {
let width = width as usize;
let line_no = line_no as usize;
let bits_pp = bits_pp as usize;
// pass is out of range but don't blow up
if pass == 0 || pass > 7 {
return;
}
let bit_indices = expand_adam7_bits(pass, width, line_no, bits_pp);
if bits_pp < 8 {
for (pos, px) in bit_indices.zip(subbyte_pixels(scanline, bits_pp)) {
let rem = 8 - pos % 8 - bits_pp;
img[pos / 8] |= px << rem as u8;
}
} else {
let bytes_pp = bits_pp / 8;
for (bitpos, px) in bit_indices.zip(scanline.chunks(bytes_pp)) {
for (offset, val) in px.iter().enumerate() {
img[bitpos / 8 + offset] = *val;
}
}
}
}
#[test]
fn test_adam7() {
/*
1646
7777
5656
7777
*/
let it = Adam7Iterator::new(4, 4);
let passes: Vec<_> = it.collect();
assert_eq!(
&*passes,
&[
(1, 0, 1),
(4, 0, 1),
(5, 0, 2),
(6, 0, 2),
(6, 1, 2),
(7, 0, 4),
(7, 1, 4)
]
);
}
#[test]
fn test_subbyte_pixels() {
let scanline = &[0b10101010, 0b10101010];
let pixels = subbyte_pixels(scanline, 1).collect::<Vec<_>>();
assert_eq!(pixels.len(), 16);
assert_eq!(pixels, [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]);
}
#[test]
fn test_expand_adam7_bits() {
let width = 32;
let bits_pp = 1;
let expected = |offset: usize, step: usize, count: usize| {
(0..count)
.map(move |i| step * i + offset)
.collect::<Vec<_>>()
};
for line_no in 0..8 {
let start = 8 * line_no * width;
assert_eq!(
expand_adam7_bits(1, width, line_no, bits_pp).collect::<Vec<_>>(),
expected(start, 8, 4)
);
let start = start + 4;
assert_eq!(
expand_adam7_bits(2, width, line_no, bits_pp).collect::<Vec<_>>(),
expected(start, 8, 4)
);
let start = (8 * line_no + 4) as usize * width as usize;
assert_eq!(
expand_adam7_bits(3, width, line_no, bits_pp).collect::<Vec<_>>(),
expected(start, 4, 8)
);
}
for line_no in 0..16 {
let start = 4 * line_no * width + 2;
assert_eq!(
expand_adam7_bits(4, width, line_no, bits_pp).collect::<Vec<_>>(),
expected(start, 4, 8)
);
let start = (4 * line_no + 2) * width;
assert_eq!(
expand_adam7_bits(5, width, line_no, bits_pp).collect::<Vec<_>>(),
expected(start, 2, 16)
)
}
for line_no in 0..32 {
let start = 2 * line_no * width + 1;
assert_eq!(
expand_adam7_bits(6, width, line_no, bits_pp).collect::<Vec<_>>(),
expected(start, 2, 16),
"line_no: {}",
line_no
);
let start = (2 * line_no + 1) * width;
assert_eq!(
expand_adam7_bits(7, width, line_no, bits_pp).collect::<Vec<_>>(),
expected(start, 1, 32)
);
}
}
#[test]
fn test_expand_pass_subbyte() {
let mut img = [0u8; 8];
let width = 8;
let bits_pp = 1;
expand_pass(&mut img, width, &[0b10000000], 1, 0, bits_pp);
assert_eq!(img, [0b10000000u8, 0, 0, 0, 0, 0, 0, 0]);
expand_pass(&mut img, width, &[0b10000000], 2, 0, bits_pp);
assert_eq!(img, [0b10001000u8, 0, 0, 0, 0, 0, 0, 0]);
expand_pass(&mut img, width, &[0b11000000], 3, 0, bits_pp);
assert_eq!(img, [0b10001000u8, 0, 0, 0, 0b10001000, 0, 0, 0]);
expand_pass(&mut img, width, &[0b11000000], 4, 0, bits_pp);
assert_eq!(img, [0b10101010u8, 0, 0, 0, 0b10001000, 0, 0, 0]);
expand_pass(&mut img, width, &[0b11000000], 4, 1, bits_pp);
assert_eq!(img, [0b10101010u8, 0, 0, 0, 0b10101010, 0, 0, 0]);
expand_pass(&mut img, width, &[0b11110000], 5, 0, bits_pp);
assert_eq!(img, [0b10101010u8, 0, 0b10101010, 0, 0b10101010, 0, 0, 0]);
expand_pass(&mut img, width, &[0b11110000], 5, 1, bits_pp);
assert_eq!(
img,
[0b10101010u8, 0, 0b10101010, 0, 0b10101010, 0, 0b10101010, 0]
);
expand_pass(&mut img, width, &[0b11110000], 6, 0, bits_pp);
assert_eq!(
img,
[0b11111111u8, 0, 0b10101010, 0, 0b10101010, 0, 0b10101010, 0]
);
expand_pass(&mut img, width, &[0b11110000], 6, 1, bits_pp);
assert_eq!(
img,
[0b11111111u8, 0, 0b11111111, 0, 0b10101010, 0, 0b10101010, 0]
);
expand_pass(&mut img, width, &[0b11110000], 6, 2, bits_pp);
assert_eq!(
img,
[0b11111111u8, 0, 0b11111111, 0, 0b11111111, 0, 0b10101010, 0]
);
expand_pass(&mut img, width, &[0b11110000], 6, 3, bits_pp);
assert_eq!(
[0b11111111u8, 0, 0b11111111, 0, 0b11111111, 0, 0b11111111, 0],
img
);
expand_pass(&mut img, width, &[0b11111111], 7, 0, bits_pp);
assert_eq!(
[
0b11111111u8,
0b11111111,
0b11111111,
0,
0b11111111,
0,
0b11111111,
0
],
img
);
expand_pass(&mut img, width, &[0b11111111], 7, 1, bits_pp);
assert_eq!(
[
0b11111111u8,
0b11111111,
0b11111111,
0b11111111,
0b11111111,
0,
0b11111111,
0
],
img
);
expand_pass(&mut img, width, &[0b11111111], 7, 2, bits_pp);
assert_eq!(
[
0b11111111u8,
0b11111111,
0b11111111,
0b11111111,
0b11111111,
0b11111111,
0b11111111,
0
],
img
);
expand_pass(&mut img, width, &[0b11111111], 7, 3, bits_pp);
assert_eq!(
[
0b11111111u8,
0b11111111,
0b11111111,
0b11111111,
0b11111111,
0b11111111,
0b11111111,
0b11111111
],
img
);
}