🎨 Example-based texture synthesis written in Rust 🦀

Overview

🎨 texture-synthesis


A light Rust API for Multiresolution Stochastic Texture Synthesis [1], a non-parametric example-based algorithm for image generation.

The repo also includes multiple code examples to get you started (along with test images), and you can find a compiled binary with a command line interface under the release tab.

Also see our talk, More Like This, Please! Texture Synthesis and Remixing from a Single Example, which explains this technique and its background in more depth:

Video thumbnail

Features and examples

1. Single example generation


Generate similar-looking images from a single example.

API - 01_single_example_synthesis

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    //create a new session
    let texsynth = ts::Session::builder()
        //load a single example image
        .add_example(&"imgs/1.jpg")
        .build()?;

    //generate an image
    let generated = texsynth.run(None);

    //save the image to the disk
    generated.save("out/01.jpg")
}

CLI

cargo run --release -- --out out/01.jpg generate imgs/1.jpg

You should get the following result with the images provided in this repo:

2. Multi example generation


We can also provide multiple example images and the algorithm will "remix" them into a new image.

API - 02_multi_example_synthesis

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    // create a new session
    let texsynth = ts::Session::builder()
        // load multiple example images
        .add_examples(&[
            &"imgs/multiexample/1.jpg",
            &"imgs/multiexample/2.jpg",
            &"imgs/multiexample/3.jpg",
            &"imgs/multiexample/4.jpg",
        ])
        // we can ensure all of them are the same size
        // this is optional; the generator doesn't care whether all images are the same size
        // however, if you have guides or other additional maps, those have to be the same size(s) as the corresponding example(s)
        .resize_input(ts::Dims {
            width: 300,
            height: 300,
        })
        // randomly initialize first 10 pixels
        .random_init(10)
        .seed(211)
        .build()?;

    // generate an image
    let generated = texsynth.run(None);

    // save the image to the disk
    generated.save("out/02.jpg")?;

    //save debug information to see "remixing" borders of different examples in map_id.jpg
    //different colors represent information coming from different maps
    generated.save_debug("out/")
}

CLI

cargo run --release -- --rand-init 10 --seed 211 --in-size 300x300 -o out/02.png --debug-out-dir out generate imgs/multiexample/1.jpg imgs/multiexample/2.jpg imgs/multiexample/3.jpg imgs/multiexample/4.jpg

You should get the following result with the images provided in this repo:

3. Guided Synthesis


We can also guide the generation by providing a "FROM"-"TO" transformation in the form of guide maps.

API - 03_guided_synthesis

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    let texsynth = ts::Session::builder()
        // NOTE: it is important that the example(s) and their corresponding guides have the same size(s)
        // you can ensure that by overriding the input image sizes with .resize_input()
        .add_example(ts::Example::builder(&"imgs/2.jpg").with_guide(&"imgs/masks/2_example.jpg"))
        // load target "heart" shape that we would like the generated image to look like
        // now the generator will take our target guide into account during synthesis
        .load_target_guide(&"imgs/masks/2_target.jpg")
        .build()?;

    let generated = texsynth.run(None);

    // save the image to the disk
    generated.save("out/03.jpg")
}

CLI

cargo run --release -- -o out/03.png generate --target-guide imgs/masks/2_target.jpg --guides imgs/masks/2_example.jpg -- imgs/2.jpg

NOTE: the -- is used to delimit the path to the example imgs/2.jpg. If you don't specify --, the path to the example will be treated as another guide path and there won't be any examples.

You should get the following result with the images provided in this repo:

4. Style Transfer


The texture synthesis API supports auto-generation of example guide maps, which produces a style-transfer-like effect.

API - 04_style_transfer

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    let texsynth = ts::Session::builder()
        // load example which will serve as our style, note you can have more than 1!
        .add_examples(&[&"imgs/multiexample/4.jpg"])
        // load target which will be the content
        // with style transfer, we do not need to provide example guides
        // they will be auto-generated if none were provided
        .load_target_guide(&"imgs/tom.jpg")
        .guide_alpha(0.8)
        .build()?;

    // generate an image that applies 'style' to "tom.jpg"
    let generated = texsynth.run(None);

    // save the result to the disk
    generated.save("out/04.jpg")
}

CLI

cargo run --release -- --alpha 0.8 -o out/04.png transfer-style --style imgs/multiexample/4.jpg --guide imgs/tom.jpg

You should get the following result with the images provided in this repo:

5. Inpaint


We can also fill in missing information with inpainting. By changing the seed, we will get a different version of the fill.

API - 05_inpaint

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    let texsynth = ts::Session::builder()
        // let the generator know which part we would like to fill in
        // if we had more examples, they would be additional information
        // the generator could use to inpaint
        .inpaint_example(
            &"imgs/masks/3_inpaint.jpg",
            // load a "corrupted" example with missing red information we would like to fill in
            ts::Example::builder(&"imgs/3.jpg")
                // we would also like to prevent sampling from "corrupted" red areas
                // otherwise, the generator will treat those as valid areas it can copy from in the example;
                // we could also use SampleMethod::Ignore to ignore the example altogether, but we
                // would then need at least 1 other example image to actually source from
                // example.set_sample_method(ts::SampleMethod::Ignore);
                .set_sample_method(&"imgs/masks/3_inpaint.jpg"),
            // Inpaint requires that inputs and outputs be the same size, so it's a required
            // parameter that overrides both `resize_input` and `output_size`
            ts::Dims::square(400),
        )
        // Ignored
        .resize_input(ts::Dims::square(200))
        // Ignored
        .output_size(ts::Dims::square(100))
        .build()?;

    let generated = texsynth.run(None);

    //save the result to the disk
    generated.save("out/05.jpg")
}

CLI

Note that the --out-size parameter determines the size for all inputs and outputs when using inpaint!

cargo run --release -- --out-size 400 --inpaint imgs/masks/3_inpaint.jpg -o out/05.png generate imgs/3.jpg

You should get the following result with the images provided in this repo:

6. Inpaint Channel


Instead of using a separate image for our inpaint mask, we can obtain the mask from a specific channel of the example itself. In this example, the alpha channel is a circle directly in the middle of the image.

API - 06_inpaint_channel

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    let texsynth = ts::Session::builder()
        // Let the generator know to use the example's alpha channel as the inpaint mask
        .inpaint_example_channel(
            ts::ChannelMask::A,
            &"imgs/bricks.png",
            ts::Dims::square(400),
        )
        .build()?;

    let generated = texsynth.run(None);

    //save the result to the disk
    generated.save("out/06.jpg")
}

CLI

cargo run --release -- --inpaint-channel a -o out/06.png generate imgs/bricks.png

You should get the following result with the images provided in this repo:

7. Tiling texture

We can make the generated image tile (meaning it will not have seams if you put multiple images together side-by-side). By invoking inpaint mode together with tiling, we can make an existing image tile.

API - 07_tiling_texture

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    // Let's start layering some of the "verbs" of texture synthesis
    // if we just run tiling_mode(true) we will generate a completely new image from scratch (try it!)
    // but what if we want to tile an existing image?
    // we can use inpaint!

    let texsynth = ts::Session::builder()
        // load a mask that specifies the borders of the image we are allowed to modify to make it tile
        .inpaint_example(
            &"imgs/masks/1_tile.jpg",
            ts::Example::new(&"imgs/1.jpg"),
            ts::Dims::square(400),
        )
        //turn on tiling mode!
        .tiling_mode(true)
        .build()?;

    let generated = texsynth.run(None);

    generated.save("out/07.jpg")
}

CLI

cargo run --release -- --inpaint imgs/masks/1_tile.jpg --out-size 400 --tiling -o out/07.bmp generate imgs/1.jpg

You should get the following result with the images provided in this repo:

8. Repeat texture synthesis transform on a new image

We can re-apply the coordinate transformation performed by texture synthesis onto a new image.

API - 08_repeat_transform

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    // create a new session
    let texsynth = ts::Session::builder()
        //load a single example image
        .add_example(&"imgs/1.jpg")
        .build()?;

    // generate an image
    let generated = texsynth.run(None);

    // now we can apply the same coordinate transformation that produced the generated image
    // onto a new image (which can be used to ensure a 1-1 mapping between multiple images)
    // NOTE: it is important to provide the same number of input images as the original examples,
    // otherwise there will be a coordinate mismatch
    let repeat_transform_img = generated
        .get_coordinate_transform()
        .apply(&["imgs/1_bw.jpg"])?;

    // save the image to the disk
    // 08 and 08_repeated images should match perfectly
    repeat_transform_img.save("out/08_repeated.jpg").unwrap();
    generated.save("out/08.jpg")
}

CLI

  1. First, we need to create a transform that can be reused

The notable bit here is --save-transform out/multi.xform, which creates a file that can later be used to generate new outputs.

cargo run --release -- --rand-init 10 --seed 211 --in-size 300x300 -o out/02.png generate --save-transform out/multi.xform imgs/multiexample/1.jpg imgs/multiexample/2.jpg imgs/multiexample/3.jpg imgs/multiexample/4.jpg

  2. Next, we use the repeat subcommand to repeat the transform with different inputs

The important bits here are the use of the repeat subcommand instead of generate, and --transform out/multi.xform, which specifies the transform to apply to the inputs. The only restriction is that the number of images you specify must match the original number of examples exactly. If the input images have different dimensions than the example images, they will be automatically resized for you.

cargo run --release -- -o out/02-repeated.png repeat --transform out/multi.xform imgs/multiexample/1.jpg imgs/multiexample/2.jpg imgs/multiexample/4.jpg imgs/multiexample/3.jpg

Also note that the normal parameters that are used with generate don't apply to the repeat subcommand and will be ignored.

9. Sample masks

Sample masks allow you to specify how an example image is sampled during generation.

API - 09_sample_masks

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    let session = ts::Session::builder()
        .add_example(
            ts::Example::builder(&"imgs/4.png").set_sample_method(ts::SampleMethod::Ignore),
        )
        .add_example(ts::Example::builder(&"imgs/5.png").set_sample_method(ts::SampleMethod::All))
        .seed(211)
        .output_size(ts::Dims::square(200))
        .build()?;

    // generate an image
    let generated = session.run(None);

    // save the image to the disk
    generated.save("out/09.png")
}

CLI

cargo run --release -- --seed 211 --out-size 200 --sample-masks IGNORE ALL --out 09_sample_masks.png generate imgs/4.png imgs/5.png

You should get the following result with the images provided in this repo:

10. Combining texture synthesis 'verbs'

We can also combine multiple modes together. For example, multi-example guided synthesis:
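
A minimal sketch of one such combination using only the calls shown in the earlier sections — multiple examples remixed under a single target guide (the output path out/10.jpg is just illustrative):

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    // combine multi-example synthesis with guided synthesis:
    // several examples are remixed while following a single target guide
    let texsynth = ts::Session::builder()
        .add_examples(&[
            &"imgs/multiexample/1.jpg",
            &"imgs/multiexample/2.jpg",
            &"imgs/multiexample/3.jpg",
        ])
        // the target shape the remixed output should follow
        // (example guides will be auto-generated, as in the style transfer example)
        .load_target_guide(&"imgs/masks/2_target.jpg")
        // how strongly the guide constrains the result
        .guide_alpha(0.8)
        .seed(211)
        .build()?;

    let generated = texsynth.run(None);
    generated.save("out/10.jpg")
}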

Or chaining multiple stages of generation together:

For more use cases and examples, please refer to the presentation "More Like This, Please! Texture Synthesis and Remixing from a Single Example"

Additional CLI functionality

Some functionality is only exposed through the CLI and not built into the library.

flip-and-rotate

This subcommand takes each example and applies flip and rotation transformations to it, generating additional example inputs. This subcommand doesn't support target or example guides.

Example: cargo run --release -- -o out/output.png flip-and-rotate imgs/1.jpg

Command line binary

  • Download the binary for your OS.
  • Or Install it from source.
    • Install Rust - The minimum required version is 1.37.0
    • Clone this repo
    • In a terminal cd to the directory you cloned this repository into
    • Run cargo install --path=cli
    • Or if you wish to see the texture as it is being synthesized cargo install --path=cli --features="progress"
  • Open a terminal
  • Navigate to the directory where you downloaded the binary, if you didn't just cargo install it
  • Run texture_synthesis --help to get a list of all of the options and commands you can run
  • Refer to the examples section in this readme for examples of running the binary

Notes

  • By default, generating output will use all of your logical cores
  • When using multiple threads for generation, the output image is not guaranteed to be deterministic for the same inputs. To have 100% determinism, you must use a thread count of one (see the sketch after this list), which can be done via
    • CLI - texture-synthesis --threads 1
    • API - SessionBuilder::max_thread_count(1)
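
A minimal sketch of a fully deterministic run using the API option listed above (the example and output paths are just illustrative):

use texture_synthesis as ts;

fn main() -> Result<(), ts::Error> {
    let session = ts::Session::builder()
        .add_example(&"imgs/1.jpg")
        // fixing the seed AND limiting generation to a single thread
        // makes the output reproducible across runs
        .seed(211)
        .max_thread_count(1)
        .build()?;

    let generated = session.run(None);
    generated.save("out/deterministic.jpg")
}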

Limitations

  • Struggles with complex semantics beyond pixel color (unless you guide it)
  • Not great with regular textures (seams can become obvious)
  • Cannot infer new information from existing information (only operates on what’s already there)
  • Designed for single exemplars or very small datasets (unlike Deep Learning based approaches)

Links/references

[1] [Opara & Stachowiak] "More Like This, Please! Texture Synthesis and Remixing from a Single Example"

[2] [Harrison] Image Texture Tools

[3] [Ashikhmin] Synthesizing Natural Textures

[4] [Efros & Leung] Texture Synthesis by Non-parametric Sampling

[5] [Wei & Levoy] Fast Texture Synthesis using Tree-structured Vector Quantization

[6] [De Bonet] Multiresolution Sampling Procedure for Analysis and Synthesis of Texture Images

[7] All the test images in this repo are from Unsplash

Contributing

Contributor Covenant

We welcome community contributions to this project.

Please read our Contributor Guide for more information on how to get started.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Comments
  • Replace Travis with GitHub action for linting & tests


    This is a first experiment to use @svartalf 's new Rust GitHub actions for verifying that there are no differences with rustfmt and no clippy warnings, instead of Travis CI.

    My hope is that it would be less setup, give better performance (Travis is capped at 5 concurrent builds per org), and provide a better user experience by showing the errors & warnings of a failing check directly in GitHub.

    Have run into some issues though, primarily that occasionally the clippy build simply doesn't want to start, which is a major blocker. Issue with GitHub Actions in general, or configuration error?

    Here is an example of such a build: https://github.com/EmbarkStudios/texture-synthesis/runs/238881653. Just sits on "Starting your workflow run..."

    opened by repi 23
  • Modify Threading Scheme to Improve Performance with Large Numbers of Threads


    Checklist

    • [x] I have read the Contributor Guide
    • [x] I have read and agree to the Code of Conduct
    • [x] I have added a description of my changes and why I'd like them included in the section below

    Description of Changes

    Disclaimer: This PR is a big proposed change and is not fully ready yet (inpaint and tiling are currently disabled but should be easily fixable). The code isn't the cleanest (feel free to let me know if it's too hard to read) and could be optimized further, but I wanted to put it up as a proof of concept to see what you think, make sure my big assumptions about how the code works were not very wrong, and see how this runs on other people's machines before I go further.

    Motivation: After seeing that @h3r2tic had virtually no improvements from my previous pr on their 32 core cpu I rented a larger cloud machine (64 logical cores, Intel Xeon) to test out how my previous changes behaved with many cores. It turns out I didn't see any improvement either. I then tried with lower numbers of threads (48, 32, 24, 16, 8, etc) and noticed that the run time actually seemed to improve as I decreased the number of threads (after some fiddling I found 22 threads was the best number for my instance) and barely went up until < 16 threads were in use. I figured this was a contention problem, so I added some additional logging to measure the time it took to acquire various locks and found (at least on that machine) the time it took to acquire a write lock on the rtree went up proportionally with the number of threads. Also as a side effect I believe it dwarfed any improvement from my previous change to the point where it was barely noticeable.

    Proposed Improvements: I've made several changes which I have seen reduce contention on my test machines.

    1. Replace the RTree with a grid of RTrees (see the illustrative sketch after this list)
       Advantages: Most of the time (at least in the later stages of synthesis) two threads may be working on totally different areas of the image and will not need to access the same sets of pixels, and therefore should not have to wait to read/write. To address this, after the first few stages of synthesis I replace the single rtree with a grid of rtrees. Writes only lock the rtree in the grid cell which the pixel is in, and reads just read from a cell and its neighbors. Overall this has seemed to improve contention considerably in my tests. (Possibly a multi-threaded rtree would be better than this, but I couldn't find one and it seemed more difficult / error-prone to build for likely little upside.)
       Drawbacks: The actual read itself is a little slower; this is balanced out by the fact that far more threads can read and write at the same time, but it should be optimized a little more for machines with fewer cores. Technically we can miss some neighbors if a pixel's k nearest neighbors are outside of the current cell + 8 adjacent cells; however, because we only use a grid after the first few steps, there should be enough pixels that this is never a problem. I have not seen quality degrade (see below).
       Places for improvement: The read can definitely be optimized more (especially important for CPUs with small numbers of cores). The number of grid cells is currently just an arbitrarily chosen constant but really should vary with the number of threads / size of the image / k.

    2. Don't write to the resolved queue until after the stage is complete (don't update it when a pixel is placed)
       After I switched to the rtree grid I noticed that although writes were really fast, the resolved queue seemed suddenly very hard to acquire a lock on. After looking at the code more, I decided that each thread could have its own resolved queue and I could combine them at the end of the stage. I don't think this actually changes the behavior of the code, because the only time we read from the resolved queue is to redo pixels done in previous stages, so we don't care what pixels other threads have updated in the current stage.
       Advantages: The resolved queue is no longer backed up, leading to a speed up.
       Disadvantages: None that I see.

    3. Remove the update queue entirely
       I removed the update queue to simplify things a little, because the writes were now fast enough that I could write every time instead of needing to batch.
       Advantages: Code is simpler, very small speed up.
       Disadvantages: I can't think of much, but possibly this could be bad not to have in the first few stages before we switch to the grid of rtrees. Could also be worse for smaller grid sizes.
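
    (Not part of the PR itself — just an illustrative sketch, with made-up names, of the "grid of locks" idea from point 1: partition coordinates into cells so that writers touching different regions never contend on the same lock.)

    use std::sync::RwLock;

    // each grid cell guards only the pixel coordinates that fall inside it,
    // so two threads writing to far-apart regions never block each other
    struct GridIndex {
        cell_size: u32,
        cols: u32,
        cells: Vec<RwLock<Vec<(u32, u32)>>>,
    }

    impl GridIndex {
        fn new(width: u32, height: u32, cell_size: u32) -> Self {
            let cols = (width + cell_size - 1) / cell_size;
            let rows = (height + cell_size - 1) / cell_size;
            let cells = (0..cols * rows).map(|_| RwLock::new(Vec::new())).collect();
            Self { cell_size, cols, cells }
        }

        fn cell_of(&self, x: u32, y: u32) -> usize {
            ((y / self.cell_size) * self.cols + (x / self.cell_size)) as usize
        }

        // a write locks only the single cell containing the pixel
        fn insert(&self, x: u32, y: u32) {
            self.cells[self.cell_of(x, y)].write().unwrap().push((x, y));
        }

        // a read locks only the cell containing the query point
        // (the idea described above also reads the 8 neighboring cells)
        fn read_cell(&self, x: u32, y: u32) -> Vec<(u32, u32)> {
            self.cells[self.cell_of(x, y)].read().unwrap().clone()
        }
    }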

    Results: I’ve tested on only two large machines so far.

    Google Cloud N1: 64 threads, Intel(R) Xeon(R) CPU @ 2.30GHz:

    Command: time cargo run --release -- --threads 64 --out-size 2048 --out out/01.jpg generate imgs/1.jpg
    BEFORE: ~1m16s
    AFTER: ~28.8s
    
    Command: time cargo run --release -- --threads 32 --out-size 2048 --out out/01.jpg generate imgs/1.jpg
    BEFORE: ~1m12.8s
    AFTER: ~35.7s
    
    Command: time cargo run --release -- --threads 64 --out-size 1024 --out out/01.jpg generate imgs/1.jpg
    BEFORE: ~17.5s
    AFTER: ~7.5s
    
    Command: time cargo run --release -- --threads 64 --out-size 512 --out out/01.jpg generate imgs/1.jpg
    BEFORE: ~4.1s
    AFTER: ~1.9s
    

    AWS Z1d: 48 threads, Intel® Xeon® Scalable (clock speed is a little unclear, but Amazon claims up to 4 GHz)

    Command: time cargo run --release -- --threads 48  --out-size 2048 --out out/01.jpg generate imgs/1.jpg
    BEFORE: ~41.5 seconds
    AFTER: ~22.5 seconds
    
    Command: time cargo run --release --  --threads 24  --out-size 2048 --out out/01.jpg generate imgs/1.jpg
    BEFORE: ~44.5 seconds
    AFTER: ~29 seconds
    
    Command: time cargo run --release --  --threads 48 --out-size 1024 --out out/01.jpg generate imgs/1.jpg
    BEFORE: ~9.7 seconds
    AFTER: ~5.7 seconds 
    
    Command: time cargo run --release --  --threads 48 --out-size 512 --out out/01.jpg generate imgs/1.jpg
    BEFORE: ~2.36 seconds
    AFTER: ~1.47 seconds
    

    I’m also curious to see what others get on cpus with large numbers of cores. @h3r2tic especially want to know how this runs on your 32 core cpu if you have the time.

    Image Quality: Because this change affects the threads and is fairly major I wanted to make sure that image quality did not change. It looks like tests still pass for everything that I didn’t break so the image hashes must be fairly close. I also manually inspected some large images but did not see any major changes. However it is important that the threads share one rtree for the first few stages before there are enough pixels to fill out a decent portion of the grid cells.

    Related Issues

    I don’t believe this is related to any mentioned issue

    opened by Mr4k 12
  • [Optimization] Read example pixels only when necessary


    Checklist

    • [x] I have read the Contributor Guide
    • [x] I have read and agree to the Code of Conduct
    • [x] I have added a description of my changes and why I'd like them included in the section below

    Description of Changes

    I'm not sure if this is the kind of contribution you are looking for but I did a little profiling (using instruments, screenshot below) and found out that (on my computer at least) a large amount of time was being taken up by the function k_neighs_to_color_pattern when creating the candidate patterns. It appears that looking up the pixels in the example images is somewhat costly and it is done in an inner loop which ends up contributing significantly to runtime.

    [Instruments profiling screenshot]

    To try to cut down on this cost I moved the pixel lookups for the candidate's neighbors into the better_match function. I only read each pixel right before it needs to be used in the cost function. This means that, because you are already stopping a lot of the cost computations early (when the current candidate cost exceeds the smallest candidate cost so far), fewer pixel lookups are performed. This does not change the algorithm at all.
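
    (For illustration only, with hypothetical names — not the actual patch: a cost loop that defers each pixel lookup until it is needed, so the early-exit check skips most of the reads.)

    // hypothetical helper: compute a candidate's cost against the target pattern,
    // fetching each example pixel only at the moment it is needed
    fn candidate_cost(
        neighbor_coords: &[(u32, u32)],
        fetch_pixel: impl Fn(u32, u32) -> [u8; 3], // the (costly) example-image lookup
        target: &[[u8; 3]],
        best_so_far: f32,
    ) -> Option<f32> {
        let mut cost = 0.0f32;
        for (i, &(x, y)) in neighbor_coords.iter().enumerate() {
            let p = fetch_pixel(x, y); // lookup deferred until this iteration
            for c in 0..3 {
                let d = p[c] as f32 - target[i][c] as f32;
                cost += d * d;
            }
            if cost >= best_so_far {
                // early exit: the remaining pixel lookups never happen
                return None;
            }
        }
        Some(cost)
    }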

    This results in around a 14% - 45% speed up (according to your benchmark test suite on my computer) depending on the example image(s) used and size of output texture. The average speed up seems to be more in the range of 14 - 25% (very unscientifically computed). I assume there are pathological cases where no performance gain could occur but I think they would be rare.

    About my computer: MacBook Pro (Retina, 13-inch, Early 2015), 2.9 GHz Intel Core i5 (4 logical cores), 8 GB 1867 MHz DDR3

    Edit: additionally tested with a MacBook Pro 2018, 2.6 GHz Intel Core i7 (12 logical cores), 32 GB 2400 MHz DDR4

    I also tried to change the code minimally but there was some refactoring.

    Disclaimer: I have not tested this on a wide variety of devices or high end cpus

    Related Issues

    I don't think this is related to any open issues.

    enhancement 
    opened by Mr4k 10
  • Improve performance by about 50%


    I ran cargo flamegraph, and it turns out a huge portion of the runtime was spent in find_match and find_better_match. It's a very, very hot loop.

    Almost all of the work done in the inner loop (find_better_match) is a function of two u8s... it can be memoized/precomputed! Also, the alpha masks can be rendered into these precomputed cost functions, avoiding the need to do any alpha computations in the loop.

    I looked hard for a way to improve dist_gaussian within find_better_match, but the best I could do was precompute outside the find_best_match loop. It still helped a lot.
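
    (Again, just an illustrative sketch with made-up names rather than the actual patch: since the per-pair cost depends only on two u8 values, it can be precomputed once into a 256x256 lookup table outside the hot loop.)

    // precompute cost(a, b) for every pair of u8 values (256 * 256 entries)
    fn build_cost_table(cost: impl Fn(u8, u8) -> f32) -> Vec<f32> {
        let mut table = vec![0.0f32; 256 * 256];
        for a in 0..=255u8 {
            for b in 0..=255u8 {
                table[(a as usize) << 8 | b as usize] = cost(a, b);
            }
        }
        table
    }

    // inside the hot loop, a single table lookup replaces the recomputed cost
    fn lookup(table: &[f32], a: u8, b: u8) -> f32 {
        table[(a as usize) << 8 | b as usize]
    }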

    The performance improvement ranges from 40%-60% in the examples. I ran them all before/after to get performance numbers:

    Baseline (f93022):

    $ time cargo run --release --example 01_single_example_synthesis
    real	0m35.907s
    user	1m42.443s
    sys	0m1.703s
    
    $ time cargo run --release --example 02_multi_example_synthesis
    real	0m35.078s
    user	1m40.583s
    sys	0m1.439s
    
    $ time cargo run --release --example 03_guided_synthesis
    real	0m54.007s
    user	2m52.701s
    sys	0m1.350s
    
    $ time cargo run --release --example 04_style_transfer
    real	1m1.137s
    user	3m2.301s
    sys	0m1.432s
    
    $ time cargo run --release --example 05_inpaint
    real	0m7.112s
    user	0m17.274s
    sys	0m0.606s
    
    $ time cargo run --release --example 06_tiling_texture
    real	0m17.468s
    user	0m44.012s
    sys	0m0.844s
    

    Patched (6a97e0):

    $ time cargo run --release --example 01_single_example_synthesis
    real	0m15.504s
    user	0m50.723s
    sys	0m0.714s
    
    $ time cargo run --release --example 02_multi_example_synthesis
    real	0m22.895s
    user	0m55.519s
    sys	0m1.365s
    
    $ time cargo run --release --example 03_guided_synthesis
    real	0m36.120s
    user	1m46.536s
    sys	0m1.203s
    
    $ time cargo run --release --example 04_style_transfer
    real	0m33.324s
    user	1m44.423s
    sys	0m0.888s
    
    $ time cargo run --release --example 05_inpaint
    real	0m2.747s
    user	0m7.814s
    sys	0m0.139s
    
    $ time cargo run --release --example 06_tiling_texture
    real	0m10.338s
    user	0m21.917s
    sys	0m0.756s
    

    I didn't see any visible artifacts in the output. This is a lossless optimization!

    opened by austinjones 10
  • could not compile `texture-synthesis`


    could not compile texture-synthesis

    Steps to reproduce the behavior:

    1. Go to the cloned repo \texture-synthesis
    2. Run in Terminal cargo install --path=cli
    3. See error

    stable-x86_64-pc-windows-msvc (default) rustc 1.56.1 (59eed8a2a 2021-11-01)

    Caused by:
      build failed
    PS C:\texture_synthesis> cargo update
        Updating git repository `https://github.com/EmbarkStudios/img_hash.git`
        Updating crates.io index
        Updating bstr v0.2.16 -> v0.2.17
        Updating bumpalo v3.7.0 -> v3.8.0
        Updating cc v1.0.69 -> v1.0.72
        Updating clang-sys v1.2.1 -> v1.3.0
        Updating cmake v0.1.45 -> v0.1.46
        Updating console v0.14.1 -> v0.15.0
        Updating crc32fast v1.2.1 -> v1.2.2
          Adding cty v0.2.2
        Updating encoding_rs v0.8.28 -> v0.8.29
        Updating flate2 v1.0.21 -> v1.0.22
        Updating half v1.7.1 -> v1.8.2
        Updating js-sys v0.3.53 -> v0.3.55
        Updating libc v0.2.101 -> v0.2.108
        Updating libloading v0.7.0 -> v0.7.2
        Removing maybe-uninit v2.0.0
          Adding once_cell v1.8.0
        Updating pdqselect v0.1.0 -> v0.1.1
        Updating pkg-config v0.3.19 -> v0.3.22
        Updating proc-macro2 v1.0.29 -> v1.0.32
        Updating quote v1.0.9 -> v1.0.10
        Removing raw-window-handle v0.3.3
          Adding raw-window-handle v0.3.4
          Adding raw-window-handle v0.4.2
        Updating serde_json v1.0.67 -> v1.0.72
        Updating structopt v0.3.23 -> v0.3.25
        Updating structopt-derive v0.4.16 -> v0.4.18
        Updating syn v1.0.75 -> v1.0.82
        Updating unicode-width v0.1.8 -> v0.1.9
        Updating wasm-bindgen v0.2.76 -> v0.2.78
        Updating wasm-bindgen-backend v0.2.76 -> v0.2.78
        Updating wasm-bindgen-macro v0.2.76 -> v0.2.78
        Updating wasm-bindgen-macro-support v0.2.76 -> v0.2.78
        Updating wasm-bindgen-shared v0.2.76 -> v0.2.78
        Updating web-sys v0.3.53 -> v0.3.55
        Updating x11-dl v2.18.5 -> v2.19.1
    PS C:\texture_synthesis> cargo install --path=cli
      Installing texture-synthesis-cli v0.8.2 (C:\texture_synthesis\cli)
        Updating git repository `https://github.com/EmbarkStudios/img_hash.git`
        Updating crates.io index
    warning: Patch `img_hash v2.1.0 (https://github.com/EmbarkStudios/img_hash.git?rev=c40da78#c40da789)` was not used in the crate graph.
    Check that the patched package version and available features are compatible
    with the dependency requirements. If the patch has a different version from
    what is locked in the Cargo.lock file, run `cargo update` to use the new
    version. This may also occur with an optional dependency that is not enabled.
       Compiling texture-synthesis v0.8.1 (C:\texture_synthesis\lib)
    error: failed to compile `texture-synthesis-cli v0.8.2 (C:\texture_synthesis\cli)`, intermediate artifacts can be found at `C:\texture_synthesis\target`
    
    Caused by:
      could not compile `texture-synthesis`
    
    Caused by:
      process didn't exit successfully: `rustc --crate-name texture_synthesis --edition=2018 lib\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no -C metadata=49f1ddcd6f2ccc14 -C extra-filename=-49f1ddcd6f2ccc14 --out-dir C:
    \texture_synthesis\target\release\deps -L dependency=C:\texture_synthesis\target\release\deps --extern crossbeam_utils=C:\texture_synthesis\target\release\deps\libcrossbeam_utils-7b399fb1046e66b6.rmeta --extern image=C:\texture_synthesis\target\release\deps\libimage-5e450dcef3d57d47.rmeta --extern num_cpus=C:\text
    ure_synthesis\target\release\deps\libnum_cpus-d3a6b4fc392a88ff.rmeta --extern rand=C:\texture_synthesis\target\release\deps\librand-1b42e77a27ea407f.rmeta --extern rand_pcg=C:\texture_synthesis\target\release\deps\librand_pcg-145f9981a0711c53.rmeta --extern rstar=C:\texture_synthesis\target\release\deps\librstar-e
    66dad66812cd298.rmeta` (exit code: 0xc000001d, STATUS_ILLEGAL_INSTRUCTION)
    PS C:\texture_synthesis>
    
    

    Please help, what am I doing wrong?

    bug 
    opened by Xsafo 8
  • Automatically deriving mask for inpainting from alpha channel


    Currently inpainting requires specifying two congruent images. It may be more convenient to just specify one image with erased regions (alpha channel).

    Current UX:

    $ convert q1.png -alpha extract q1_mask.png
    $ texture-synthesis --out-size=171x248 --in-size=171x248 --out=w.png --inpaint=q1_mask.png generate q1.png
    

    Desired UX:

    $ texture-synthesis --out=w.png alpha-inpaint q1.png
    
    enhancement help wanted Hacktoberfest 
    opened by vi 8
  • Add sample mask example


    Though #85 is a bug in structopt, we should still have at least one example of using sample masks, so that if users copy it and replace, e.g., the input image paths, they won't get this confusing error message.

    bug documentation enhancement 
    opened by Jake-Shadle 7
  • In Painting of Multiple of Maps based on History Maps as per presentation


    I'd like to be able to do texture synthesis on multiple maps in the same way, as described in the original talk from Anastasia.

    She mentioned there are some dependencies on HDR images and a separate executable, but I'd like to request that those are also made available.

    Thank you so much for your contributions!

    enhancement help wanted Hacktoberfest 
    opened by lkruel 7
  • Another CLI UI strangeness:


    Another CLI UI strangeness:

            --out-fmt <out-fmt>                       
                The format to save the generated image as.
                
                NOTE: this will only apply when stdout is specified via `-o -`, otherwise the image format is determined by
                the file extension of the path provided to `-o` [default: png]
    

    Explicit command-line option should override any sort of auto-detection.

    Originally posted by @vi in https://github.com/EmbarkStudios/texture-synthesis/issues/20#issuecomment-530419762

    bug 
    opened by Jake-Shadle 7
  • thread '<unnamed>' panicked at 'cannot access stderr during shutdown', src\libcore\option.rs:1188:5


    Using texture-synthesis-0.8.0-x86_64-pc-windows-msvc.zip and the provided example images:

    C:\texture-synthesis-0.8.0-x86_64-pc-windows-msvc>texture-synthesis.exe --inpaint 1_tile.jpg --out-size 400 --tiling -o out.bmp generate 1.jpg
    [00:00:01] ###########################------------- 66%
     stage   6 #############--------------------------- 32%
    thread '<unnamed>' panicked at 'cannot access stderr during shutdown', src\libcore\option.rs:1188:5
    

    Sometimes it works, sometimes it doesn't.

    bug 
    opened by petsuter 6
  • error: failed to fill whole buffer


    Hello, first of all thank you for 0.8.0 and the CLI documentation with the examples. This is awesome. The speed is noticeable.

    I get this error when trying to feed a 3k image (3180x3180) as an example: error: failed to fill whole buffer

    The machine is an 8 GB Mac laptop, OS 10.13.6. So is this normal? And is there a rule of thumb for the image size limit given the amount of RAM available?

    Thank you.

    bug 
    opened by materialjan 5
  • Looking for maintainers


    We at Embark are not actively using or developing these crates and would be open to transferring them to a maintainer or maintainers that would be more active. Please respond in this issue if you are interested.

    help wanted 
    opened by Jake-Shadle 1
  • Inpaint using other image as example runtime error


    I'm trying to use the library directly, without the CLI interface. I'm trying the inpainting example as explained inside the library in a doc comment above the inpaint_example() method; the cited comment is below:

        /// Inpaints an example. Due to how inpainting works, a size must also be
        /// provided, as all examples, as well as the inpaint mask, must be the same
        /// size as each other, as well as the final output image. Using
        /// `resize_input` or `output_size` is ignored if this method is called.
        ///
        /// To prevent sampling from the example, you can specify
        /// `SamplingMethod::Ignore` with `Example::set_sample_method`.
        ///
        /// See [`examples/05_inpaint`](https://github.com/EmbarkStudios/texture-synthesis/tree/main/lib/examples/05_inpaint.rs)
        ///
        /// # Examples
        ///
        /// ```no_run
        /// let tex_synth = texture_synthesis::Session::builder()
        ///     .add_examples(&[&"imgs/1.jpg", &"imgs/3.jpg"])
        ///     .inpaint_example(
        ///         &"masks/inpaint.jpg",
        ///         // This will prevent sampling from the imgs/2.jpg, note that
        ///         // we *MUST* provide at least one example to source from!
        ///         texture_synthesis::Example::builder(&"imgs/2.jpg")
        ///             .set_sample_method(texture_synthesis::SampleMethod::Ignore),
        ///         texture_synthesis::Dims::square(400)
        ///     )
        ///     .build().expect("failed to build session");
        /// ```
    

    Using exactly the script in this comment (present in the file session.rs:122), I got this error:

    thread 'main' panicked at 'index out of bounds: the len is 2 but the index is 2', lib/src/ms.rs:696:18
    stack backtrace:
       0: rust_begin_unwind
                 at /rustc/51126be1b260216b41143469086e6e6ee567647e/library/std/src/panicking.rs:577:5
       1: core::panicking::panic_fmt
                 at /rustc/51126be1b260216b41143469086e6e6ee567647e/library/core/src/panicking.rs:135:14
       2: core::panicking::panic_bounds_check
                 at /rustc/51126be1b260216b41143469086e6e6ee567647e/library/core/src/panicking.rs:77:5
       3: texture_synthesis::ms::Generator::next_pyramid_level
                 at ./lib/src/ms.rs:696:18
       4: texture_synthesis::ms::Generator::resolve
                 at ./lib/src/ms.rs:797:17
       5: texture_synthesis::session::Session::run
                 at ./lib/src/session.rs:55:9
    

    This behavior appears only if I pass texture_synthesis::SampleMethod::Ignore as the parameter for set_sample_method() of the example inside the inpaint_example method. By contrast, texture_synthesis::SampleMethod::All or an image as the argument works smoothly.

    I'm using cargo 1.60.0-nightly (95bb3c92b 2022-01-18)

    Anyone know how to solve this bug?

    bug 
    opened by MassimilianoBiancucci 0
  • Infer new information


    First of all, I wanted to congratulate this project because it is really incredible.

    I am doing a similar project, to generate huge texture images from one or several input images, but I have the problem that I would need to infer new information (like a neural network, but with your quality). I would like to know if you know of any repo or paper that addresses this problem.

    Kind regards

    enhancement 
    opened by pablovicentem 0
  • inpaint using other image as example


    My goal is to paint inside an image using an other image as example.

    For example I would like to paint in the first image where the 'A' is transparent (here rendered with checkered pattern) using the second image as example. The result would be something like the third image but with pixels generated so the border matches.

    I tried with commands like

    texture-synthesis --inpaint-channel a --sample-masks IGNORE ALL \
    -o output.png generate A-alpha.png dark-smoke.png
    # expecting it to paint inside A-alpha using examples from dark-smoke
    

    but it does not work and gives me

    as if it did use the alpha channel of A-alpha but chose to paint inside dark-smoke instead of painting inside A-alpha.

    opened by Hugo-Trentesaux 0
  • 16 Bit Image Support?


    I'm not sure if I'm just missing it somewhere, but I'm using a 16-bit grayscale (height map) PNG as the input and still getting an 8-bit RGB output. Am I just missing something, or is it not capable of this yet? I thought I saw some terrain tests, and I was assuming those were 16-bit.

    opened by arvinmoses 11