A priority queue for Rust with efficient change function.

Overview

PriorityQueue

https://travis-ci.org/garro95/priority-queue.svg?branch=master

This crate implements a priority queue with a function to change the priority of an object. Items and their priorities are stored in an IndexMap, and the queue itself is implemented as a heap of indexes.

Please read the API documentation here

Usage

To use this crate, add the following line to the [dependencies] section of your Cargo.toml:

priority-queue = "1.0.0"

Version numbers follow the semver convention.

Then use the data structure inside your Rust source code, as in the following example.

Remember that, if you need serde support, you should compile using --features serde.
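Equivalently, the feature can be enabled directly in Cargo.toml instead of on the command line (a sketch; adjust the version to the one you use):

```toml
[dependencies]
priority-queue = { version = "1.0.0", features = ["serde"] }
```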

Example

extern crate priority_queue; // not necessary in Rust edition 2018

use priority_queue::PriorityQueue;

fn main() {
    let mut pq = PriorityQueue::new();

    assert!(pq.is_empty());
    pq.push("Apples", 5);
    pq.push("Bananas", 8);
    pq.push("Strawberries", 23);

    assert_eq!(pq.peek(), Some((&"Strawberries", &23)));

    for (item, _) in pq.into_sorted_iter() {
        println!("{}", item);
    }
}

Note: in recent versions of Rust (edition 2018) the extern crate priority_queue is not necessary anymore!

Speeding up

You can use a custom BuildHasher for the underlying IndexMap and thereby achieve better performance. For example, you can create the queue with the speedy FxHash hasher:

use hashbrown::hash_map::DefaultHashBuilder;

let mut pq = PriorityQueue::<_, _, DefaultHashBuilder>::with_default_hasher();

Attention: FxHash does not offer any protection against DoS attacks. This means that some pathological inputs can make operations on the hashmap O(n^2). Use the standard hasher if you cannot control the inputs.

Benchmarks

Some benchmarks have been run to compare the performance of this priority queue with the standard BinaryHeap, also using the FxHash hasher. On a Ryzen 9 3900X, the benchmarks produced the following results:

test benchmarks::priority_change_on_large_queue     ... bench:          20 ns/iter (+/- 0)
test benchmarks::priority_change_on_large_queue_fx  ... bench:           7 ns/iter (+/- 0)
test benchmarks::priority_change_on_large_queue_std ... bench:     255,098 ns/iter (+/- 45,542)
test benchmarks::priority_change_on_small_queue     ... bench:          19 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_queue_fx  ... bench:           7 ns/iter (+/- 0)
test benchmarks::priority_change_on_small_queue_std ... bench:       1,741 ns/iter (+/- 24)
test benchmarks::push_and_pop                       ... bench:          37 ns/iter (+/- 0)
test benchmarks::push_and_pop_fx                    ... bench:          25 ns/iter (+/- 0)
test benchmarks::push_and_pop_on_large_queue        ... bench:         185 ns/iter (+/- 3)
test benchmarks::push_and_pop_on_large_queue_fx     ... bench:         118 ns/iter (+/- 1)
test benchmarks::push_and_pop_on_large_queue_std    ... bench:          33 ns/iter (+/- 6)
test benchmarks::push_and_pop_std                   ... bench:           4 ns/iter (+/- 0)

The priority change on the standard queue was obtained with the following code, where Entry is a wrapper struct pairing each item with its priority:

pq = pq.drain().map(|Entry(i, p)| {
    if i == 50_000 {
        Entry(i, p/2)
    } else {
        Entry(i, p)
    }
}).collect()

The interpretation of the benchmarks is that the data structure provided by this crate is generally slightly slower than the standard binary heap. On small queues (fewer than 10,000 elements), the change-priority operation emulated on the standard binary heap with the code above is also roughly as fast as the one provided by PriorityQueue. As the queue grows, however, PriorityQueue performs much faster on priority-change operations.

Contributing

Feel free to contribute to this project with pull requests and/or issues. All contributions should be under a license compatible with the GNU LGPL and with the MPL.

Changes

  • 1.0.5 Bug fix: #28

  • 1.0.4 Bug fix: #28

  • 1.0.3 Bug fix: #26

  • 1.0.2 Added documentation link to Cargo.toml so the link is shown in the results page of crates.io

  • 1.0.1 Documentation

  • 1.0.0 This release contains breaking changes!
    • From and FromIterator now accept custom hashers -- Breaking: every usage of from and from_iter must specify some type to help the type inference. To use the default hasher (RandomState), often it will be enough to add something like

      let pq: PriorityQueue<_, _> = PriorityQueue::from...

      or you can add a type definition like

      type Pq<I, P> = PriorityQueue<I, P>;

      and then use Pq::from() or Pq::from_iter()

    • Support no-std architectures

    • Add a method to remove elements at arbitrary positions

    • Remove take_mut dependency -- Breaking: change_priority_by signature has changed. Now it takes a priority_setter F: FnOnce(&mut P). If you want you can use the unsafe take_mut yourself or also use std::mem::replace

  • 0.7.0 Implement the push_increase and push_decrease convenience methods.

  • 0.6.0 Allow the usage of custom hasher

  • 0.5.4 Prevent panic on extending an empty queue

  • 0.5.3 New implementation of the Default trait avoids the requirement that P: Default

  • 0.5.2 Fix documentation formatting

  • 0.5.1 Add some documentation for iter_mut()

  • 0.5.0 Fix #7 implementing the iter_mut features

  • 0.4.5 Fix #6 for change_priority and change_priority_by

  • 0.4.4 Fix #6

  • 0.4.3 Fix #4 changing the way PriorityQueue serializes. Note that PriorityQueues serialized with old versions may be incompatible with the new one. The API, however, is unchanged.

  • 0.4.2 Improved performance using some unsafe code in the implementation.

  • 0.4.1 Support for serde when compiled with --features serde. serde is marked as optional and serde-test as a dev-dependency, so compiling the crate will no longer download and build serde-test, nor serde unless needed.

  • 0.4.0 Support for serde when compiled with cfg(serde)

  • 0.3.1 Fix #3

  • 0.3.0 Implement PartialEq and Eq traits

Comments
  • Functional incorrectness: criterion used for item identity is hash value instead of `Eq`-identity

    A very surprising defect. PriorityQueue ignores the Eq-implementation of the item type. Whether or not an item is already in the queue should not be determined by an implementation detail leaked from the underlying data structure, namely its hash value. Item identity should be determined by its Eq-identity.

    This defect also decreases performance in cases where comparing identities costs less than comparing hash values, which I presume is always the case.

    invalid 
    opened by sanmai-NL 18
  • DoublePriorityQueue panicked at 'called `Option::unwrap()` on a `None` value'

    I have now convinced myself that this is a bug in the library, as I'm commonly hitting this unwrap() that's inside priority_queue code. My use case is a Dijkstra implementation that uses

    while let Some((current_node, current_distance)) = queue.pop_min() {
    

    to get the next node out of a DoublePriorityQueue<usize, u32>. For several of my data sets and queue sizes of >~ 1 million elements, I'm hitting a None unwrap here:

    thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', priority-queue-1.2.0\src\double_priority_queue\mod.rs:611:22
    stack backtrace:
       0: std::panicking::begin_panic_handler
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\/library\std\src\panicking.rs:495
       1: core::panicking::panic_fmt
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\/library\core\src\panicking.rs:107
       2: core::panicking::panic
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\/library\core\src\panicking.rs:50
       3: enum$<core::option::Option<tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >, 1, 18446744073709551615, Some>::unwrap<tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >  
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\option.rs:746
       4: priority_queue::double_priority_queue::impl$8::heapify_min::closure$1<usize,u32,std::collections::hash::map::RandomState>
                 at priority-queue-1.2.0\src\double_priority_queue\mod.rs:607
       5: core::iter::adapters::map::map_fold::closure$0<tuple$<ref$<usize>,ref$<usize> >,tuple$<ref$<usize>,ref$<usize>,ref$<u32> >,tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,priority_queue::double_priority_queue::impl$8::heapify_min::closure$1,
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\map.rs:84
       6: core::iter::adapters::filter_map::filter_map_fold::closure$0<ref$<usize>,tuple$<ref$<usize>,ref$<usize> >,tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0,core::iter::adapt
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\filter_map.rs:37
       7: core::iter::traits::iterator::Iterator::fold<core::slice::iter::Iter<usize>,tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,core::iter::adapters::filter_map::filter_map_fold::closure$0>
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\traits\iterator.rs:2171
       8: core::iter::adapters::filter_map::impl$2::fold<tuple$<ref$<usize>,ref$<usize> >,core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0,tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,core::iter::
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\filter_map.rs:85
       9: core::iter::adapters::map::impl$2::fold<tuple$<ref$<usize>,ref$<usize>,ref$<u32> >,core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0>,priority_queue::double_pri
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\map.rs:124
      10: core::iter::adapters::map::impl$2::fold<tuple$<ref$<u32>,tuple$<ref$<usize>,ref$<usize>,ref$<u32> > >,core::iter::adapters::map::Map<core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\adapters\map.rs:124
      11: core::iter::traits::iterator::Iterator::reduce<core::iter::adapters::map::Map<core::iter::adapters::map::Map<core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0>,
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\traits\iterator.rs:2216
      12: core::iter::traits::iterator::Iterator::min_by<core::iter::adapters::map::Map<core::iter::adapters::map::Map<core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0>,
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\traits\iterator.rs:2791
      13: core::iter::traits::iterator::Iterator::min_by_key<core::iter::adapters::map::Map<core::iter::adapters::filter_map::FilterMap<core::slice::iter::Iter<usize>,priority_queue::double_priority_queue::impl$8::heapify_min::closure$0>,priority_queue::double_prio
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\iter\traits\iterator.rs:2763
      14: priority_queue::double_priority_queue::DoublePriorityQueue<usize,u32,std::collections::hash::map::RandomState>::heapify_min<usize,u32,std::collections::hash::map::RandomState>
                 at priority-queue-1.2.0\src\double_priority_queue\mod.rs:596
      15: priority_queue::double_priority_queue::DoublePriorityQueue<usize,u32,std::collections::hash::map::RandomState>::heapify<usize,u32,std::collections::hash::map::RandomState>
                 at priority-queue-1.2.0\src\double_priority_queue\mod.rs:585
      16: priority_queue::double_priority_queue::impl$5::pop_min::closure$0<usize,u32,std::collections::hash::map::RandomState>
                 at priority-queue-1.2.0\src\double_priority_queue\mod.rs:290
      17: enum$<core::option::Option<usize> >::and_then<usize,tuple$<usize,u32>,priority_queue::double_priority_queue::impl$5::pop_min::closure$0>
                 at /rustc/efd0483949496b067cd5f7569d1b28cd3d5d3c72\library\core\src\option.rs:1055
      18: priority_queue::double_priority_queue::DoublePriorityQueue<usize,u32,std::collections::hash::map::RandomState>::pop_min<usize,u32,std::collections::hash::map::RandomState>
                 at priority-queue-1.2.0\src\double_priority_queue\mod.rs:288
      19: [above caller code]
    

    I'm too unfamiliar with the library internals to track the cause down. However, this is an example of a queue state that seems to cause it:

    {
        2: (
            13,
            12183,
        ),
        4: (
            572405,
            12885,
        ),
        5: (
            5146,
            12947,
        ),
        1: (
            572406,
            12473,
        ),
        0: (
            570354,
            12327,
        ),
        12: (
            572397,
            12302,
        ),
        9: (
            569824,
            12186,
        ),
        7: (
            572381,
            12569,
        ),
        10: (
            572342,
            12712,
        ),
        11: (
            119993,
            12350,
        ),
        3: (
            572398,
            12495,
        ),
        6: (
            572382,
            12496,
        ),
        8: (
            572408,
            12675,
        ),
    }
    

    Happens on Rust Nightly 2021-10-27 and Rust Stable 1.56.0 x86_64-pc-windows-msvc.

    bug 
    opened by kleinesfilmroellchen 7
  • Function to remove item with least priority

    Something like PriorityQueue::pop(), but removing the item with the least priority instead of the greatest. I was looking at the code and thought this could be accomplished by using VecDeque instead of Vec (since VecDeque allows popping from both ends), but I don't know enough to know where to even start implementing it. Perhaps creating a new type called PriorityDeque would be a good idea. For the time being, I will just set my priorities to usize::MAX - priority to take advantage of the .pop() function, and then convert them back to recover my initial priority values.

    duplicate 
    opened by zyansheep 7
  • Double license under MPL 2 and LGPL 3

    Dear contributors, because of how Cargo works, it is currently easier to statically link external crates.

    The GNU LGPL license, however, imposes dynamic linking.

    I am opening this issue to get your consent about double licensing this crate under both the LGPL and the MPL.

    The Mozilla Public License is still a weak copyleft license, similar to the LGPL, but allowing static linking to software with a different license.

    @lnicola @tsionyx @BourgondAries @ocecaco @petr-tik

    opened by garro95 6
  • Require PartialOrd instead of Ord for priorities

    I was using priority-queue for implementing pathfinding and I ran into an issue when trying to use floats as priorities. Priorities require Ord, but f32 does not implement it, only PartialOrd. I was curious if Ord is really necessary, and as far as I can tell it's not. I replaced all Ord bounds with PartialOrd and it still compiled.

    I'm new to Rust, so please let me know if I missed something.

    Thank you!
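    A common workaround, independent of this crate, is a newtype that gives floats a total order so they satisfy an `Ord` bound (here via f32::total_cmp, available since Rust 1.62; TotalF32 is a hypothetical helper, and NaN handling is a deliberate choice):

```rust
use std::cmp::Ordering;

// Hypothetical newtype imposing a total order on f32 via total_cmp,
// so it satisfies the `Ord` bound required for priorities.
// Note: total_cmp gives NaNs a defined position in the order, which the
// derived PartialEq (where NaN != NaN) does not match.
#[derive(Debug, Clone, Copy, PartialEq)]
struct TotalF32(f32);

impl Eq for TotalF32 {}

impl PartialOrd for TotalF32 {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for TotalF32 {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.total_cmp(&other.0)
    }
}

fn main() {
    let mut v = vec![TotalF32(2.5), TotalF32(-1.0), TotalF32(0.5)];
    v.sort(); // possible only because TotalF32 implements Ord
    assert_eq!(v[0], TotalF32(-1.0));
}
```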

    opened by kpinter3-14 5
  • serde failing with non-primitive items

    I've been testing priority-queue for use in a home-grown simulator framework. One of the things I need is for the queue and its contents to be serializable via serde. When I test priority-queue using primitive types for the items, this works, but if I test using non-primitive types for the items, I get the error thread 'main' panicked at 'called Result::unwrap() on an Err value: ErrorImpl { code: KeyMustBeAString, line: 0, column: 0 }', src/libcore/result.rs:906:4. The following code should illustrate the problem; look in the area under the FAILING block.

    #[macro_use]
    extern crate serde_derive;
    
    extern crate serde;
    extern crate serde_json;
    extern crate uuid;
    extern crate priority_queue;
    
    use priority_queue::PriorityQueue;
    use std::cmp::{Ord, PartialOrd, Ordering};
    use std::default::Default;
    use std::time::Duration;
    use uuid::Uuid;
    
    // Abusing Duration as a mutable std::time::Instant
    type ActivationDate = Duration;
    
    /// Events are compared by EventComparables instances.
    ///
    /// EventComparables instances are similar to instances of time, but with the
    /// extra wrinkle of having a Uuid instance.  When EventComparables instances
    /// are compared, they are first compared by their activation date, with the
    /// date that occurs earlier being greater than a date that occurs later. This
    /// ordering exists because of how priority_queue::PriorityQueue works; it is
    /// naturally a max priority queue; using this ordering makes it a min
    /// priority queue. EventComparables go one step beyond using time as the key
    /// though; they  also have uuid::Uuid instances which are used to guarantee
    /// that every EventComparables is unique.  This ensures that if a set of
    /// events all  occur at the same time, they will still be executed in a
    /// deterministic order, every single time the queue's contents are executed.
    /// This is  critical for deterministic simulators.
    #[serde(default)]
    #[serde(deny_unknown_fields)]
    #[derive(Copy, Clone, Eq, PartialEq, Hash, Serialize, Deserialize, Debug)]
    struct EventComparables {
        /// This is when the event will be fired.
        activation_date: ActivationDate,
    
        /// This is a unique ID.  Except for ensuring that different events are
        /// guaranteed to compare as being different, it has no purpose.
        id: Uuid
    }
    
    /// Default events activate at time (0, 0)
    ///
    /// All default events fire at time (0, 0), but every single one has a unique
    /// id.  This ensures that regardless of the number of default events you
    /// create, they will always execute in the same order.
    impl Default for EventComparables {
        fn default() -> Self {
            EventComparables {activation_date: ActivationDate::new(0,0), id: Uuid::new_v4()}
        }
    }
    
    /// The priority queue depends on `Ord`. Explicitly implement the trait so the
    /// queue becomes a min-heap instead of a max-heap.
    impl Ord for EventComparables {
        fn cmp(&self, other: &Self) -> Ordering {
    
        // Notice that we flip the ordering on activation dates. In case of
        // a tie we compare by id - this step is necessary to make the
        // implementations of `PartialEq` and `Ord` consistent.
    
            other.activation_date.cmp(&self.activation_date)
                .then_with(|| self.id.cmp(&other.id))
        }
    }
    
    // `PartialOrd` needs to be implemented as well.
    impl PartialOrd for EventComparables {
        fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
            Some(self.cmp(other))
        }
    }
    
    /// A fake event to fire on some date.
    ///
    /// This is a fake event that I'll fire when the corresponding
    /// EventComparables instance comes up.  The contents are immaterial; I'm just
    /// using it for testing
    #[serde(default)]
    #[serde(deny_unknown_fields)]
    #[derive(Copy, Clone, Eq, PartialEq, Hash, Serialize, Deserialize, Debug)]
    struct ConcreteEvent1 {
        a: i32,
        b: i64
    }
    
    impl Default for ConcreteEvent1 {
        fn default() -> Self {
            ConcreteEvent1 {a: 0, b: 0}
        }
    }
    
    //////////////////////////////////////////////////////////////////////////////
    // Test 1
    //////////////////////////////////////////////////////////////////////////////
    
    fn test1() {
        println!("test1()");
    
        type PqType = PriorityQueue<i32, i32>;
    
        let mut pq: PqType = PriorityQueue::new();
        pq.push(0, 0);
        pq.push(1, 1);
    
        let serialized = serde_json::to_string(&pq).unwrap();
        println!("serialized = {:?}", serialized);
        let deserialized: PqType = serde_json::from_str(&serialized).unwrap();
        println!("deserialized = {:?}", deserialized);
    }
    
    //////////////////////////////////////////////////////////////////////////////
    // Test 2
    //////////////////////////////////////////////////////////////////////////////
    
    fn test2() {
        println!("\n\ntest2()");
    
        type PqType = PriorityQueue<i32, EventComparables>;
    
        let mut pq: PqType = PriorityQueue::new();
        pq.push(0, Default::default()); // Uuids will be different
        pq.push(1, Default::default());
    
        let serialized = serde_json::to_string(&pq).unwrap();
        println!("serialized = {:?}", serialized);
        let deserialized: PqType = serde_json::from_str(&serialized).unwrap();
        println!("deserialized = {:?}", deserialized);
    }
    
    //////////////////////////////////////////////////////////////////////////////
    // Test 3
    //////////////////////////////////////////////////////////////////////////////
    
    fn test3() {
        println!("\n\ntest3()");
    
        // Create some concrete events and comparables, and test to see that they
        // can be serialized/deserialized
    
        let ce1 = ConcreteEvent1{a: 12, b: 45};
        let ec1 = EventComparables {activation_date: ActivationDate::new(0, 1), id: Uuid::new_v4()};
    
        let ce2 = ConcreteEvent1{a: 37, b: 123};
        let ec2 = EventComparables {activation_date: ActivationDate::new(0, 1), id: Uuid::new_v4()};
    
        let serialized = serde_json::to_string(&ce1).unwrap();
        println!("serialized = {:?}", serialized);
        let deserialized: ConcreteEvent1 = serde_json::from_str(&serialized).unwrap();
        println!("deserialized = {:?}", deserialized);
        assert_eq!(ce1, deserialized);
    
        let serialized = serde_json::to_string(&ec1).unwrap();
        println!("serialized = {:?}", serialized);
        let deserialized: EventComparables = serde_json::from_str(&serialized).unwrap();
        println!("deserialized = {:?}", deserialized);
        assert_eq!(ec1, deserialized);
    
        let serialized = serde_json::to_string(&ce2).unwrap();
        println!("serialized = {:?}", serialized);
        let deserialized: ConcreteEvent1 = serde_json::from_str(&serialized).unwrap();
        println!("deserialized = {:?}", deserialized);
        assert_eq!(ce2, deserialized);
    
        let serialized = serde_json::to_string(&ec2).unwrap();
        println!("serialized = {:?}", serialized);
        let deserialized: EventComparables = serde_json::from_str(&serialized).unwrap();
        println!("deserialized = {:?}", deserialized);
        assert_eq!(ec2, deserialized);
    
    /****************************************************************************
     ****************************************************************************
     ****************************************************************************
    
               ########    ###    #### ##       #### ##    ##  ######
               ##         ## ##    ##  ##        ##  ###   ## ##    ##
               ##        ##   ##   ##  ##        ##  ####  ## ##
               ######   ##     ##  ##  ##        ##  ## ## ## ##   ####
               ##       #########  ##  ##        ##  ##  #### ##    ##
               ##       ##     ##  ##  ##        ##  ##   ### ##    ##
               ##       ##     ## #### ######## #### ##    ##  ######
    
     ****************************************************************************
     ****************************************************************************
     ****************************************************************************/
    
        if true {
            type PqType = PriorityQueue<ConcreteEvent1, EventComparables>;
    
            let mut pq: PqType = PriorityQueue::new();
            pq.push(ce1, ec1);
            pq.push(ce2, ec2);
    
            let serialized = serde_json::to_string(&pq).unwrap();
            println!("serialized = {:?}", serialized);
            let deserialized: PqType = serde_json::from_str(&serialized).unwrap();
            println!("deserialized = {:?}", deserialized);
        }
    }
    
    //////////////////////////////////////////////////////////////////////////////
    // main()
    //////////////////////////////////////////////////////////////////////////////
    
    fn main() {
        test1();
        test2();
        test3();
    }
    
    bug 
    opened by ckaran 5
  • Make `change_priority` API consistent

    Currently, for both PriorityQueue and DoublePriorityQueue, the return values of the change_priority and change_priority_by functions differ.

    In some cases where a customized (and complicated) struct P is used, change_priority_by is usually preferred, since it provides the flexibility to update the underlying structure. However, because nothing is returned, the caller has no idea whether the update succeeded. There are a few options:

    1. Make change_priority_by return an Option type (same as change_priority)
    2. Call get_priority first, and based on the result call change_priority_by
    3. Call push

    Options 2 and 3 are clearly not ideal: option 2 causes two IndexMap lookups, and option 3 may cause a clone. This PR submits the code change for option 1.

    opened by liwenjieQu 4
  • changing priorities via `iter_mut` doesn't work as I expected

    hello,

    Thank you for your work on this library!

    Based on the documentation, I had thought that changing the priorities of items in the course of iterating over them via iter_mut would update their priority rankings in the queue. However, unless I am missing something, that does not seem to be the case.

    Here is a simple example of what I mean in the form of a test:

    #[cfg(test)]
    mod tests {
        use std::time::*;
    
        #[test]
        fn verify_how_priority_queue_works() {
            let mut queue: priority_queue::PriorityQueue<&'static str, Duration> = Default::default();
    
            queue.push("a", Duration::new(0, 0));
            queue.push("b", Duration::new(0, 1));
            assert_eq!(queue.peek().unwrap().0, &"b");
            for (k, v) in queue.iter_mut() {
                if k == &"a" {
                    *v = Duration::new(0, 2);
                }
            }
            assert_eq!(queue.peek().unwrap().0, &"a"); // this assertion fails
        }
    }
    

    the peek call instead returns the "b" item.

    if this test is modified to use queue.change_priority("a", Duration::new(0, 2)) rather than assigning to the priority as it is above, the return value of peek is "a" as expected.

    The docs say:

    It's not an error, instead, to modify the priorities, because the heap will be rebuilt once the IterMut goes out of scope.

    Based on this, I had thought that changing the value of the priority when iterating over the queue via iter_mut would work the same as change_priority.

    Could you tell me if this is a bug, or a misunderstanding of how `PriorityQueue` works?

    what would be a good way to update all of the priorities at once?

    thank you for your help on this!

    bug 
    opened by jstrong-tios 4
  • Some usability improvements + tests/docs

    I'm using this crate to implement a timer-based reactor for callbacks; unfortunately, std::time::Instant does not impl Default, so I can't create a default PriorityQueue object.

    There is no good reason to require P: Default, so this patch series implements Default manually by delegating to new, which works for all P: Ord. Deriving Default appears to add a hidden requirement that P: Default.

    opened by BourgondAries 4
  • Bug in PriorityQueue::remove()

    The PriorityQueue::remove() does not work correctly in some cases.

    Here is minimal reproducer:

    use std::collections::hash_map::RandomState;
    
    use priority_queue::PriorityQueue;
    
    fn main() {
        let mut queue = PriorityQueue::<i32, i32, RandomState>::default();
    
        for i in 0..7 {
            queue.push(i, i);
        }
    
        queue.remove(&0);
    
        let mut last_priority = *queue.peek().unwrap().1;
        while let Some((i, priority)) = queue.pop() {
            dbg!(priority);
            assert!(last_priority >= priority);
            last_priority = priority;
        }
    }
    

    This will pop elements in order 6, 5, 3, 4, which is clearly not correct.

    I have partially debugged it, and I think the map, heap and qp are updated correctly and remain consistent, but the actual algorithm for removing an element from the heap is not correct. It seems that the remove() function assumes that heapify() will perform an up-heapify or down-heapify as needed, but in fact it only performs a down-heapify. So in cases where an up-heapify should have happened, the "heap" vector is no longer an actual heap.
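    The diagnosis above can be reproduced on a plain Vec-based max-heap: after swapping the last element into the removed slot, the replacement may violate the heap property in either direction, so both sifts are needed. A self-contained sketch (not the crate's actual code):

```rust
// Minimal max-heap on a Vec<i32>, illustrating arbitrary removal.

fn sift_up(heap: &mut Vec<i32>, mut i: usize) {
    while i > 0 {
        let parent = (i - 1) / 2;
        if heap[i] > heap[parent] {
            heap.swap(i, parent);
            i = parent;
        } else {
            break;
        }
    }
}

fn sift_down(heap: &mut Vec<i32>, mut i: usize) {
    loop {
        let (l, r) = (2 * i + 1, 2 * i + 2);
        let mut largest = i;
        if l < heap.len() && heap[l] > heap[largest] { largest = l; }
        if r < heap.len() && heap[r] > heap[largest] { largest = r; }
        if largest == i { break; }
        heap.swap(i, largest);
        i = largest;
    }
}

fn remove_at(heap: &mut Vec<i32>, i: usize) -> i32 {
    let last = heap.len() - 1;
    heap.swap(i, last);
    let removed = heap.pop().unwrap();
    if i < heap.len() {
        // The replacement may be smaller than its children OR larger than
        // its parent; only one of these calls will actually move it.
        sift_down(heap, i);
        sift_up(heap, i);
    }
    removed
}

fn main() {
    // A valid max-heap.
    let mut heap = vec![90, 40, 80, 10, 20, 70, 75];
    // Removing 10 (index 3) moves 75 into its slot; 75 > its parent 40,
    // so only a sift *up* restores the heap property here.
    remove_at(&mut heap, 3);
    let mut sorted = Vec::new();
    while !heap.is_empty() {
        sorted.push(remove_at(&mut heap, 0));
    }
    assert_eq!(sorted, vec![90, 80, 75, 70, 40, 20]);
}
```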

    bug 
    opened by michalsrb 3
  • remove elements at arbitrary positions

    I know that this might be out of scope for a traditional priority queue, but what do you think of supporting removing elements at arbitrary positions?

    I'd like to track cached chunks in a priority queue (least priority -> first to be dropped when a different chunk is no longer needed and moved to the cache), and when a chunk is moved into the "real" data structure it needs to be removed from the cache priority queue to prevent it from being removed too early.

    enhancement 
    opened by Trolldemorted 3
  • iter_over(priority) and iter_under(priority)

    Similar to #35, and the reasoning is the same: return an iterator over the elements above or under a given priority.

    Consider the following, which is rejected by Rust's borrow checker:

    while let Some((obj, priority)) = queue.peek_max() {
        if priority > 5 {
            queue.remove(obj);
        } else {
            break; // all elements in future iterations will be <= 5
        }
    }
    

    This is because you cannot mutably modify an object while you hold immutable borrows of it (obj, in this case).

    Thus, the equivalent workaround in the same time complexity would be:

    let mut keys = Vec::new();
    while let Some((obj, priority)) = queue.peek_max() {
        if priority > 5 {
            keys.push(obj.clone());
        } else {
            break;
        }
    }
    for key in keys {
        queue.remove(&key);
    }
    

    But in fact, this would loop forever, since the result of peek_max() never changes.

    This means the best you can do is O(n) because you are required to iterate over every element.

    This could be done by sorting O(logn) and returning a slice over the sorted structure.

    Proposed example would be:

    for (obj, _priority) in queue.iter_over(5) {
        println!("{}", obj);
    }
    
    enhancement 
    opened by simbleau 2
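    The borrow-checker problem in the issue above can also be sidestepped today by copying the priority out of the peeked reference, so the borrow ends before the queue is mutated. A sketch with std's BinaryHeap (this crate's peek/pop can be used the same way):

    ```rust
    use std::collections::BinaryHeap;

    fn main() {
        let mut queue: BinaryHeap<(i32, &str)> = BinaryHeap::new();
        queue.push((9, "a"));
        queue.push((7, "b"));
        queue.push((3, "c"));

        // `|&(p, _)| p > 5` copies the priority out of the peeked
        // reference, so no borrow is alive when pop() runs.
        while queue.peek().map_or(false, |&(p, _)| p > 5) {
            let (p, item) = queue.pop().unwrap();
            println!("removed {} (priority {})", item, p);
        }
        assert_eq!(queue.peek(), Some(&(3, "c")));
    }
    ```

    This is still O(k log n) for k removed elements, so the proposed iter_over would remain useful when the elements should be inspected without removal.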
  • clear_over(priority) and clear_under(priority)

    Implement two methods, clear_under() and clear_over(), that accept a priority as an argument.

    This will essentially truncate the size of the data structure, marking all items over/under this priority as trash/nonexistent.

    This would be an O(log(n)) operation, as it would take at most log(n) time to find the upper or lower bound of this priority in a sorted structure, and O(1) to mark the rest of the dataset as trash (e.g. an iter would return a slice from the minimum element to this new bound).

    This would be a great convenience, as it would give this data structure the ability to greatly speed up pruning operations. Currently pruning is O(n) at best, as you can only remove one element at a time. (It is possible to prune the entire set in O(1) using clear, but that is beside my point.)

    enhancement 
    opened by simbleau 3
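    Bulk pruning at a priority bound can be approximated today with other std structures; for example, a BTreeSet keyed by priority supports truncation at a bound via split_off. A hedged sketch (the O(log n) claim above depends on the structure; split_off's exact cost is an implementation detail of the B-tree):

    ```rust
    use std::collections::BTreeSet;

    fn main() {
        let mut set: BTreeSet<u32> = [1, 3, 5, 7, 9, 11].into_iter().collect();

        // clear_over(5): split the tree at the bound and discard the
        // upper part. split_off moves keys >= the argument into the
        // returned set, so splitting at 6 keeps everything <= 5.
        let trash = set.split_off(&6);
        assert_eq!(set.iter().copied().collect::<Vec<_>>(), vec![1, 3, 5]);
        assert_eq!(trash.iter().copied().collect::<Vec<_>>(), vec![7, 9, 11]);
        println!("kept {:?}", set);
    }
    ```

    For the heap-backed queue itself, a one-pass O(n) filter (rebuild from the retained elements) is the natural fallback, which is exactly the cost the issue hopes to beat.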
  • Documentation does not specify behavior for equal priorities

    How will the items be sorted and popped, if their priority is equal? Is it insertion order? Is it arbitrary order? Is the order deterministic/repeatable? Does it depend on hasher? It would be nice if the documentation specified what minimum guarantee I can assume. At the moment I must assume the worst case scenario of completely random order, since the documentation doesn't indicate otherwise.

    question 
    opened by MatejSloboda 3
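    Absent a documented guarantee, the safe assumption is indeed arbitrary tie order. A common workaround when insertion order matters is to make ties impossible by pairing each priority with a monotonically increasing counter; a sketch with std's BinaryHeap (the same idea applies to this crate by using a composite priority type):

    ```rust
    use std::cmp::Reverse;
    use std::collections::BinaryHeap;

    fn main() {
        let mut heap = BinaryHeap::new();
        let mut seq = 0u64;
        // All three items share priority 5; Reverse(seq) breaks ties
        // in favor of the earliest insertion (FIFO among equals),
        // because tuples compare lexicographically.
        for item in ["first", "second", "third"] {
            heap.push((5, Reverse(seq), item));
            seq += 1;
        }
        let order: Vec<_> = std::iter::from_fn(|| heap.pop())
            .map(|(_, _, item)| item)
            .collect();
        assert_eq!(order, vec!["first", "second", "third"]);
        println!("{:?}", order);
    }
    ```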
  • Export IndexMap Equivalent trait

    IndexMap's Equivalent trait allows a special implementation of equality to be used for lookups inside the hashmap, distinct from ==. IndexMap provides a default implementation of Equivalent for types that implement Eq, which simply applies ==.

    opened by garro95 0
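    The closest std analogue to Equivalent is the Borrow bound on HashMap::get, which already lets a String-keyed map be queried with a plain &str; Equivalent generalizes this by decoupling the lookup type from the key type entirely. A sketch of the std behavior:

    ```rust
    use std::collections::HashMap;

    fn main() {
        let mut map: HashMap<String, u32> = HashMap::new();
        map.insert("apples".to_string(), 5);

        // get() accepts any Q where String: Borrow<Q> (and Q hashes
        // like String), so this &str lookup allocates no temporary.
        assert_eq!(map.get("apples"), Some(&5));
        println!("{:?}", map.get("apples"));
    }
    ```

    Re-exporting Equivalent would let users of this crate write analogous allocation-free lookups without depending on indexmap directly.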
disk backed wal queue

Repository Template  Queue like disk backed WAL Pronounced Quál - from the German word for agony - because it is. Operations The basic concept is si

The Tremor Project 8 Jun 4, 2022
A lock-free multi-producer multi-consumer unbounded queue.

lf-queue A lock-free multi-producer multi-consumer unbounded queue. Examples [dependencies] lf-queue = "0.1" Single Producer - Single Consumer: use lf

Pierre Brouca 2 Sep 11, 2022
Message/job queue based on bonsaidb, similar to sqlxmq.

Bonsaimq Simple database message queue based on bonsaidb. The project is highly influenced by sqlxmq. Warning: This project is in early alpha and shou

Flix 6 Nov 8, 2022
A scalable message queue powered by a segmented, partitioned, replicated and immutable log.

A scalable message queue powered by a segmented, partitioned, replicated and immutable log. This is currently a work in progress. laminarmq is intende

Arindam Das 20 Dec 16, 2022
Fast, efficient, and robust memory reclamation for concurrent data structures

Seize Fast, efficient, and robust memory reclamation for concurrent data structures. Introduction Concurrent data structures are faced with the proble

Ibraheem Ahmed 240 Dec 23, 2022
Rust-algorithm-club - Learn algorithms and data structures with Rust

Rust Algorithm Club This repo is under construction. Most materials are written in Chinese. Check it out here if you are able to read Chinese. W

Weihang Lo 360 Dec 28, 2022
Ternary search tree collection in rust

tst Ternary search tree collection in rust with similar API to std::collections as it possible. Ternary search tree is a type of trie (sometimes calle

Alexey Pervushin 20 Dec 7, 2022
Array helpers for Rust's Vector and String types

array_tool Array helpers for Rust. Some of the most common methods you would use on Arrays made available on Vectors. Polymorphic implementations for

Daniel P. Clark 69 Dec 9, 2022
Generic array types in Rust

generic-array This crate implements generic array types for Rust. Requires minimum Rust version of 1.36.0, or 1.41.0 for From<[T; N]> implementations

Bartłomiej Kamiński 325 Dec 25, 2022
K-dimensional tree in Rust for fast geospatial indexing and lookup

kdtree K-dimensional tree in Rust for fast geospatial indexing and nearest neighbors lookup Crate Documentation Usage Benchmark License Usage Add kdtr

Rui Hu 152 Jan 2, 2023
Roaring bitmap implementation for Rust

RoaringBitmap This is not yet production ready. The API should be mostly complete now. This is a Rust port of the Roaring bitmap data structure, initi

Roaring bitmaps: A better compressed bitset 552 Jan 1, 2023
Rust Persistent Data Structures

Rust Persistent Data Structures Rust Persistent Data Structures provides fully persistent data structures with structural sharing. Setup To use rpds a

Diogo Sousa 883 Dec 31, 2022
Rust crate to extend io::Read & io::Write types with progress callbacks

progress-streams Rust crate to provide progress callbacks for types which implement io::Read or io::Write. Examples Reader extern crate progress_strea

Pop!_OS 19 Dec 3, 2022
Parameterized routing for generic resources in Rust

Usher Usher provides an easy way to construct parameterized routing trees in Rust. The nodes of these trees is naturally generic, allowing Usher to le

Isaac Whitfield 34 Oct 22, 2022
RiteLinked - LinkedHashMap & LinkedHashSet in Rust

RiteLinked -- HashMap-like containers that hold their key-value pairs in a user controllable order RiteLinked provides more up to date versions of Lin

Rite Database 52 Aug 19, 2022
A proof of concept implementation of cyclic data structures in stable, safe, Rust.

A proof of concept implementation of cyclic data structures in stable, safe, Rust. This demonstrates the combined power of the static-rc crate and the

null 157 Dec 28, 2022
Doubly-Linked List Implementation in Rust

Doubly-Linked List Implementation in Rust Purely for educational and recreational purposes. For real world production please use std::collections::Lin

Tsoding 9 Jul 28, 2022
Rust library for string parsing of basic data structures.

afmt Simple rust library for parsing basic data structures from strings. Usage You can specify string formats to any strucute, via the use of the fmt

Eduard 4 May 8, 2021
Hash Table Implementation in Rust

Hash Table Implementation in Rust Purely for educational and recreational purposes. For real world production please use std::collections::HashMap. Fo

Tsoding 11 Sep 6, 2022