How can I wait for a future in multiple threads #173
Comments
Strictly speaking the issue as titled is not possible. To poll a future you need exclusive access (`poll` takes `&mut self`), so a single future can't be waited on from multiple threads at once. What you may want to do is to create your own custom implementation of `Future` that fans the result out to multiple waiters. Does that make sense?
Yes, as I explained, I've done something like this in my code already and it is working. The question, I guess, is whether you would be interested in this being submitted as a pull request for inclusion in the futures library itself, or if I should just go ahead and implement it separately.
Perhaps! Mind if I take a peek at what you've got going on?
Well it's kind of mixed up in the other code, but it's in this file:

Here's where I keep a map of "additional" clients waiting for a future to complete; presumably the map key would stay the same, but the value would become a struct describing the desired behaviour:

Then in this function, I either create a direct future to produce a result, or I add a new oneshot to the list above. The first one fires off all the others on completion:

The idea is to refactor this logic out into some kind of reusable future. Presumably it could be constructed by consuming an existing future into some other kind of object, which could then hand out regular futures to be used and consumed by waiting as normal. Here's a simple example of how you might use it for lazy loading:

```rust
let lazy_loader_future = create_lazy_loader();

let lazy_loader_multiplexer = futures::multiplexer(lazy_loader_future);

// ...

for key in items {
    init_some_module(
        key,
        lazy_loader_multiplexer.spawn_future());
}
```
That actually sounds pretty reasonable to me; this looks to be some form of memoization going on, I think? Perhaps something along the lines of:

```rust
pub fn cell<T, E>() -> (Cell<T, E>, Complete<T, E>) {
    // ...
}

impl<T, E> Future for Cell<T, E> {
    type Item = CellComplete<T, E>;
    type Error = Canceled;
    // ...
}

impl<T, E> Clone for Cell<T, E> {
    // ...
}

impl<T, E> Deref for CellComplete<T, E> {
    type Target = Result<T, E>;
    // ...
}
```

That is, you'd have a cloneable `Cell` future, where each clone resolves to a `CellComplete` handle that derefs to the shared `Result<T, E>`.
Ok, so in your example the `Cell` would take over the role my oneshot map plays now. To be honest, I feel like there should be more API support for directly plumbing together futures and completes.
I think a more general solution should be written, that will make it easier to wait for a task completion on multiple threads. |
@jamespharaoh I'm not sure I quite understand what you mean by plumbing together futures and completes? In theory that's exactly what they're for.

@liran-ringel perhaps yeah! Just trying to figure out what that more general solution is :)
(oops didn't mean to close) |
@alexcrichton A solution might look like this:
@liran-ringel yeah that's actually basically what I was proposing above as well! |
I'll put something together for everyone to check out. I have a feeling I can actually go one better than we've discussed, but let's see what the borrow checker has to say about that before I promise anything ;-)

@alexcrichton I mean that I feel like connecting a future to a complete should be a one-liner rather than something you wire up by hand.
@jamespharaoh ok! Looking forward to seeing what you come up with |
There's more work to be done, but I'd like you, @alexcrichton, to check whether it's going in the right direction:
@liran-ringel Sorry for the delay, but I've now taken a look! I think that's along the right lines semantically, but we'll likely want an interface that lets us add a more efficient implementation eventually as well. I wonder if we could perhaps have an interface like:

```rust
trait Future {
    // ...
    fn shared(self) -> Shared<Self>
        where Self: Sized,
    {
        // ...
    }
}

struct Shared<F> {
    // ...
}

impl<F> Clone for Shared<F> {
    // ...
}

impl<F: Future> Future for Shared<F> {
    type Item = SharedItem<F::Item>;
    type Error = SharedError<F::Error>;
    // ...
}

struct SharedItem<T> { /* ... */ }
struct SharedError<T> { /* ... */ }

impl<T> Deref for SharedItem<T> {
    type Target = T;
    // ...
}

impl<T> Deref for SharedError<T> {
    type Target = T;
    // ...
}
```

That is, we could hide the sharing machinery inside the `Shared` wrapper, handing the item and error back through `Deref` wrappers so callers mostly don't notice the indirection.
@alexcrichton Before I came up with the previous solution, I tried to do something similar to the `shared` interface you describe. To simplify that, I created a future wrapper first. I will try again to implement the `shared` approach.
Yes, depending on the interface you might or might not have problems with synchronization. Right now, though, the memory requirement is O(number of clones), which seems unfortunate when it should in theory be O(1)?
I've also had a play with this but got a little confused trying to implement my own future. I'm still quite new to Rust and so am still getting used to some of the more complex things.

The "clever" thing I am still hoping to do, however, is to provide some way to simply clone a future and have it work as expected. I now believe it should be possible to implement that.

Firstly, does anyone with more knowledge than me think this is possible? Secondly, do people think this is desirable?

Underneath, the implementation would replace the boxed future with a shared, cloneable wrapper. Of course, the details are still to be worked out.
I don't think we'll be able to clone arbitrary futures, but cloning a particular future seems reasonable to me. That is, creating a custom future wrapper that's cloneable sounds like it can work. Other than that, sounds reasonable!
Yeah, well, that was what the wrapper idea was about. Possibly we could provide transparent cloneability for boxed futures inside some kinds of wrappers, but again, it's just plumbing and not really necessary.

I will have another crack at this when I get a chance. I got a bit confused with the library when I tried it before; I am guessing the complexity is related to the "zero-cost" design goal you discuss in your blog post about the library.
Ok, let's see how far implementing the `shared` interface gets us:
@alexcrichton What do you say about that?
I didn't dig too much into the implementation details, but at a conceptual level at least it looks reasonable to me!
I'm happy this has reached a public release, and it has simplified my code a lot. I'm a bit embarrassed I didn't contribute anything concrete, but having used it, I have a suggestion for a further refinement. You can see my code here: https://github.com/wellbehavedsoftware/rzbackup/blob/master/src/misc/cloning_shared_future.rs

Basically this transforms a shared future into one which resolves to a plain clone of the underlying item, rather than a wrapper you have to deref.

Probably you could add something like this to the library itself.

Let me know what you think and I can do a pull request.
In fact, I wonder if…
@jamespharaoh I think a refinement along those lines is a good idea!
Makes sense to me too! Coming back to this issue, I'm actually going to close it now that we have `Shared`.
I can't seem to make this work, perhaps I'm being dumb! In any case, I think this should be in the documentation as an example.
In my case, I have background processes loading data for a given ID. Obviously if a piece of data is already being loaded then I want to join the existing waiters rather than starting a new background process.
I've implemented this using a `Map<Id, Vec<Complete>>`, where the first loader triggers the completion of the subsequent loaders when it has completed. This is a lot of boilerplate.

I've tried all sorts of things to get this to work but somehow I can't get anything else to compile. Waiting for a future consumes it, so I can only do that once. I have tried to replace the future with a new one and wait on that, like I might do in JavaScript, but this also doesn't work.
If anyone can show me an example then that would be great, if not then I'll probably create an abstraction around my current method and submit this as a pull request for the library.