
Consider passing the Task that is driving the future to Future::poll and renaming the function #129

Closed
rjuse opened this issue Sep 6, 2016 · 49 comments

@rjuse commented Sep 6, 2016

Currently Future::poll seems to be expected to call task::park which then fetches the current task from TLS and panics if there is no task in TLS.

This results in an unintuitive API (it's not clear at first glance that poll()'s expected interface/implementation is related to tasks) and a potential run-time failure that could be checked at compile time.

So my suggestion is to pass the task driving the Future explicitly to Future::poll as an additional function argument (either as a Task reference, a closure calling task::park() if that's enough, or some similar mechanism), rather than storing it in the TLS variable CURRENT_TASK.
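
For concreteness, the shape being suggested would look roughly like this (an illustrative sketch against the futures 0.1-era types, not an existing futures-rs trait; the trait name is made up):

use futures::Poll;
use futures::task::Task;

// Illustrative only: the task driving the future becomes an explicit
// argument instead of being fetched from TLS inside poll().
trait ExplicitTaskFuture {
    type Item;
    type Error;

    fn poll(&mut self, task: &Task) -> Poll<Self::Item, Self::Error>;
}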

Also, "poll" is a confusing name, since it creates the expectation that it is a function that anyone can call to get the value of the future if it has already completed, but it is in fact an "internal" function that drives future execution instead and currently even panics if called outside a task.

Something like "drive", "execute", "run", "run_next_step" or similar would be a better name.

@aturon (Member) commented Sep 6, 2016

FWIW, we actually started with a design that passed an explicit &mut Task argument. That ended up posing problems for things like implementing the direct Read trait, which also wants access to a task. Mediating this through thread-local storage made it possible to support standard I/O APIs while implicitly accessing the task, and also greatly improved ergonomics.

I do agree that thread-local storage has downsides, in terms of being less explicit and, in our case, opening you up to runtime errors. However, I think that once you understand the basics of the library, this kind of mistake is very rare and will very reliably produce panics (so it shouldn't be hard to debug).

Regarding the naming, I can see what you mean but none of the suggested names particularly evoke "tasks" for me in a way that the current name doesn't.
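
For readers following along, the implicit pattern under discussion looks roughly like this in a hand-written future, against the futures 0.1-era API (Shared and WaitForFlag are illustrative stand-ins, not library types):

use std::sync::{Arc, Mutex};

use futures::{Async, Future, Poll};
use futures::task;

// Event-source state shared with whatever eventually flips `ready`
// and unparks the stored task handle.
struct Shared {
    ready: bool,
    task: Option<task::Task>,
}

struct WaitForFlag {
    shared: Arc<Mutex<Shared>>,
}

impl Future for WaitForFlag {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<(), ()> {
        let mut shared = self.shared.lock().unwrap();
        if shared.ready {
            Ok(Async::Ready(()))
        } else {
            // task::park() reaches into TLS and panics if called outside
            // a task: exactly the implicit requirement this issue is about.
            shared.task = Some(task::park());
            Ok(Async::NotReady)
        }
    }
}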

@rjuse (Author) commented Sep 6, 2016

I think implementing the Read trait this way might not be the optimal design (and also might be broken currently if it's called outside of a Future's poll function, although I suppose that's easily fixed by doing nothing if there is no current task).

A possible alternative that seems more sound is to implement, if desired, the Read trait in a way that is purely nonblocking and thus does not access the current task.

End users would normally use APIs that return futures (and that thus aren't Read or similar traits), while the internal implementation of those APIs can work with internal functions that take a task argument instead of the Read trait.

@aturon (Member) commented Sep 6, 2016

The key downside of that alternative is that the work of managing callbacks to the task is no longer automatically managed by the core I/O abstractions, as it is today. In other words, with today's setup, there's no way to create a "lost wakeup" when working with primitive I/O, because the interaction with the task is handled automatically. Moving to this model was a big part of what changed when merging futures-mio into tokio-core, and I think it was a big improvement.

@aturon (Member) commented Sep 6, 2016

I think a more plausible alternative would be to take an explicit task argument everywhere, including in read and write. That would have the benefit of making the task requirement more explicit, and making a panic impossible, while retaining the guarantees of the current implementation. That might enable some simplifications in today's executor story, but I'm not sure.

The downsides would include:

  • Loss of ability to use standard I/O traits and hence loss of some interoperation
  • Loss in ergonomics

@rjuse (Author) commented Sep 6, 2016

But aren't end users supposed to normally use APIs that return futures, which already implicitly handle everything correctly?

This is how other future-based systems work; for instance, in .NET, to read data, a TCP server would call Stream.ReadAsync which returns a future.

The Read trait can just not be implemented at all and then if the only APIs available either return futures, or require passing the task explicitly, it's also impossible to have lost wakeups.

Being able to use standard traits with implicit wakeups seems to be unsound in general and thus have problems in nontrivial code: for example, if someone sends the I/O stream to another thread (perhaps due to some automatic parallelization framework, or because they are using an Actor-like model like Servo), then the wakeup will not be registered to the proper task, since the other thread can't possibly know the right task unless the task is explicitly passed as well.

Of course it's possible to document the pitfalls and accept the potential for run-time issues in case of misuse, but it doesn't seem ideal.

@aturon (Member) commented Sep 6, 2016

But aren't end users supposed to normally use APIs that return futures, which already implicitly handle everything correctly?

It depends a little bit on which "end users" you have in mind. The current I/O integration does not expose the bottom-level reading/writing facilities as futures. That's because we don't want to impose ownership transfer of buffers.

Let's take reading for an example. You'll need a buffer to read into. If you represent this as a future of the read being complete, then you have to hand over an owned buffer for the whole time you're waiting for a read to occur. That in turn means you may have to have a large number of such buffers allocated and in flight, if you have a large number of concurrent reads.

By contrast, the Read trait is designed to allow you to pass in a buffer slice at the moment a read is attempted, without transferring ownership. In the futures world, you wait on readiness for a read (i.e. that there appear to be bytes available), and only at that point do you attempt the read. If in fact there aren't bytes ready, you haven't given up any buffer ownership, and are automatically scheduled to be notified for readiness again.

All of the above considerations really matter when you think about something like a proxy server, which ideally can serve any number of clients with just a single buffer. See the example proxy server where we put this into action.
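
The readiness-style shape described above, reduced to a sketch (poll_read_into is an illustrative helper, not a tokio API; it assumes a stream whose Read impl is task-aware in the way discussed here):

use std::io::{self, Read};

use futures::{Async, Poll};

// The caller keeps ownership of `buf` and only lends it out at the moment
// the read is attempted; a would-block result means the task-aware stream
// has already arranged a wakeup, so we just report NotReady.
fn poll_read_into<R: Read>(stream: &mut R, buf: &mut [u8]) -> Poll<usize, io::Error> {
    match stream.read(buf) {
        Ok(n) => Ok(Async::Ready(n)),
        Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => Ok(Async::NotReady),
        Err(e) => Err(e),
    }
}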

@carllerche (Member) commented:

Being able to use standard traits with implicit wakeups seems to be unsound in general and thus have problems in nontrivial code: for example, if someone sends the I/O stream to another thread (perhaps due to some automatic parallelization framework, or because they are using an Actor-like model like Servo), then the wakeup will not be registered to the proper task, since the other thread can't possibly know the right task unless the task is explicitly passed as well.

In the case you are describing, sending the I/O stream to another task is a supported use case. When an I/O stream is sent to another task, the original task will receive a spurious wakeup, but that is fine. The new task will receive the I/O stream, attempt to poll from it and the I/O stream will now be registered with the new task, which will be woken up once the stream is ready.

If you send a future to a thread that is not managed by a future task, then currently there is a panic; however, I think a better alternative would be returning io::Error::new(io::ErrorKind::WouldBlock, ...). In this case, the receiving thread will either know what to do with the error or not, but that is already the case if it received a mio::TcpStream or any type in general that can return "would block". /cc @aturon
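
A hedged sketch of that fallback (the inside_task flag is hypothetical; the futures API of this era only exposes the panicking task::park()):

use std::io;

// If no task is available, surface an ordinary would-block error instead
// of panicking; task-aware callers still get their wakeup registered.
fn register_current_task_or_would_block(inside_task: bool) -> io::Result<()> {
    if inside_task {
        // register the current task for a wakeup here (elided)
        Ok(())
    } else {
        Err(io::Error::new(
            io::ErrorKind::WouldBlock,
            "not running on a future task",
        ))
    }
}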

@rjuse (Author) commented Sep 6, 2016

What I'm talking about is complex code that implements Future::poll() by spawning several scoped threads or otherwise sends work to be executed synchronously on a thread pool (perhaps implicitly, due to using an auto-parallelization library like rayon) and then accesses Tokio's TcpStream from those separate threads.

In this case, these read/write calls should register wakeups on the task driving the Future::poll, but since they are in separate threads, the TLS CURRENT_TASK variable will not be properly set, and either the function would panic or the wakeups would be silently missed.

Explicitly passing the task would obviously either make the code work correctly or cause a compilation error instead of having those issues.

Propagating TLS variables can fix some cases of this, but I don't think it can work in general, and there might be issues even without threading: the Future::poll implementation for future A driven by task A might create a closure that ends up being executed inside the Future::poll implementation for future B driven by task B, and the wakeups from that closure would be registered for task B when they should have been registered for task A, because the task was not captured in the closure.

@rjuse (Author) commented Sep 6, 2016

I wrote this to try to summarize and make the arguments of the thread clearer; it doesn't add much to the previous discussion in the thread, but might perhaps reframe it in a more orderly and easily followed form.

I think the first design question is whether libraries and programs that aren't directly interfacing with the OS should implement the Future trait themselves or should instead only or mainly create futures using combinators like map/and_then/etc or using the "oneshot" facility.

The SOCKS5 example actually uses both coding styles, implementing the handshake via future-based combinators (starting from the "read_exact" / "write_all" Future-based I/O APIs in Tokio), but the data transfer with a custom Future::poll implementation using Read/Write traits on TcpStream.

If users are not supposed to implement Future themselves, like in other designs such as .NET and JavaScript promises, then only future-returning APIs are used to create futures via combinators, or the user does the "wakeup" itself using oneshot, and thus the task-handling concerns are not present.

BTW, it is possible to have a Future-returning I/O API that doesn't keep buffers in flight, for example by returning a Future<ReadableHandle> and having a read method that takes the ReadableHandle and a buffer and actually reads the data in an instantaneous way once the future is completed (type ReadableHandle = () or usize is a simplified case of this).
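
A sketch of that API shape (illustrative names only; nothing here exists in futures-rs or tokio):

use std::io;

use futures::Future;

// Proof of readiness: obtaining one is asynchronous, using it is not.
struct ReadableHandle;

trait ReadinessStream {
    // Resolves once at least some bytes can be read without blocking.
    fn readable(&mut self) -> Box<Future<Item = ReadableHandle, Error = io::Error>>;

    // Instantaneous, non-blocking read performed with the proof of readiness;
    // the buffer is only borrowed for the duration of this call.
    fn read_now(&mut self, handle: ReadableHandle, buf: &mut [u8]) -> io::Result<usize>;
}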

If users are instead supposed to implement Future themselves (which might be needed in Rust to ensure optimal performance can be achieved, something that is not a goal of other GC-based future designs), then it becomes useful to provide a low-level API that is non-blocking but not future-based, as well as a facility to register interest in unparking a task, or both combined in a single API, for use in implementations of Future::poll().

The current design of futures-rs/tokio-io provides such a facility by implementing the Read/Write traits in a "special" way: a normal non-blocking implementation that also "magically" fetches the current task from TLS, potentially parks it, and has it unparked when the stream becomes ready again.

This means that wakeups cannot be accidentally missed in simple code, but as far as I can tell can be missed in complex code that passes the I/O stream to other threads or creates, for example, closures that do I/O and are then potentially called by the Future::poll() of other unrelated futures/tasks.

The possible change explored in this issue is, assuming that letting users implement Future directly this way is useful, to make the wakeup registration explicit and thus add a task argument to Future::poll(). Then either change the Read/Write implementations on tokio-io's TcpStream and similar to be purely non-blocking and provide separate ways to register the task to be unparked, or provide read/write methods that mimic the Read/Write traits and take a task argument (the latter avoids potential missed wakeups, but is also less flexible and might result in unnecessary wakeups).

If Read/Write is not implemented, there would be a loss of interoperation with other code using standard I/O traits, but such code might assume that they are implemented in a blocking way as they would on a file or blocking socket, and might not work anyway on a non-blocking implementation as the one in a futures framework without full "green threads" must be (regardless of whether it interacts with task parking).

@carllerche (Member) commented:

Just a quick note:

If Read/Write is not implemented, there would be a loss of interoperation with other code using standard I/O traits, but such code might assume that they are implemented in a blocking way as they would on a file or blocking socket, and might not work anyway on a non-blocking implementation as the one in a futures framework without full "green threads" must be (regardless of whether it interacts with task parking).

I would not say this is correct. More often than not, I've been able to reuse existing code that assumes std::io::Read / std::io::Write. For example, the extension fns on the traits themselves "just work", and the various SSL libs that implement SSL decoration for any Read + Write type worked out of the box.

@carllerche (Member) commented:

To clarify, implementing Read / Write to handle would-block situations is not a new idea introduced by futures/tokio; this is how non-blocking I/O works today and how it will continue to work. If a user receives a would-block error, they are responsible for figuring out how to get notified when readiness changes. In the case of futures / tokio, that is done by using a task; in the case of mio, it is done by registering interest with Poll.

@rjuse (Author) commented Sep 6, 2016

I just realized that there is a third option for read/write that might be the best: continue to implement the traditional Read/Write traits as they are implemented now in Tokio, but instead of implementing them directly on TcpStream, implement them on a struct containing a reference to the TcpStream and to the task.

In other words, an implementation of Future::poll() would call stream.read_for_task(task) with the task reference it received as an additional argument, and get back a value that implements Read and automatically registers wakeups on that task; alternatively, it could use a closure-based API such as stream.with_read_for_task(task, |read| ...).

This would remove the need to use TLS and make task passing explicit, while otherwise keeping the current code unchanged.

This API also allows the implementation to defer some of the wakeup registration work to the destructor of the struct implementing Read or at the end of the closure, in case that allows better performance.
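
A rough sketch of such a wrapper (ReadForTask and read_for_task are made-up names, and the actual wakeup registration is elided):

use std::io::{self, Read};

use futures::task::Task;

// Read is implemented on a short-lived borrow that carries the task,
// not on the stream itself.
struct ReadForTask<'a, S: 'a> {
    stream: &'a mut S,
    task: &'a Task,
}

impl<'a, S: Read + 'a> Read for ReadForTask<'a, S> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        match self.stream.read(buf) {
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
                // Registration elided: self.task would be handed to the event
                // source here so it can unpark() the task once the stream is
                // ready, before the would-block error is propagated.
                let _task = self.task;
                Err(io::Error::from(io::ErrorKind::WouldBlock))
            }
            other => other,
        }
    }
}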

@alexcrichton modified the milestone: 0.2 on Sep 9, 2016
@rjuse (Author) commented Sep 9, 2016

I tried implementing my last suggestion and I think it might not be the best due to an unforeseen issue: you can't reuse BufReader/BufWriter and similar stateful Read/Write adapters, because implementing Read on a (TcpStream, task) pair requires recreating the Read implementation on every call to poll, while BufReader needs a persistent Read instance to work.

I have a new alternative idea: have TcpStream and friends return a token to register a wakeup as part of the error in would-block cases.

I think this is possible without changes to libstd, by using the Error::new constructor and then using Any/reflection to try to cast the custom error to a trait object implementing a "WakeupSource" trait, with a method that would then be passed the result of task::park().

Since errors are already supposed to propagate up the call chain, this allows the caller to either use the Read/Write implementation as a pure nonblocking implementation, or if desired to match on the error, do the downcast and register the wakeup.

Unfortunately, it requires a memory allocation of the boxed error for the Error trait object every time waiting needs to happen, and also effectively requires a persistent Arc to reference from said trait object since it needs to be Send+Sync.

It's also possible to extend this paradigm to the Future::poll method as well by adding a wakeup source field to the Async::NotReady variant and use that instead of having Future::poll register wakeups directly, or even better by removing the Async datatype and instead using Result with a WouldBlock error to signify "not ready" just like the I/O traits.
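
A sketch of the error-payload part of this idea (WakeupSource is a made-up type, and the actual registration machinery is elided):

use std::error::Error;
use std::fmt;
use std::io;

use futures::task::Task;

// The payload carried inside the would-block error.
#[derive(Debug)]
struct WakeupSource {
    // whatever is needed to arrange an unpark later (elided)
}

impl fmt::Display for WakeupSource {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "operation would block; a wakeup can be registered")
    }
}

impl Error for WakeupSource {
    fn description(&self) -> &str { "would block" }
}

impl WakeupSource {
    fn register(&self, _task: Task) {
        // hand the parked task to the underlying event source (elided)
    }
}

// The I/O object would construct its would-block errors like this...
fn would_block_with_wakeup(source: WakeupSource) -> io::Error {
    io::Error::new(io::ErrorKind::WouldBlock, source)
}

// ...and a task-aware caller can downcast the payload and register a wakeup,
// while any other caller just sees an ordinary non-blocking error.
fn handle_would_block(err: &io::Error, task: Task) {
    if let Some(source) = err.get_ref().and_then(|e| e.downcast_ref::<WakeupSource>()) {
        source.register(task);
    }
}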

@Diggsey (Contributor) commented Dec 30, 2016

I would prefer taking an explicit task argument everywhere. Using scoped thread locals is confusing for people new to the library, and takes control away from the user - it's the sort of thing I'd expect from a framework rather than a library.

Doing this would mean that the compiler can catch at compile time "obvious" places where no unpark event will be triggered (when the task parameter goes unused) and specialised lints could be even more intelligent.

Re: the argument about Read/Write traits - I actually think it's quite surprising, and in some sense broken, that the Read/Write implementations on tokio objects automatically register wakeups. Most Read/Write implementations do not do that. This means that given only a Read bound, you cannot tell whether you need to trigger wakeups yourself, or whether it's safe to call outside of a task.

Far better would be to introduce new traits, which take a task as a parameter, and encapsulate both reading and registering a wakeup event. The task parameter is self documenting, and eliminates any possibility of confusion. It also allows for code to be generic over these new traits.

Using a task parameter also opens up some other possibilities: one could implement custom futures-like objects which take a different task type, to provide additional context. As long as this type is convertible to a task, it would retain compatibility with existing code.

Finally, there are other concerns with using thread locals:

  • How much slower are thread-locals compared to just passing around a pointer? After all, one of the big selling points of futures is that they're "as fast" as hand-written code.
  • What about environments without thread-local support?
  • Must make sure to call task::park() on-thread. This will be more of a problem with higher level abstractions.

@Diggsey (Contributor) commented Jan 2, 2017

I hadn't seen this before, but an issue for the disparity between the tokio and std Read/Write implementations already exists: tokio-rs/tokio-core#61

@kkimdev (Contributor) commented Jan 14, 2017

Btw, is there documentation or an example on how to implement a custom Future properly today?

@alexcrichton (Member) commented:

@kkimdev yes you can find detailed docs and such at https://tokio.rs

@kkimdev (Contributor) commented Jan 15, 2017

@alexcrichton yeah I've read that. It's a very informative doc, though I think it would also be good to have a section with a concrete example of implementing a custom Future. I think I'd have to agree with @rjuse that the current Future trait API is unintuitive regarding its relationship with Task.

@alexcrichton (Member) commented:

@kkimdev ah yes, I believe we don't have a page specifically on that, no. Would be good to add one! Want to open an issue at https://github.com/tokio-rs/website?

@Kixunil (Contributor) commented Jan 24, 2017

I'd prefer explicit arguments too, myself. I don't mind not having them if there are strong reasons, but I'm not convinced right now. Maybe we should try harder at designing it.

@dwrensha (Contributor) commented:

I've opened a pull request to make the task parameters explicit. I think doing so makes a lot of things nicer!

My impression is that many people who try to learn futures-rs get tripped up on the deceptively simple interface to Future. They sense that there must be some implicit magic happening, and such things shake their faith in the understandability of the library.

Indeed, when making the changes in #374, I got tripped up on the fact that Sink::start_send() can panic if not run on a task.

It seems that the main argument for making task implicit was that std::io::Read and std::io::Write could then be used directly by async I/O objects. However, as discussed in tokio-rs/tokio-core#61, that approach has problems. I think it would be better to just have:

use futures::Poll;
use futures::task::Task;

trait PollRead {
    fn poll_read(&mut self, task: &Task, buf: &mut [u8]) -> Poll<usize, std::io::Error>;
}

trait PollWrite {
    fn poll_write(&mut self, task: &Task, buf: &[u8]) -> Poll<usize, std::io::Error>;
}

@aturon (Member) commented Jan 25, 2017

The bigger problem is wanting to hook Tokio into libraries that currently work with Read/Write-style APIs. IIRC, this was particularly a problem for curl -- @alexcrichton, can you spell out the details here?

That said, there may be some other avenue for addressing this problem.

@dwrensha, you say that making task explicit makes a lot of things nicer. Looking at that PR, from what I can see it mostly involves adding a significant amount of boilerplate across the board (passing the task through). While I do think there's some benefit to having the task explicit (in terms of making sure you're calling poll from the right context), I can't see how it makes the library nicer to work with. Can you elaborate?

@tailhook (Contributor) commented:

@aturon the benefit I can see is that we don't need the Async* types to mark something that schedules a wakeup. In some cases, it might make the return value simpler (i.e. return Result instead of Result<Async<..>>, or Option instead of Async) at the cost of adding a parameter.
(I don't have any opinion yet on whether #374 should be applied, just mentioning my thoughts.)

@aturon (Member) commented Jan 25, 2017

@Diggsey I agree with that reaction in general (and argued privately with @carllerche along similar lines), but I do think that for the special case of transports the actual I/O piece tends to naturally fit both models.

But again, there may be some way of more explicitly factoring the library to make this more clear, so that you don't have access to things like select which certainly don't work properly with synchronous operations.

@alexcrichton (Member) commented:

The killer feature, to me, for why we switched to a thread local was what @carllerche mentioned: all existing I/O continues to work. If we were only implementing Future, Stream, and Sink with "future aware" semantics (e.g. "would block" == the task is parked) then I think we could consider this change, but we're also giving I/O the same "would block == the task is parked" semantics. We have a lot less control over existing I/O interfaces, and to me that necessitates interoperating well with the rest of the ecosystem as-is.

My favorite example is TLS. We've got crates like native-tls, openssl, etc, and we surely don't want to reimplement any of it. All of these crates operate with Read and Write traits (as one might expect) and will "do the right thing" whenever they get a would block error. Note that this means that these crates are written with async in mind, in the sense that they provide async-compatible interfaces (e.g. MidHandshakeTlsStream) and such. With the current implementation we can drop all I/O objects into these libraries seamlessly. Not only that, but we know precisely what to block on!

My worry with a change such as passing the task around explicitly is that we'd have traits like @dwrensha mentioned above but they're incompatible with Read and Write. Note, though, that this is just a worry of mine, I don't know that it wouldn't work!

I personally feel that to evaluate a change as proposed in this issue we'll have to change more than the futures crate. Ideally I'd like to see the impact all the way up to tokio-proto and tokio-tls as I think that games out a lot of the scenarios that we have in mind. Note, though, that this is likely a lot of work, so I don't mind discussing things ahead of that!


In general though I personally have really enjoyed the ergonomics of not passing tasks around. I do sympathize with @dwrensha's original point though of that it feels quite magical. My hope was that we could have documentation to combat this mentality but it may either be too fundamental or mean that the docs aren't quite up to the task.

I'd personally draw an analogy with synchronous code, though, for comparison. During early init and late exit of libc surely you can't use pthread functions, but you can't tell from those function signatures. Now everyone doesn't tend to write code in those areas, but statically solving this problem would involve changing every function to have a handle to some "current thread" pointer. That way early init and late exit functions wouldn't have access to it, but all other code could access it.

Clearly this sounds super painful, and I fear that's what would happen with futures. Basically literally every function would need a Task argument, in which case it not only sounds more ergonomic to not pass it around but it could even just be more efficient. I'd personally harbor a great fear that the ergonomics would take a significant hit to pass around Task, but I'm also willing to be wrong!

@dwrensha you mention that the library gets much nicer passing Task around, and I would actually probably agree that the futures crate itself gets nicer with a change like this. I'm curious though if you've propagated this change further up the stack, perhaps into some application logic? I'm not sure if the wins would continue to hold there, and would love to dig into the use case. Also if you could elaborate on how you felt things got nicer that'd also be great!

@Diggsey (Contributor) commented Jan 25, 2017

My favorite example is TLS. We've got crates like native-tls, openssl, etc, and we surely don't want to reimplement any of it.

I don't see why re-implementation would be necessary; I see these possible scenarios:

  • A crate uses PollRead/PollWrite
    (great, we're done)
  • A crate uses Read/Write, but never causes wakeups itself
    In this case, "would block" actually means "I need you to give me more data, before I can give you a result". This contract is stated by wrapping them in a newtype (eg. NeverWaits(T) or something) and this newtype would implement PollRead/PollWrite accordingly. (TLS falls into this category; see the sketch after this list.)
  • A crate uses Read/Write, is asynchronous, but expects wakeups to come from somewhere else, or for constant polling to occur
    In this case, it's very important that they are wrapped correctly (eg. via tokio) or they will silently fail
  • A crate uses Read/Write, and is synchronous
    In this case, the only way to wrap them is by scheduling to a thread pool
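
A hedged sketch of the NeverWaits idea from the second bullet (illustrative names; PollRead here is the trait proposed earlier in this thread, not an existing futures-rs item):

use std::io::{self, Read};

use futures::{Async, Poll};
use futures::task::Task;

// NeverWaits marks a decorator (e.g. a TLS stream) that never schedules
// wakeups of its own; any would-block it reports bubbled up from the
// task-aware transport underneath, which has already registered the wakeup.
trait PollRead {
    fn poll_read(&mut self, task: &Task, buf: &mut [u8]) -> Poll<usize, io::Error>;
}

struct NeverWaits<T>(T);

impl<T: Read> PollRead for NeverWaits<T> {
    fn poll_read(&mut self, _task: &Task, buf: &mut [u8]) -> Poll<usize, io::Error> {
        match self.0.read(buf) {
            Ok(n) => Ok(Async::Ready(n)),
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => Ok(Async::NotReady),
            Err(e) => Err(e),
        }
    }
}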

@carllerche (Member) commented:

@Diggsey The case is specifically native-tls decorates a T: Read + Write, but it doesn't care what that T is. We are able to use native-tls to decorate a T: Read + Write and because the task is stored in a thread local, the wake up system works. The fact is that it would be hard (impossible?) to thread the Task variable through function calls in such a way that it goes through native-tls code appropriately.

The insight is that most code doesn't care about the details that @Diggsey listed above me. It just defers to a lower level type.

Also, all the other points that @alexcrichton listed hold as well.

@Kixunil (Contributor) commented Jan 25, 2017

As far as I understand (I'm not sure), the problem is that with explicit handles e.g. tokio::net::TcpStream couldn't impl io::Read. I believe that it shouldn't. Instead, we should be able to call .as_sync(task) on it to get a wrapper which implements io::Read. This way we can safely pass async types to libraries which expect sync types. The advantage is that it's very explicit.

Also, is it so difficult to re-factor other libraries? Isn't there a better abstraction that is usable with both sync and async?

If I misunderstood something, please point me at concrete code.

@carllerche (Member) commented:

@Kixunil io::Read and io::Write do not imply blocking, synchronous I/O. Those traits work fine w/ mio, for example.

Also, is it so difficult to re-factor other libraries?

Yes, we do not control libraries like native-tls and I also have no intention of redoing all the work that @sfackler has put into those libs. Also, native-tls shouldn't have to depend on futures-rs.

@sfackler (Member) commented:

I'm not totally sure what exactly is being discussed, but TLS crates definitely work with nonblocking sockets. The async Hyper branch did this directly for a while. It's a little bit complex since a TLS write can require reads and vice versa, but that was dealt with by wrapping the raw socket in one that recorded if the last call was a read or write. When the TLS stream returns a WouldBlock error, you check which operation it's waiting on and register with epoll or whatever.
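
A rough sketch of that bookkeeping (illustrative only; the type names are made up):

use std::io::{self, Read, Write};

#[derive(Clone, Copy, Debug)]
enum LastOp { Read, Write }

// Wraps the raw socket so that when the TLS layer surfaces WouldBlock, we
// know whether it was a read or a write on the underlying socket that stalled.
struct TrackedSocket<S> {
    inner: S,
    last_blocked: Option<LastOp>,
}

impl<S> TrackedSocket<S> {
    // Consulted after the TLS stream reports WouldBlock, to decide whether to
    // register read or write interest with epoll/mio.
    fn last_blocked(&self) -> Option<LastOp> {
        self.last_blocked
    }
}

impl<S: Read> Read for TrackedSocket<S> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let res = self.inner.read(buf);
        if let Err(ref e) = res {
            if e.kind() == io::ErrorKind::WouldBlock {
                self.last_blocked = Some(LastOp::Read);
            }
        }
        res
    }
}

impl<S: Write> Write for TrackedSocket<S> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let res = self.inner.write(buf);
        if let Err(ref e) = res {
            if e.kind() == io::ErrorKind::WouldBlock {
                self.last_blocked = Some(LastOp::Write);
            }
        }
        res
    }

    fn flush(&mut self) -> io::Result<()> {
        self.inner.flush()
    }
}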

@Kixunil (Contributor) commented Jan 25, 2017

@carllerche I'm worried by that. If io::Read/io::Write means that it could work with async (but there's no explicit requirement), then either everyone writes code with async in mind or some code will silently fail sometimes and it will be difficult to tell.

I believe there should be at least a marker trait for it. I don't think depending on some small crate with just (marker) traits is a huge issue.

Also, the fact that we don't control other people's crates doesn't mean that others are unfriendly and won't accept our PRs, etc. And if there is a disagreement, there is still the possibility of a fork as a last resort.

@Diggsey (Contributor) commented Jan 25, 2017

@carllerche

The case is specifically native-tls decorates a T: Read + Write, but it doesn't care what that T is.

Basically, what @sfackler said: if a library can be used with a Read/Write implementation that might return "would block", then it must already have some way for the caller to know when it should try again, and that mechanism is what we should be using.

@alexcrichton (Member) commented:

@Diggsey there's a significant difference, though. Today we know precisely if a library wants readability or writability. Otherwise if we just get back a plain "I need something else to happen" then we don't know precisely what's desired. I initially had to implement this for the three TLS libraries and it was a huge PITA. "falling out" of trait impls is far more ergonomic in my experience.

@Diggsey (Contributor) commented Jan 25, 2017

@alexcrichton That's a fair point about being easier to wrap, but if we have the primitives available in futures-rs, then the authors of libraries can implement eg. PollRead relatively easily, so this will only be necessary for a subset of crates where the author does not decide to do this.

I also think that saying "we know precisely if a library wants readability or writability" is not quite true - we're inferring based on what operations the library actually performs on the stream, which is OK in this case because superfluous unpark events are allowed.

The most important reason I don't like the current situation is still the "abuse" of the Read/Write traits to register unpark events, or more specifically, the user-visible abuse.

One option is to continue doing this internally to simplify creating futures-compatible wrappers of libraries, but use a task parameter in futures-rs, and have separate PollRead/PollWrite traits for all user-facing parts of the code. This works because you can always use a scoped thread-local to store the current task, even if it's not fundamental to futures-rs, while the reverse is not true. Having the task be a parameter is just objectively more flexible in this regard.
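
A sketch of that bridging direction, assuming the scoped-tls crate and an explicit task parameter (the shim names are made up):

use futures::task::Task;
use scoped_tls::scoped_thread_local;

// If the task were an explicit parameter, a scoped thread-local could still
// expose it to legacy code that insists on plain std::io::Read / std::io::Write,
// without the thread-local being fundamental to futures-rs.
scoped_thread_local!(static CURRENT_TASK: Task);

// Run `f` with `task` visible to any legacy wrapper that reads CURRENT_TASK.
fn with_task<F, R>(task: &Task, f: F) -> R
where
    F: FnOnce() -> R,
{
    CURRENT_TASK.set(task, f)
}

// Inside such a wrapper, the task can be recovered with:
//     CURRENT_TASK.with(|task| { /* register a wakeup on `task` */ })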

@Rufflewind (Contributor) commented:

Would a wrapper like this work?

use std::io;

use futures::Async;
use futures::task::Task;

// Assumes the PollRead trait proposed earlier in the thread.
pub struct PollReader<'a, T> {
    pub task: &'a Task,
    pub inner: T,
}

impl<'a, T: PollRead> io::Read for PollReader<'a, T> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        // Adapt the task-explicit poll_read to the std Read contract:
        // NotReady maps to a WouldBlock error, everything else passes through.
        match self.inner.poll_read(self.task, buf) {
            Ok(Async::Ready(x)) => Ok(x),
            Ok(Async::NotReady) => Err(io::Error::from(io::ErrorKind::WouldBlock)),
            Err(e) => Err(e),
        }
    }
}

@leoyvens (Contributor) commented May 2, 2017

The proposed task overhaul in #436 does a lot of contortion around Unpark trait objects. If we did this, maybe we could make everything generic over the unpark type and avoid trait objects altogether?

@casey (Contributor) commented Oct 16, 2017

I don't want to add just another "me too!" comment, but making the task argument explicit would have solved almost all of my difficulties in learning tokio/futures.

I wrote a bunch of details of what I found hard here:

https://www.reddit.com/r/rust/comments/76t442/

And, in fact, I wound up implementing an alternate "extended" API which made the task an explicit argument to all methods that require them.

The Tokio task is analogous to the current thread in sync code. One does not pass along a thread handle throughout function calls in order to use that thread handle when blocking.

I don't think it is analogous. When you're in code, you're in a thread. It may be the only thread in the process, but it's still a thread. When you're dealing with a program that uses futures, you may or may not be in a task. When you read futures code as a beginner, it isn't clear at all what's going on.

I don't think that interoperability with blocking io is a good idea. To me, the two are different universes, and not drawing a clear boundary does not seem like a good idea.

Now, I might not be giving enough weight to the benefits here, since I haven't done any of the work of implementing everything. If interoperability with standard I/O is really necessary, then a function with a scary name that has the same functionality as task::current() can be used to bridge the two worlds.

@gurry commented Nov 1, 2017

Maybe I'm being naive, but can't Future::poll() return something like a channel on which the caller can wait if he wishes? Maybe Async's NotReady variant could instead be NotReady(Channel). This would decouple the future from the caller completely (no need for implicit shared state between the two) and still keep poll() ergonomic (i.e. no parameter).

@Diggsey (Contributor) commented Nov 1, 2017

@gurry Not really without quite a bit of overhead: this would mean that futures would have to allocate a new channel each time they want to block, and then the executor would have to somehow select across all the channels that have ever been returned to it.

An efficient implementation requires that all futures send their notifications to something equivalent to a single channel, so that the executor need only wait on that one channel, and need not allocate a new one each time.
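
A minimal illustration of that point, using a plain std channel as the single wakeup queue (real executors use a more efficient notification mechanism; the types here are made up):

use std::sync::mpsc;

// Every task's wakeup handle pushes its id into the same shared queue, so the
// executor only ever waits on that one queue instead of selecting over
// per-future channels allocated on every block.
struct Wakeup {
    task_id: usize,
    queue: mpsc::Sender<usize>,
}

impl Wakeup {
    fn unpark(&self) {
        let _ = self.queue.send(self.task_id);
    }
}

fn run_executor(ready: mpsc::Receiver<usize>) {
    // The single point the executor blocks on: whichever task was woken next.
    while let Ok(task_id) = ready.recv() {
        // look up the task by id and poll it again (elided)
        let _ = task_id;
    }
}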

@gurry commented Nov 3, 2017

@Diggsey Yup. Makes sense. Classic performance versus purity of design conflict.

@WaDelma commented Dec 8, 2017

So what is the status of this? I am planning on using async I/O and don't really want to use futures-rs without this change, but I still want the futures abstraction. Should I just roll my own trait?

@aturon (Member) commented Feb 5, 2018

If you have opinions on this issue, please comment on the RFC devoted to it.

@aturon (Member) commented Feb 12, 2018

This has now been accepted as a part of 0.2!
