
Consider supporting Reactive Streams and Reactive Sockets #110

Closed · drewhk opened this issue Aug 29, 2016 · 27 comments

Comments

@drewhk

drewhk commented Aug 29, 2016

I am one of the developers of https://github.com/akka/akka/ and I just stumbled upon this nice library (I am mostly a Rust lurker). Futures are a very nice building block for asynchronous programs, but eventually one reaches the point where some kind of streaming abstraction is needed. The simplest streaming approach is RX-style chained Observables; however, as we and others found out, with asynchronous call-chains backpressure becomes an issue, since blocking is no longer available to throttle a producer. To solve this issue the Reactive Streams (RS) standard was created: http://www.reactive-streams.org, backed by various companies interested in the JVM landscape. This set of interoperability interfaces was designed by multiple teams together, and the standard is on its way to becoming part of JDK 9. There is also an effort to expose its semantics as a wire-level protocol, http://reactivesocket.io, which nicely complements the RS standard (the latter mainly focuses on in-JVM asynchronous, ordered, backpressured communication).

Since I imagine the need for asynchronous streams will eventually arise here too, I think these standards could be interesting for Rust. While RS might not be perfect, it is the result of a long design process and now has broad consensus in JVM land, so it would be nice to see a Rust implementation that is similar enough to be easily connectable to JVMs, perhaps via Reactive Sockets.

Sorry for the shameless plug :)

@carllerche
Member

Hi,

Thanks for the explanation. futures-rs already has a Stream abstraction (https://github.com/alexcrichton/futures-rs/blob/master/src/stream/mod.rs#L79-L98). Prior art, including the Reactive Streams effort, was looked at in depth and helped influence Stream here.

I believe there is a fundamental difference between how this library handles asynchronous data flow and existing libs, one that makes it quite adept at handling backpressure issues. This library is built around a pull model rather than a push model: every node in the computation graph polls its dependencies when it is ready to process more data. This means producers cannot overload downstream components, as they never provide more data than is ready to be processed.
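For reference, the Stream trait in question currently looks roughly like this (paraphrased; the linked source is authoritative and still being iterated on):

```rust
extern crate futures;
use futures::Poll;

// Roughly the shape of the futures-rs Stream trait being discussed
// (a paraphrased sketch; see the link above for the real definition).
trait Stream {
    type Item;
    type Error;

    /// Pull-based: the consumer asks for the next value when it has
    /// capacity. `NotReady` means nothing is available yet; the current
    /// task is notified when polling is worth attempting again.
    fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error>;
}
```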

I would be interested in your opinion on this model, as you probably have more experience using the Reactive Streams abstraction. (I've only used it at a surface level in the past, never for anything extensive.)

@drewhk
Author

drewhk commented Aug 29, 2016

RS is more like push-pull. I will answer in more detail, but I don't yet know what your Stream looks like, and I have to go now; I will look at these.

@alexcrichton
Member

Thanks for the report @drewhk! As @carllerche mentioned, the Stream trait seems like it'd be a great fit here, and I'd also be very curious whether there are some combinators or various pieces to soup up here and there. I believe #52 may also be a related issue.

@drewhk
Author

drewhk commented Aug 30, 2016

I believe there is a fundamental difference between how this library handles asynchronous data flow and existing libs, one that makes it quite adept at handling backpressure issues. This library is built around a pull model rather than a push model.

I think you are referring to the Rx model, i.e. Observables and friends. What I refer to is called Reactive Streams (RS), and it is neither a push nor a pull model but an asynchronous push-pull model. In fact, I don't really like to think about it as push or pull: any backpressured system where concurrency is involved is basically a flow of advertisements of empty buffer space in one direction and a flow of elements in the other, probably overlapping. I.e. send-then-wait-for-ack, send-and-ack-with-window, request-response and request-batch-response-with-window are all instances of the same pattern: a closed cycle where the number of elements buffered plus in flight equals the advertised space plus the elements consumed since the advertisement.

RS is a permit-based model: consumers advertise the number of elements they are currently willing to accept, which is in turn answered with at most that number of elements. Rx only has the onNext signal, and if that passes through a thread boundary then backpressure is lost, as blocking the calling thread is no longer directly possible, nor is it desirable on a thread-pool based system. RS was built by users of Rx as a response to those limitations, so I recommend taking a look at it :) There are 3 major implementations already (RxJava 1-2 by Netflix, Reactor by Pivotal and Akka Streams by us, Lightbend) and the JDK will standardize the interfaces.

The interfaces themselves are deceptively simple; if you just look at them they can seem a bit "meh": https://github.com/reactive-streams/reactive-streams-jvm/tree/master/api/src/main/java/org/reactivestreams
The meat is in the actual spec: https://github.com/reactive-streams/reactive-streams-jvm/#specification
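Loosely transliterated into Rust traits, just for orientation (the real artifact is the Java API above; an error type parameter stands in for Java's Throwable):

```rust
// Loose Rust transliteration of the four Reactive Streams interfaces
// (illustrative only; the normative definitions are the Java ones above).
trait Subscription {
    /// Grant the publisher permission to emit up to `n` more elements.
    fn request(&mut self, n: u64);
    /// Stop the flow; no further elements will be delivered.
    fn cancel(&mut self);
}

trait Subscriber<T, E> {
    fn on_subscribe(&mut self, subscription: Box<Subscription>);
    fn on_next(&mut self, element: T);
    fn on_error(&mut self, error: E);
    fn on_complete(&mut self);
}

trait Publisher<T, E> {
    fn subscribe(&mut self, subscriber: Box<Subscriber<T, E>>);
}

// A Processor is simply both ends at once.
trait Processor<T, U, E>: Subscriber<T, E> + Publisher<U, E> {}
```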

Those rules are the result of almost 2 years of refinement with feedback from various parties, so the spec is pretty solid. It is also backed by a rather extensive TCK: https://github.com/reactive-streams/reactive-streams-jvm/tree/master/tck

As for the linked Stream abstraction, I am not sure it is the same. I see it has a poll method which, while it can encode an asynchronous result, forces someone to spin over it (if I understood what is going on there at all; correct me if I am wrong). The RS spec instead defines callbacks, so a thread-pool backed implementation is possible where consumers/producers that are idle due to a lack of elements/demand are simply not executed on the pool until a wakeup signal arrives. I am not sure what your goals are here, and I obviously won't say one approach is strictly better than the other (this always depends on external factors), but at least take a look at the space for possible inspiration.

From someone who loves streaming and Rust :)

@drewhk
Author

drewhk commented Aug 30, 2016

I'd also be very curious whether there are some combinators or various pieces to soup up here and there.

A somewhat shameless plug: the combinators in Akka Streams: http://doc.akka.io/docs/akka/2.4/scala/stream/stages-overview.html

While the API surface in streams land is highly opinionated territory, I think the above link is a good start nevertheless, as we are on the conservative side of adding operators: we tried to keep everything to a minimum and focus on simple extensibility instead. The only reason I linked the page is so you can roughly see what turned out to be our minimal "essential" set, not because it is scripture in any way :)

If you asked me for the single most important combinator apart from the usual suspects of map, filter and fold, my vote would go to mapAsync, which consumes elements of type A and emits elements of type B. It takes two parameters:

  • a function from A -> Future[B]. The Futures are automatically flattened and the outputs emitted in the original arrival order of the elements of type A
  • a parallelism factor. This controls how many Futures can be in flight (parallel/concurrent) at the same time. The op buffers results from Futures that complete before their turn (no reordering).

It has a brother, mapAsyncUnordered, which has the same signature but does not keep the ordering between elements.
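To make the shape concrete in Rust terms, the contract would be something like the following (a purely hypothetical signature; map_async is a made-up name, not an existing API, and the boxed aliases are just to keep it readable):

```rust
extern crate futures;
use futures::stream::BoxStream;
use futures::BoxFuture;

// Hypothetical sketch of the mapAsync contract described above. A real
// implementation would keep up to `parallelism` futures in flight and
// emit their results in the original input order; the body is elided
// because only the shape matters here.
fn map_async<A, B, E, F>(
    input: BoxStream<A, E>,
    parallelism: usize,
    f: F,
) -> BoxStream<B, E>
where
    F: FnMut(A) -> BoxFuture<B, E>,
{
    unimplemented!()
}
```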

@carllerche
Member

As for the linked Stream abstraction, I am not sure it is the same. I see it has a poll method which, while it can encode an asynchronous result, forces someone to spin over it (if I understood what is going on there at all; correct me if I am wrong).

Yes, the consumer is expected to poll in a loop; however, the specific strategy for doing this is decoupled from the Future & Stream traits. The expectation is that scheduling logic is implemented in the leaf nodes of the computation (where combinators are branches). One reason for this is that it makes the traits as generic as possible, allowing them to work with no_std or other environments.

The default scheduling strategy is a task-based park/unpark system: https://github.com/alexcrichton/futures-rs/blob/master/src/stream/channel.rs#L86-L89. The specifics are still being iterated on, but the general idea is that if a consumer calls poll and no value is ready to be produced, the consumer has expressed interest in the completion of the future/stream. So, in poll at the leaf nodes, the logic is to notify the task when the stream value becomes ready, so that the task can go to sleep until there is work to be done (aka, avoid spinning). Hopefully this makes sense.
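As a sketch of what that looks like at a leaf node (illustrative only, not the linked channel source):

```rust
extern crate futures;
use std::collections::VecDeque;
use futures::task::{self, Task};
use futures::{Async, Poll, Stream};

// Sketch of a leaf stream using the park/unpark idea described above
// (not the actual channel implementation linked).
struct Receiver<T> {
    queue: VecDeque<T>,
    blocked: Option<Task>, // consumer task waiting for an element
}

impl<T> Stream for Receiver<T> {
    type Item = T;
    type Error = ();

    fn poll(&mut self) -> Poll<Option<T>, ()> {
        match self.queue.pop_front() {
            Some(item) => Ok(Async::Ready(Some(item))),
            None => {
                // Nothing ready: stash the current task so the producer
                // can wake it (unpark) when an element arrives.
                self.blocked = Some(task::park());
                Ok(Async::NotReady)
            }
        }
    }
}
```

The producer side does the inverse: push the element, then take() the stashed handle and call unpark() on it, which re-schedules the consuming task.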

The exact combinators for this library are still being figured out / developed, but I believe the equivalent of the mapAsync you mentioned would be a map followed by buffered. map only processes one result at a time because it has no buffer space. map is a zero-cost combinator: there are no allocations or resources created when you call map. It optimizes down to what you would write by hand, which is nice....
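To illustrate the zero-cost point, map over a stream is just a wrapper struct, roughly like this (a paraphrased sketch, not the exact futures-rs source):

```rust
extern crate futures;
use futures::{Async, Poll, Stream};

// Sketch of a map combinator: a plain struct around the upstream stream
// and the closure, with no buffer, queue or allocation.
struct Map<S, F> {
    stream: S,
    f: F,
}

impl<S, F, B> Stream for Map<S, F>
where
    S: Stream,
    F: FnMut(S::Item) -> B,
{
    type Item = B;
    type Error = S::Error;

    fn poll(&mut self) -> Poll<Option<B>, S::Error> {
        // Forward the poll, applying `f` to any ready element.
        match try!(self.stream.poll()) {
            Async::Ready(Some(item)) => Ok(Async::Ready(Some((self.f)(item)))),
            Async::Ready(None) => Ok(Async::Ready(None)),
            Async::NotReady => Ok(Async::NotReady),
        }
    }
}
```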

@drewhk
Author

drewhk commented Aug 30, 2016

Yes, the consumer is expected to poll in a loop; however, the specific strategy for doing this is decoupled from the Future & Stream traits.

That is fine; this is true for Scala Futures as well, and even Akka Streams, so this I can totally relate to :)

The expectation is that scheduling logic is implemented in the leaf nodes of the computation (where combinators are branches). One reason for this is that it makes the traits as generic as possible, allowing them to work with no_std or other environments.

Does this exclude cycles (directed or undirected) from being implemented? I guess not, just curious.

The default scheduling strategy is a task-based park/unpark system: https://github.com/alexcrichton/futures-rs/blob/master/src/stream/channel.rs#L86-L89. The specifics are still being iterated on, but the general idea is that if a consumer calls poll and no value is ready to be produced, the consumer has expressed interest in the completion of the future/stream. So, in poll at the leaf nodes, the logic is to notify the task when the stream value becomes ready, so that the task can go to sleep until there is work to be done (aka, avoid spinning). Hopefully this makes sense.

This basically means that you block the poller's thread, i.e. this is a blocking scheduler. My basic issue with poll-based APIs (or their dual, offer-based ones, which can implement your Stream trait in a similar way, with backpressure too, but driven by the producers rather than the consumers) is that they force one either to block or to spin, possibly with a backoff. Anyway, it is a valid approach and can be relatively simple, but I had hoped for a non-blocking, non-spinning implementation.

The exact combinators for this library are still being figured out / developed, but I believe the equivalent of the mapAsync you mentioned would be a map followed by buffered.

Well, that is not the same; you cannot (so easily) separate the two steps. Unless I am mistaken and they are implemented differently here, Futures express the results of computations that do not impede progress of the caller's thread (i.e. concurrent computations/results). The only reason mapAsync acts as a buffer is that it is able to launch multiple Future-returning functions in one go, e.g. calling a remote webservice to enrich the elements of a stream, making 4 requests in parallel. The buffer is needed because if the 4th element's Future finishes first, you need to put its result somewhere until elements 1, 2 and 3 are completed and emitted (otherwise ordering would be violated). What you describe would be equivalent in Akka to mapAsync(1)(makeRequest).buffer(16, OverflowStrategy.backpressure), but that would make only 1 makeRequest call at a time (remember, the request is A => Future[B] but the output of the stream is B, so there is a flattening step). That is totally different from mapAsync(16)(makeRequest).

map only processes one result at a time because it has no buffer space. map is a zero-cost combinator: there are no allocations or resources created when you call map.

I have a feeling we are talking past each other, as I never implied that map needs a buffer :) Map can be implemented as a zero-cost combinator in a fully asynchronous push-pull model too, but that is beside the point.

Anyway, I see that you have already come up with ideas and an architecture you like, so I don't want to sidetrack anything here; I just wanted to share my experiences :)

@drewhk drewhk closed this as completed Aug 30, 2016
@dwrensha
Contributor

Note that futures-rs's buffered() is quite different from Akka's buffer(). The former can only be applied to a stream of futures (or more precisely, of IntoFutures), while as I understand it the latter can be applied to any stream.

@aturon
Member

aturon commented Aug 31, 2016

@dwrensha Yeah -- we haven't explored buffering combinators very thoroughly, and personally I'm interested in something that can apply to an arbitrary stream, assuming we can make sense of it for our Stream design.

@drewhk
Author

drewhk commented Aug 31, 2016

Note that futures-rs's buffered() is quite different from Akka's buffer(). The former can only be applied to a stream of futures (or more precisely, of IntoFutures), while as I understand it the latter can be applied to any stream.

Ah ok, my bad, I assumed the behavior from the name without looking. That basically makes buffered similar to mapAsync, just not including the mapping step, only the flattening (we do the opposite and express the flattening, when needed, as mapAsync(identity)). That makes total sense. Our buffer is something else: it really is just a buffer, i.e. an ordered container for a set of elements, and its use only makes sense if there is an asynchronous boundary somewhere (i.e. an upstream or downstream of the buffer that can make independent, concurrent progress).
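So if I now read the names correctly, our mapAsync(16)(makeRequest) should correspond to something like this on your side (an untested sketch based on this thread, with an invented helper name):

```rust
extern crate futures;
use futures::stream::{Buffered, Map};
use futures::{IntoFuture, Stream};

// Sketch: mapAsync(16)(make_request) expressed as map + buffered, per
// the discussion above. `make_request` starts one future per element;
// buffered(16) keeps up to 16 in flight and preserves the input order.
fn enrich<S, F, U>(stream: S, make_request: F) -> Buffered<Map<S, F>>
where
    S: Stream,
    F: FnMut(S::Item) -> U,
    U: IntoFuture<Error = S::Error>,
{
    stream.map(make_request).buffered(16)
}
```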

@alexcrichton
Member

@drewhk note that we're still very interested to learn from any experiences you've had! There may be a bit of an impedance mismatch, as we may not understand akka very thoroughly, but new kinds of combinators or ways to use streams are something we're always looking to explore!

So far it seems like they're both achieving very similar goals and you could conceptually transition between the two systems with ease, although I could be wrong!

@ksf

ksf commented Aug 31, 2016

basically a flow of advertisements of empty buffer space in one direction and a flow of elements in the other

This sounds like it could rather easily support splice: a sink could make an advertisement of the sort "call this to stream data from an FD you have, or write to this other buffer here if your data is already in userspace memory" (the buffer then being part of a circular buffer that gets vmspliced into kernel memory). Pedestrian fallbacks for systems that aren't Linux could be transparent. With suitable squinting, sendfile also fits the general idea.

@LalitMaganti

@alexcrichton if you're not too familiar with Akka but know Java, I'd strongly suggest looking at RxJava 2.0, which is also a Reactive Streams-compliant library and touches on many of the principles discussed in this thread.

@drewhk
Author

drewhk commented Sep 1, 2016

if you're not too familiar with Akka but know Java, I'd strongly suggest looking at RxJava 2.0, which is also a Reactive Streams-compliant library and touches on many of the principles discussed in this thread.

There is also Reactor from Pivotal, which is the other major implementation, also in Java. Anyway, I think the RS spec is the very first place to look if there is interest.

@drewhk
Author

drewhk commented Sep 1, 2016

@drewhk note that we're still very interested to learn from any experiences you've had!

Ok, if I have time today I will try to distill the major design junctures I am aware of (and try to keep it free of my RS-specific experiences).

@aturon
Member

aturon commented Sep 1, 2016

This basically means that you block the poller's thread, i.e. this is a blocking scheduler.

That's incorrect. What you block is a task, which is essentially a lightweight/green thread. More broadly, futures/streams are always executed in the context of some task which is responsible for making progress by polling them. Tasks themselves are scheduled onto event loops and thread pools. So "blocking" a task just means that a worker thread is now free to work on a different task.

@drewhk
Author

drewhk commented Sep 1, 2016

What you block is a task, which is essentially a lightweight/green thread.

Ok, that is interesting. I had just looked at poll() and my thinking was:

  • poll either returns "something", i.e. an element or a completion event, but then it needs to block (well, it can spin a little before it blocks)
  • poll can return that it has nothing, but then it forces the caller to do something about it, likely polling again on some schedule

I see now that you have some other means of "suspending" the caller. On the JVM we have no such thing (ok, there are macros in Scala, or bytecode rewriting in general, to transform the sequence into continuations), hence every RS implementation is built around callbacks, which, when they return, give the execution environment a chance to step in, take execution away from the called entity and schedule something else. How do you do the suspension? In other words, when someone calls poll() and hence signals interest, how do you take back control of that thread to use it for something else?

@alexcrichton
Member

How do you do the suspension? In other words, when someone calls poll() and hence signals interest, how do you take back control of that thread to use it for something else?

For us there's always a "task" which is driving a future. This task is persistent for the entire lifetime of a future, and the internal state of the future may transition between many different futures along the way. You can think of it sort of along the lines of one task per TCP connection (in a sense), where that task drives the entire request to completion -- reads the request, parses it, fires off database queries, renders the response, sends it.

When a future decides that it needs to block, it calls the task::park function, which gives it a handle to wake up at a later time. This handle has one method, unpark, which can be invoked when the task is ready to make progress. The future, however, immediately returns NotReady, which ends up getting propagated upwards. If it goes all the way to the outside (e.g. the event loop), then the event loop has already arranged to get notified when the future is ready to make progress (because internally task::park was called). When a notification arrives, the event loop then knows to poll the right future.

So in that sense we don't literally suspend an OS thread or anything like that; we just suspend that particular future by returning up the stack. Later on we poll the future again, and the state machine of the future handles getting back to the same point.
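As an illustrative sketch of that outer driving loop (not the real event loop or CpuPool, which also set up the task context before polling):

```rust
extern crate futures;
use futures::{Async, Future};

// Sketch of the outermost "task" loop described above: poll, and on
// NotReady go dormant until the parked handle is unparked, then poll
// again. Real executors do this inside a proper task context.
fn drive<F: Future>(mut future: F) -> Result<F::Item, F::Error> {
    loop {
        match future.poll() {
            Ok(Async::Ready(value)) => return Ok(value),
            Err(e) => return Err(e),
            Ok(Async::NotReady) => {
                // The future has stashed a task::park() handle somewhere
                // below us; this sketch just waits for the matching
                // unpark() before looping around.
                wait_for_unpark(); // hypothetical helper for the sketch
            }
        }
    }
}

fn wait_for_unpark() {
    // Scheduler-specific: an event-loop turn, a thread park, etc.
}
```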

@drewhk
Author

drewhk commented Sep 2, 2016

(I don't want to abuse your ticket system for discussions, so we can move this elsewhere, but I am really interested in what you are doing, so I would be happy to continue! Just tell me where it is more appropriate.)

Ok, so I think I understand now. Let me try to summarize my understanding:

  • You have Tasks that are scheduled and executed by a pool of threads
  • once a Task is grabbed by a worker thread, some entry-point method is called on the Task
  • inside the entry point, poll is called somewhere. If there is an element ready, it just returns the element immediately.
  • if poll returns empty, park() is called (inside poll()), which does not block the thread but instead registers the Task's interest in the event "new element available".
  • eventually the Task returns from its entry-point method to the calling worker thread.
  • the threadpool/scheduler takes note that the Task has pending interest and, once it is fulfilled, schedules the Task again

Is this correct?

@alexcrichton
Member

Ah no worries! Your summary is indeed correct! The exact specifics of when/where tasks are polled (e.g. by which thread) are largely left up to the user as well; none of it is baked into this library itself.

Generally, though, a future is spawned onto one "executor". For example, the CpuPool provided by this crate is how to spawn futures onto a worker thread pool. The tokio-core crate, however, allows spawning futures directly onto the event loop for doing I/O and such.

In general, though, your summary is accurate!

@drewhk
Author

drewhk commented Sep 2, 2016

Ok, so now I can explain some ideas in terms of your model, since we are now on the same page :) (Disclaimer: I am obviously biased towards what I have done before (I have rewritten our execution engine 4 times in the past 2 years and have also seen other libraries), but I will try not to bias you as much as possible, since the domains are different; when I refer to Akka it is just because that is what I have as an easy reference.)

First, all RS implementations share one property: all of them allow you to separate your streaming pipeline into potentially asynchronous segments. This means that, in your terminology, different Tasks might host different segments of the pipeline. For example, a map stage might have an asynchronous boundary (Task boundary) towards both its upstream and its downstream, so while it processes element N, its upstream might already be processing element N + B1 while its downstream might still be processing element N - B2, where B1 and B2 are the sizes of the buffers at the input side of the map and the input side of its downstream, respectively. These boundaries are optional in all major RS implementations, and the main goal is to gain performance where pipelining computations makes sense, for example pipelining a costly deserialization step with a costly domain-specific computation. Where RS itself comes into the picture is that it is a standard set of interfaces that all of these libraries are able to expose and use to interoperate with another library implementing the same standard. So you can simply pipe an RxJava 2 stream into a Reactor stream and it will work and obey backpressure, even though the underlying execution machinery and DSLs/APIs of the two libraries are very different.
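Just to illustrate the buffering arithmetic, here is the same idea with plain OS threads and a bounded channel (deliberately NOT your task machinery; only the shape matters):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Two pipeline segments separated by an asynchronous boundary. The
// channel bound is the B1 buffer: the upstream segment may run up to
// 16 elements ahead of the downstream one, and no further.
fn main() {
    let (tx, rx) = sync_channel::<u64>(16); // B1 = 16

    let upstream = thread::spawn(move || {
        for n in 0..100u64 {
            let deserialized = n * 2; // stand-in for a costly step
            tx.send(deserialized).unwrap(); // blocks when 16 ahead
        }
    });

    for item in rx {
        let _result = item + 1; // stand-in for the downstream computation
    }
    upstream.join().unwrap();
}
```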

As for the poll API we discussed here: we had a somewhat similar approach back then, which we eventually dropped. As an example of how that looked, here is map in pseudocode (forget zero-costness for a moment, this is just a dummy example):

```
when(hasInputElement && outputIsReady) do {
  pushElement(f(pollElement()))
}
```

Here, the events of interest are expressed in the when block, whose body executes once the condition becomes true (this is tracked by the underlying engine, which might even loop on this method if there are buffered elements from upstream and buffered demand from downstream). This is fine, but things get a bit more dirty when you try to represent, for example, a buffer that is neither full nor empty:

```
when(hasInputElement || outputIsReady) do {
  if (hasInputElement) enqueueToBuffer(pollElement())
  if (outputIsReady) pushElement(dequeueFromBuffer())
}
```

The main issue with this approach arose (as the above example hints) when you were interested in multiple events: since there was only one entry point, you ended up decoding into more fine-grained events all the time. The above example might feel forced, but once fan-in and fan-out stages came into the picture (multiplexing/demultiplexing from/to multiple streams, if you like), plus the various completion events from the different streams being served, these patterns became the norm, not the exception.

What we ended up with as a programming model, expressed in terms of your poll() style approach, is roughly the following:

  • instead of poll() returning an element, it never returns one directly; instead, you pass it a callback: poll(elem -> onElement). If the element is already available, it will "immediately" (more on this later) call the callback with the element
  • you also have an offer(elem, x -> onReady), which passes the element downstream and calls the provided callback once the next element is ready to be sent again
  • the guarantee is that no callbacks are reentrant or concurrent; i.e. if you have both an onReady and an onElement in flight, the system will linearize the two events and execute one callback at a time. It also handles the case where the element is already available and you have called poll() from inside a callback: it will only invoke the new callback once the caller has returned from the previous one
  • on top of this, the stage itself can access all of its mutable state from the callbacks without any synchronization, as the environment takes care of memory visibility even though the stage might be executed on different worker threads over time. You basically own the mutable state in the stage and can do whatever you want with it

(Please note that the above is just an analogy; it is not exactly how it works, and it would likely be a horror API in practice. I just wanted to express the ideas in terms of poll/offer to keep the discussion simple.)
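Rendered as Rust traits purely to make the analogy concrete (an entirely hypothetical API, neither futures-rs nor Akka's actual interfaces):

```rust
// Hypothetical callback-style ports mirroring the poll/offer analogy
// above. The engine would promise that callbacks are never reentrant
// and never concurrent, so a stage may freely touch its own state.
trait Inlet<T> {
    /// Ask for the next element; `on_element` fires at most once,
    /// possibly "immediately" if an element is already buffered.
    fn poll(&mut self, on_element: Box<FnMut(T)>);
}

trait Outlet<T> {
    /// Push one element downstream; `on_ready` fires once the next
    /// offer is allowed, i.e. once demand has arrived again.
    fn offer(&mut self, elem: T, on_ready: Box<FnMut()>);
}
```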

In general, I think the following questions are worth considering:

  • do you want to support asynchronous boundaries between stream segments?
  • what kinds of completion events and error handling do you intend to provide (both upstream and downstream)?
  • do you want to support fan-in/fan-out streaming?
  • if you do, do you want to support directed or undirected cycles? (Deadlocks enter the picture with either of these.)
  • do you intend to support streams-of-streams (i.e. streams streaming other streams)? Certain combinations here are again prone to deadlocks.

Lastly, depending on your answers to the above, the following basic set of streaming operations is what I usually try first when trying something new, given that they exercise almost all the patterns I have encountered (not all of them make sense, depending on the answers above):

  • map (doh)
  • filter or any other n-to-1 stage
  • flattenIterator (transform an upstream elem to multiple downstream elems) or any other 1-to-n stage
  • dropIfFaster, an n-to-1 stage which only drops if the downstream is not fast enough, otherwise transferring elements unmodified (this is an example of a stage we call detached; in this form it is not that useful, but it is an archetype of similar operations; see the sketch after this list)
  • repeatIfSlower, a 1-to-n stage which repeats the last upstream element if the upstream cannot keep up with the downstream, otherwise passing elements unmodified (again, not super useful, but it is the dual of dropIfFaster)
  • merge multiple streams in first-come-first-served fashion
  • its dual, balance, which emits an element to the first available downstream consumer
  • zip, which can only emit if all upstreams provided an element
  • its dual, broadcast, which can only emit if all downstreams are available (drop can be added on separately)
  • flattenMerge/flattenConcat, which takes a stream of streams and flattens it into a stream of elements combined from all of the received streams.
  • groupBy(x -> k), which takes a stream and turns it into a stream of streams grouped by the "key" returned by the passed function for each element, i.e. each emitted stream will give you the elements of that group. (groupBy.flattenConcat is the typical example of a deadlock you can encounter here, which is why I mentioned these two.)
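As an example of the detached archetype above, here is how dropIfFaster might look in terms of your poll() model (an untested sketch assuming the futures-rs Stream shape discussed earlier):

```rust
extern crate futures;
use futures::{Async, Poll, Stream};

// Sketch of dropIfFaster: drain whatever the upstream has ready and
// keep only the newest element, so a slow downstream sees the latest
// value instead of applying backpressure upstream.
struct DropIfFaster<S: Stream> {
    inner: S,
    latest: Option<S::Item>,
    done: bool,
}

impl<S: Stream> Stream for DropIfFaster<S> {
    type Item = S::Item;
    type Error = S::Error;

    fn poll(&mut self) -> Poll<Option<S::Item>, S::Error> {
        // The "drop if the downstream is slower" happens here: every
        // already-available upstream element overwrites the previous one.
        while !self.done {
            match try!(self.inner.poll()) {
                Async::Ready(Some(item)) => self.latest = Some(item),
                Async::Ready(None) => self.done = true,
                Async::NotReady => break,
            }
        }
        match self.latest.take() {
            Some(item) => Ok(Async::Ready(Some(item))),
            None if self.done => Ok(Async::Ready(None)),
            // Nothing buffered: the inner poll above has already
            // registered this task's interest, so just report NotReady.
            None => Ok(Async::NotReady),
        }
    }
}
```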

Sorry for the long post, but I hope it is helpful.

@LalitMaganti

I think I have a working prototype for publish based on exactly what you described @drewhk. I'll attempt to post it later today.

@drewhk
Author

drewhk commented Sep 2, 2016

There is no need to rush to any conclusions, I don't want to propose any kind of API or a particular solution, I just wanted to share some experiences.

@aturon
Member

aturon commented Sep 2, 2016

@drewhk Thanks so much for the detailed thoughts! One thing I did want to mention re: multiplexing and large fan-in is that we have a way to communicate to a task, on wakeup, what woke it up: with_unpark_event. We sometimes think of this as "epoll for everyone". So if you're writing a multiplexed RPC dispatcher, it can use this mechanism to determine precisely what happened without having to go re-scan for readiness. I'm not sure to what extent this addresses the concern you were raising.

The example operations at the end of your comment are super useful, thanks for those! We've thought through or implemented many of them, but we should systematically work through the list.

I will say that in general, I agree that you don't want a purely push or pull model. In our world, tasks provide the main source of "initiative", by always trying to make forward progress on their underlying future. In some cases, that might involve things like sending on a channel, which can "block" the task if the other side isn't ready for data. So I think we have the needed building blocks to express a wide range of patterns of backpressure etc.

I haven't thought about cyclic streams, however. Can you elaborate on that with a concrete example?

@drewhk
Author

drewhk commented Sep 2, 2016

We sometimes think of this as "epoll for everyone". So if you're writing a multiplexed RPC dispatcher, it can use this mechanism to determine precisely what happened without having to go re-scan for readiness. I'm not sure to what extent this addresses the concern you were raising.

I think what you describe is similar to what I mentioned. I would not say I love an "epoll-style" API, but I guess it does the job :)

The example operations at the end of your comment are super useful, thanks for those! We've thought through or implemented many of them, but we should systematically work through the list.

I listed them not so much for their usefulness (some of them are useful, others aren't) but because they host common implementation patterns, so implementing them is a really good exercise for an engine.

Btw, where can I look at the code? I guess at this point my speculative approach is totally useless, and it is better to just look at the code and give advice if any of it is applicable.

I will say that in general, I agree that you don't want a purely push or pull model. In our world, tasks provide the main source of "initiative", by always trying to make forward progress on their underlying future.

I probably need to wrap my head around your futures because the above sentence makes little sense from my Scala background :)

So I think we have the needed building blocks to express a wide range of patterns of backpressure etc.

I think so. I will try to look at the code in more detail. Where can I look for the currently implemented patterns/operators for streams?

I haven't thought about cyclic streams, however. Can you elaborate on that with a concrete example?

On cycles or deadlocks? We have a section in our docs explaining a few deadlock scenarios: http://doc.akka.io/docs/akka/2.4/scala/stream/stream-graphs.html#Graph_cycles__liveness_and_deadlocks
There are actually many more, and not all of them need a feedback cycle (!). I will probably write a blog post about them at some point, since it is a recurring issue.

@aturon
Member

aturon commented Sep 2, 2016

I listed them not so much for their usefulness (some of them are useful, others aren't) but because they host common implementation patterns, so implementing them is a really good exercise for an engine.

Yeah, that's what I meant by useful :)

@He-Pin

He-Pin commented Jan 4, 2019

Hope the Rust community will join this party soon.
