When the node is stopped, we want to do two things for a clean shutdown:
- Disconnect all peers
- Gracefully shut down the `BackgroundProcessor`
The `BackgroundProcessor` actually implements the `Drop` trait itself. Dropping it stops the thread in which the background processor runs (its loop is wrapped inside the `define_run_body!` macro). So there's no need to do anything from 3L's side; it can just wait for the `BackgroundProcessor` to go out of scope.
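
To illustrate the pattern (a minimal toy sketch, not LDK's actual code), here is a processor whose `Drop` impl sets a stop flag and joins its worker thread, roughly mirroring how the loop generated by `define_run_body!` gets stopped when the real `BackgroundProcessor` is dropped:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread::{self, JoinHandle};
use std::time::Duration;

/// Toy stand-in for LDK's `BackgroundProcessor`: a worker thread plus a
/// stop flag, where `Drop` signals the thread and joins it.
struct ToyBackgroundProcessor {
    stop_flag: Arc<AtomicBool>,
    thread_handle: Option<JoinHandle<()>>,
}

impl ToyBackgroundProcessor {
    fn start() -> Self {
        let stop_flag = Arc::new(AtomicBool::new(false));
        let flag = Arc::clone(&stop_flag);
        let thread_handle = Some(thread::spawn(move || {
            // Simplified version of the loop the macro expands to:
            // keep doing periodic work until the stop flag is set.
            while !flag.load(Ordering::Acquire) {
                // ... persist state, process events, etc. ...
                thread::sleep(Duration::from_millis(100));
            }
        }));
        Self { stop_flag, thread_handle }
    }
}

impl Drop for ToyBackgroundProcessor {
    fn drop(&mut self) {
        // Going out of scope is enough: signal the thread and wait for it.
        self.stop_flag.store(true, Ordering::Release);
        if let Some(handle) = self.thread_handle.take() {
            let _ = handle.join();
        }
    }
}

fn main() {
    let _processor = ToyBackgroundProcessor::start();
    // When `_processor` goes out of scope here, `drop` stops the thread.
}
```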
In contrast, `peer_manager.disconnect_all_peers()` is never called. However, when I was running tests locally and checking on the connected LND node, the 3L example node was disconnected from LND as soon as it stopped running anyway. I couldn't figure out what initiated this disconnect; possibly the LND node simply dropped the peer once the TCP connection was closed.
Anyhow, calling `peer_manager.disconnect_all_peers()` when the node shuts down still feels like the right thing to do, and there doesn't seem to be any big drawback even if it turns out this step wasn't actually necessary.
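
A shutdown routine along these lines would cover both steps. This is only a sketch: `Node`, its fields, and `ToyPeerManager` are hypothetical stand-ins (reusing the toy processor from the sketch above); the only real LDK name here is the `disconnect_all_peers()` method on `PeerManager`.

```rust
/// Toy stand-in for LDK's `PeerManager`, exposing only the one method we
/// care about here.
struct ToyPeerManager;

impl ToyPeerManager {
    /// Same name as the real `PeerManager::disconnect_all_peers`, which
    /// disconnects every currently connected peer.
    fn disconnect_all_peers(&self) {
        println!("disconnecting all peers");
    }
}

/// Hypothetical handle bundling what the node keeps around; the field names
/// are assumptions, not 3L's actual struct.
struct Node {
    peer_manager: ToyPeerManager,
    background_processor: ToyBackgroundProcessor, // from the sketch above
}

impl Node {
    /// Clean shutdown: explicitly disconnect peers, then let the background
    /// processor drop (its `Drop` impl stops the processing thread).
    fn stop(self) {
        self.peer_manager.disconnect_all_peers();
        // `self` is consumed here, so `self.background_processor` is dropped
        // at the end of this function, stopping its thread.
    }
}

fn demo_shutdown() {
    let node = Node {
        peer_manager: ToyPeerManager,
        background_processor: ToyBackgroundProcessor::start(),
    };
    // ... node runs ...
    node.stop();
}
```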