Opened 9 years ago

Closed 9 years ago

#239 closed defect (fixed)

remove shared Listeners

Reported by: Brian Warner
Owned by:
Priority: major
Milestone: 0.9.0
Component: negotiation
Version: 0.7.0
Keywords:
Cc:

Description

While working on the #203 Endpoint-ification of Listeners, I discovered a bug in the code that allows Listeners and Tubs to have a dynamic many-to-many relationship. I don't think it can be fixed, so the simplest solution is probably to limit this feature and make the relationship less dynamic.

The Tub is a cryptographic identity, and a collection of object references. Each RemoteReference you hold (to some remote object) will live in exactly one of your Tubs, and points to exactly one remote Tub. The Tub provides a TubID (a hash of the Tub's TLS certificate).

The Listener is a server that can accept connections. It is associated with a set of "connection hints", which identify transport protocols (like TCP), hostnames, port numbers, and whatever else is necessary to establish a stream-like connection to the server. Listeners are *not* cryptographically identified: you can never be sure you're connected to the "right" Listener, and the hostname/port details are just hints.

Objects are described with FURLs that look like:

pb://TubID@Hint1,Hint2/Swissnum

and when we try to use the FURL, if we don't already have a valid connection to the Tub identified by TubID, then we establish connections to every hint in the FURL, ask to talk to the given Tub, then check the TLS certificate to make sure it matches the TubID.
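As an illustration (this is not foolscap's actual parser, and the variable names are my own), a FURL of that shape can be split into its three components like so:

```python
import re

def parse_furl(furl):
    """Split a FURL of the form pb://TubID@Hint1,Hint2/Swissnum into
    (tubid, hints, swissnum). Illustrative sketch only: foolscap's
    real parser handles more cases than this regexp does."""
    m = re.match(r"^pb://([^@]+)@([^/]+)/(.+)$", furl)
    if m is None:
        raise ValueError("not a FURL: %r" % (furl,))
    tubid, hints, swissnum = m.groups()
    return tubid, hints.split(","), swissnum

# Each comma-separated hint is tried until one yields a connection
# whose TLS certificate hashes to the expected TubID.
tubid, hints, swissnum = parse_furl(
    "pb://abc123@tcp:host1:1234,tcp:host2:5678/secret")
```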

Each Tub can have 0 or more Listeners. A Listener can be shared between one or more Tubs. When a client first connects to a Listener, it asks to talk to a specific TubID (by the path included in the HTTP "GET" command).

The original purpose for sharing Listeners was to enable a dynamic forwarding or "relay" service: a Listener with a public IP address could forward data to clients that are trapped behind a NAT box. As these NAT-bound clients connect, the relay adds their Tubs to the Listener. (except that this wouldn't have worked: these relays need to exchange ciphertext, so the relay-client's Tubs don't even live on the relay node). Ok, maybe it just seemed like a good idea at the time.

I decided to make the mapping pretty dynamic, allowing apps to both add and remove Listeners from Tubs. The problem is that Listeners (as well as Tubs) are Services (twisted.application.service.MultiService, to be precise), which means they are connected in a tree (with parent-child links), and are either Running or Not Running at any given time. Tubs are set running (usually by parenting them to the application's top-level MultiService object), and Listeners start running when they're attached to a Tub.

The troublesome corner case that this causes is where a Listener is passed from TubA to TubB. In particular:

  • create TubA, TubB, and Listener1
  • start both TubA and TubB
  • TubA.listenOn(Listener1)
  • TubB.listenOn(Listener1)
  • TubA.stopListeningOn(Listener1)

The listener is parented to TubA, even when it's shared with TubB (the listener knows about both tubs, so it can forward connections to both, but its serviceParent is TubA). But when TubA is shut down, the Listener gets reparented to TubB. This is the problem, because there's no way to reparent a Service without shutting it down first.

The specification of IService claims that it will throw a RuntimeError if you try to call setServiceParent() on a Service that already has a parent. However, the implementation just silently calls disownServiceParent() on the old parent first. Unfortunately, disownServiceParent() is allowed to return a Deferred, and foolscap Listeners use this Deferred to indicate that they've shut down the TCP socket. By silently calling disownServiceParent(), the implementation discards this Deferred, which can obviously cause synchronization problems.
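A toy model of the hazard, in plain Python standing in for twisted.application.service (the method names mirror IService, but this is a simplified sketch, not Twisted's code):

```python
class Service:
    """Toy model of a Twisted-style service tree, illustrating why
    reparenting silently stops and restarts the child service."""
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.running = False

    def startService(self):
        self.running = True

    def stopService(self):
        # In real Twisted this may return a Deferred that fires only
        # when shutdown (e.g. closing the TCP socket) is complete.
        self.running = False

    def setServiceParent(self, parent):
        # The documented IService contract says to raise RuntimeError
        # if a parent already exists; the real implementation instead
        # silently disowns the old parent, discarding the shutdown
        # Deferred -- that discard is the bug described above.
        if self.parent is not None:
            self.disownServiceParent()
        self.parent = parent
        if not self.running:
            self.startService()

    def disownServiceParent(self):
        self.parent = None
        self.stopService()

tubA, tubB = Service("TubA"), Service("TubB")
listener = Service("Listener1")
listener.setServiceParent(tubA)
listener.setServiceParent(tubB)  # old parent silently disowned
```

In the real asynchronous version, the restart races against the still-pending shutdown, which is what produces the EADDRINUSE failure described below.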

So a Listener which uses a specific TCP port (e.g. Tub.listenOn("tcp:12345")) can't be transferred in the current implementation, because the new owner tries to start the service back up before it has finished shutting down. You get an EADDRINUSE error.

Fixing stopListeningOn() to call disownServiceParent() first (and wait for its Deferred before setting the new parent) causes a different problem: a Listener which uses a dynamic port (listenOn("tcp:0")) will allocate a new port when it starts back up. So the remaining Tub will find itself listening on a different port, and its existing FURLs will be invalid.
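The dynamic-port behavior is easy to demonstrate with a plain stdlib socket (outside foolscap): binding to port 0 asks the kernel to pick a port, and nothing guarantees the same port on a later bind:

```python
import socket

def bind_ephemeral():
    """Bind to port 0 and return the kernel-assigned port number."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()
    return port

first = bind_ephemeral()
second = bind_ephemeral()
# Both ports are nonzero, but they may well differ: any FURL built
# from the first port becomes invalid after a stop/start cycle.
```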

Calling TubA.stopService() is a problem too, since it triggers the same reparenting of the Listener to TubB, with the same hazards.

I'm still trying to figure out how to deal with this, but I think I need to remove some of the dynamism. It might be enough to remove Tub.stopListeningOn, but I might need to remove the whole shared-Listeners thing entirely. Or maybe require that if you're going to share a Listener, then the Listeners must be started/stopped independently of the Tubs, and not be reparented all the time.

Change History (3)

comment:1 Changed 9 years ago by Brian Warner

After some thought, I've decided to get rid of the feature of sharing a Listener between multiple Tubs. Going forward, each Tub will be associated with 0 or more Listeners, and each Listener will be associated with 0 or 1 Tubs (usually 1, but a Tub-less Listener might be used for redirect/relay).

This sharing was previously arranged by passing a Listener into listenOn, like this:

listener1 = tubA.listenOn("tcp:12345")
tubB.listenOn(listener1)

So the API change will be to forbid passing a Listener object into Tub.listenOn(). listenOn will now only accept strings.
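A sketch of the tightened entry point (hypothetical method body and placeholder classes, not foolscap's actual code):

```python
class Listener:
    """Placeholder standing in for foolscap's Listener."""
    def __init__(self, description):
        self.description = description

class Tub:
    def listenOn(self, what):
        # After this change, only endpoint-description strings like
        # "tcp:12345" are accepted; passing a Listener object (the
        # old sharing mechanism) is an error instead of a silent
        # reparenting.
        if not isinstance(what, str):
            raise TypeError("Tub.listenOn() only accepts strings")
        return Listener(what)
```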

There are three features that might be related to this sharing which haven't been implemented yet: relay, redirect, and http-sharing. There's still room for all of them.

For relay (#46), Listeners will have some sort of registerRelay(tubid, listener_rref) method. When clients do a GET that points to the target tubid, the Listener (in particular the Negotiation object it creates) will switch to a proxy protocol that accepts socket-level data (proto.dataReceived(data)) and delivers it as foolscap messages (rref.callRemote("write", data)). Notably, this does *not* involve a local Tub.
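A rough sketch of that proxy protocol's data pump (rref.callRemote is foolscap's API; the class name and wiring are my own, and flow control, errors, and the reverse direction are omitted):

```python
class RelayProxy:
    """Forwards raw socket bytes to a NAT-bound relay client as
    foolscap messages. Sketch only."""
    def __init__(self, rref):
        self.rref = rref  # RemoteReference to the relay client

    def dataReceived(self, data):
        # Socket-level bytes arrive here and are re-sent as a
        # foolscap message; note that no local Tub is involved.
        self.rref.callRemote("write", data)
```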

For redirect, the Negotiation object will have a map of tubid -> newHints, and will respond to the GET with a 301-ish HTTP "redirect" directive, which will cause the originating Tub to add the new hints to the list of hints being tried. The redirecting Tub then drops the connection. No local Tub is necessary either.

HTTP-sharing (#237) would involve a Listener that can accept either plain HTTP traffic, or foolscap Tub connections. The initial GET is used to distinguish between them: a GET of /id/TUBID along with two extra headers (Connection: Upgrade and Upgrade: TLS/1.0) indicates a Tub connection, and anything else is a plain web fetch. This requires a local Tub, plus some other configuration for the web space.
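The GET-based dispatch could look roughly like this (hand-rolled header inspection for illustration; a real implementation would live inside the Negotiation/HTTP machinery):

```python
def is_tub_connection(path, headers):
    """Decide whether an incoming GET is a foolscap Tub connection
    (path /id/TUBID plus Connection: Upgrade and Upgrade: TLS/1.0
    headers) or an ordinary web fetch."""
    h = {k.lower(): v for k, v in headers.items()}
    return (path.startswith("/id/")
            and h.get("connection") == "Upgrade"
            and h.get("upgrade") == "TLS/1.0")
```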

The Listener might service a single local Tub in addition to some number of relays or redirections, and/or HTTP-sharing. Or there may be no local Tub at all.

To use a single local Tub (plus other sharing), the API might look like:

t = Tub()
l = t.listenOn("tcp:12345")
l.addRelay(tubid_B, rref_B)
l.addRedirect(tubid_C, hints_C)
l.addHTTPServer(twisted.web.server.Site(root))
t.startService()

To build a Tub-less Listener, you'd instead do:

l = Listener("tcp:12345")
l.addRelay(tubid_B, rref_B)
l.addRedirect(tubid_C, hints_C)
l.addHTTPServer(twisted.web.server.Site(root))
l.startService()

Note that a Tub-less Listener must be started directly, whereas a Tub-full one is started by virtue of being parented to the Tub.

comment:2 Changed 9 years ago by Brian Warner

This change also means we get rid of Tub.clone(), since it was specified to create a new Tub that shares all Listeners with the original one.

comment:3 Changed 9 years ago by Brian Warner

Resolution: fixed
Status: new → closed
Summary: deprecate Tub.stopListeningOn → remove shared Listeners

This was fixed in [d30ad5160].

Incidentally, Tub.stopListeningOn() is still present: removing the shared-Listener feature was enough to resolve the bug. Updating the title to match.
