﻿id	summary	reporter	owner	description	type	status	priority	milestone	component	version	resolution	keywords	cc
185	test failures against Twisted-11.1.0	Brian Warner		"I'm seeing some occasional test failures when the test suite is run against Twisted-11.1.0 (released a few weeks ago). Apparently we've been cheating for a while, and something in Twisted finally changed to make the problem surface.

When {{{self.transport.loseConnection()}}} is called, we're supposed to wait for the subsequent {{{.connectionLost(why)}}} to fire before the TCP connection is really gone. But most of the time, the connection is actually dropped by the time {{{loseConnection}}} returns. The code in {{{Broker.shutdown()}}} assumed that {{{loseConnection}}} was enough. In particular, {{{Tub.stopService}}}'s Deferred would fire as soon as the last {{{loseConnection}}} was run, instead of waiting for the last {{{connectionLost}}} to be called. So test cases (which wait for stopService) end too early.
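To make the ordering concrete, here is a minimal sketch (plain Python, no Twisted; the class and method names are illustrative, not Foolscap's real API) contrasting a shutdown that declares victory when {{{loseConnection}}} returns with one that waits for {{{connectionLost}}}:

```python
# Sketch of the shutdown-ordering bug: firing the "stopped" signal when
# loseConnection() returns vs. when connectionLost() is actually delivered.
# FakeBroker, when_stopped, etc. are hypothetical stand-ins.

class FakeBroker:
    def __init__(self):
        self.connection_open = True
        self._on_stopped = []

    def when_stopped(self, cb):
        # register a callback, like a Deferred waiting on shutdown
        self._on_stopped.append(cb)

    def loseConnection(self):
        # with Twisted-11.1's TLS machinery, teardown may complete later;
        # model that by doing nothing here
        pass

    def connectionLost(self, why=None):
        # the transport is really gone only once this fires
        self.connection_open = False
        self._fire_stopped()

    def shutdown_buggy(self):
        self.loseConnection()
        self._fire_stopped()   # BUG: assumes the connection is already gone

    def shutdown_fixed(self):
        self.loseConnection()
        # correct: leave _fire_stopped() to connectionLost()

    def _fire_stopped(self):
        callbacks, self._on_stopped = self._on_stopped, []
        for cb in callbacks:
            cb()
```

With the buggy variant, a test's "stopped" callback runs while the connection is still open, which is exactly the dirty-reactor situation; the fixed variant fires only from {{{connectionLost}}}.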

Twisted-11.1 overhauled the TLS implementation, and one of the results is that sometimes the TCP connection is still alive after {{{loseConnection}}} returns. So the tests sometimes fail with a dirty-reactor error. Adding extra fireEventually turns won't help, since the TLS machinery isn't using fireEventually. A fixed time delay might help, but fixed delays are gross and either inefficient or fragile.

The real fix is to stall stopService until all {{{connectionLost}}}s have fired. My first attempt at this (changing {{{Broker.shutdown}}} to not call {{{self.finish}}} itself, but leave that to {{{connectionLost}}}) seems to have fixed that problem, but raised another: {{{shutdown}}} is also called by the duplicate-connection-management code, and the call from {{{evaluateNegotiationVersion1}}} (when we accept the new offer and drop the existing connection) expects the tubid to have been removed from the parent Tub by the time it returns. Since {{{shutdown}}} is now async, the negotiation process would need to be rewritten.

I'm hoping there's some hybrid approach that would be easier than asynchrifying the 176-line negotiation blob. I'd like the tubid to be removed from the Tub synchronously, all of the shutdown/cleanup to be done synchronously, but just keep stopService from firing until after the connectionLost fires.

Maybe use notifyOnDisconnect to inform the Tub about the departure, and have the Tub maintain a counter (not a dict) of Brokers.
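Here is a rough sketch of that hybrid (plain Python, hypothetical names, not the real Tub/Broker API): the tubid is removed from the Tub's dict synchronously, while a separate live-connection counter, decremented from the disconnect notification, holds stopService open until every connection is really gone.

```python
# Hypothetical sketch: synchronous tubid removal plus a connection counter
# that gates the "fully stopped" callback. All names are illustrative.

class Tub:
    def __init__(self):
        self.brokers = {}           # tubid -> broker, for routing
        self._live_connections = 0  # TCP connections not yet lost
        self._stop_callback = None

    def broker_attached(self, tubid, broker):
        self.brokers[tubid] = broker
        self._live_connections += 1

    def broker_shutdown(self, tubid):
        # synchronous: negotiation code can rely on the tubid being gone
        # as soon as this returns
        del self.brokers[tubid]

    def connection_lost(self):
        # invoked (e.g. via a notifyOnDisconnect-style hook) when the
        # TCP connection is really gone
        self._live_connections -= 1
        self._maybe_finish()

    def stop_service(self, done):
        self._stop_callback = done
        for tubid in list(self.brokers):
            self.broker_shutdown(tubid)
        self._maybe_finish()

    def _maybe_finish(self):
        # fire the stopService callback only once every connectionLost
        # has been counted
        if self._stop_callback and self._live_connections == 0:
            cb, self._stop_callback = self._stop_callback, None
            cb()
```

The dict keeps serving the negotiation code's synchronous expectations, and only the counter is asynchronous, so the 176-line negotiation blob would not need restructuring.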

(in the long run, it might be easier to allow multiple simultaneous TCP connections, rather than trying to close duplicates manually: designate only one of them as active, and do all the usual notifications when the designation changes)
"	defect	closed	major	0.6.3	negotiation	0.6.1	fixed		
