
Aris - Response to your threading model question


Your blog is broken, and I don't know if you still need this, but here
is my response to the open question you posted there.  Each time I tried
to submit it as a comment, the blog rejected it as a 'duplicate
comment.'


The question I have is why anyone sane would design their app the way
you're describing.  I am planning to implement the socket back end of
libssh 0.5 using boost::asio.  Asio allows multiple threads to read the
socket the way you describe, but it also provides what it calls
'strands'.  Strands guarantee that only one thread at a time can be
handling events within the same strand.

Asio works by having an I/O service that can handle almost any kind of
event (including a simple callback), and by letting the client
programmer 'give' or 'lend' threads to process those events.  The number
of threads you provide for this processing is up to you, but any thread
you give to asio can handle any event asio is managing (a timer firing,
a socket read or write, etc.).

So for me this is how I would use libssh with asio:

- Register async socket read event with io_service.
- io_service calls back when some data has been read.
- Register another async socket read event with io_service.
- Invoke ProcessPacket in the strand specific to the ssh session.
- (done)
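The key part of those steps is re-arming the read before processing, so
a read is always outstanding.  Here is a stdlib-only mock of that
control flow - the names (MockSocket, SshSession, arm_read) are mine,
not libssh or asio API, and the 'socket' just replays queued strings:

```cpp
#include <functional>
#include <queue>
#include <string>
#include <vector>

struct MockSocket {
    std::queue<std::string> incoming;               // pretend wire data
    std::function<void(std::string)> pending_read;  // at most one armed read

    void async_read(std::function<void(std::string)> h) {
        pending_read = std::move(h);
    }

    // Test pump: deliver one packet to the armed handler, if any.
    bool deliver() {
        if (!pending_read || incoming.empty()) return false;
        auto h = std::move(pending_read);
        pending_read = nullptr;
        auto pkt = incoming.front();
        incoming.pop();
        h(pkt);
        return true;
    }
};

struct SshSession {
    MockSocket& sock;
    std::vector<std::string> processed;

    void start() { arm_read(); }

    void arm_read() {
        sock.async_read([this](std::string pkt) {
            arm_read();                      // re-register the read first...
            process_packet(std::move(pkt));  // ...then process (in a real app,
                                             // dispatched via the session strand)
        });
    }

    void process_packet(std::string pkt) { processed.push_back(std::move(pkt)); }
};
```

Because the handler re-arms before it processes, there is never a gap
with no read registered, which is exactly the property the list above
is after.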

If a second thread receives data from the async read, then invoking
ProcessPacket in the strand specific to the ssh session means it will
not process the packet directly (the call returns immediately).  Either
the thread currently working within that strand will process the next
packet when it finishes, or, once the strand is no longer 'in use', some
other thread will be invoked with the ProcessPacket callback.

The net effect of the above is that there is ALWAYS one async read event
outstanding in asio (so we read as fast as possible), and that packets
are automatically serialized without needing an explicit lock (which
would block the threads waiting on it, something I don't want to do
under any circumstances).

For socket writes, I would probably create a second strand (specific to
the socket) and do all my async writes in that strand, guaranteeing that
only one thread can write to the socket at a time (i.e. two simultaneous
writes cannot corrupt messages on the wire).

To extend the above, if you really want multiple threads to be able to
handle multiple channels within the same ssh session at once, you could
assign a strand to each channel; once ProcessPacket has figured out
which channel a packet is for, it invokes ProcessChannelMessage inside
the strand specific to that channel.  Though I think this is overly
complicated and will gain little in the real world.

Just a note about strands: a strand is a logical concept in asio that
simply means 'events within the same strand are serialized.'  Executing
something within a strand does not mean it will jump threads or context
switch - it just guarantees that only one thread can work within that
strand at once.  If another thread is already working within it, the
current thread simply returns immediately (it does NOT block, which is
important), and the thread currently working in that strand will likely
handle the next event when it is done.  You can also 'exit' a strand at
any time, again without thread jumping or context switching.

This sounds more like what you want - your 'X' and 'Y' threads above
should only be reading packets, without really CARING whether they are
x- or y-type packets.  Regardless of which it receives, each just calls
ProcessInput, which figures out whether the packet is x- or y-type and
processes it appropriately - preferably with non-blocking serialization,
but if that is not possible, by blocking the 'next' packet.  There is no
reason for X and Y to care specifically about x- and y-type packets
unless thread-specific data is used.  These should be generic workers,
not threads dedicated to processing x- or y-type data.  The type only
matters to the ProcessInput function, which should have all the
contextual data needed to know what to do with a packet once it
determines the type.
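In other words, the dispatch on type lives in one place.  A trivial
sketch (Packet, Session, HandleX/HandleY are illustrative names I made
up, not libssh API - the type codes 1 and 2 are arbitrary):

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct Packet {
    uint8_t type;
    std::string payload;
};

struct Session {
    std::vector<std::string> x_log, y_log;

    void HandleX(const Packet& p) { x_log.push_back(p.payload); }
    void HandleY(const Packet& p) { y_log.push_back(p.payload); }

    // Any generic worker thread can call this for any packet; the
    // session, not the thread, knows what each type means.
    void ProcessInput(const Packet& p) {
        switch (p.type) {
            case 1: HandleX(p); break;  // "x-type" packet
            case 2: HandleY(p); break;  // "y-type" packet
            default: break;             // unknown: ignored in this sketch
        }
    }
};
```

The worker that pulled the bytes off the socket is irrelevant; the
routing decision is made from the packet itself plus the session's
context.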

PreZ :)

Archive administrator: postmaster@lists.cynapses.org