Re: channel_write will fail if large amount of data are being written (0.4 branch)
- Subject: Re: channel_write will fail if large amount of data are being written (0.4 branch)
- From: Aris Adamantiadis <aris@xxxxxxxxxxxx>
- Reply-to: libssh@xxxxxxxxxx
- Date: Thu, 21 Jan 2010 22:22:51 +0100
- To: libssh@xxxxxxxxxx
Hi Vic,

Could you try to get a backtrace at this point? I wonder why ssh_socket_nonblocking_flush is called. I put a breakpoint on this function and was not able to trigger it using samplessh. Could you also check the value of session->blocking? Normally it is set to 1 in ssh_new, and I could not find any other place where it is changed.

Thanks,
Aris

Vic Lee wrote:
Hi,

I found another bug in channel_write() which makes it fail to tunnel an xterm over SSH. According to its description, channel_write() is a blocking write that does not return until all data is written or an error occurs. However, in the call sequence channel_write -> packet_send -> packet_send2 -> packet_write -> packet_flush (at packet.c:456), the flush is non-blocking:

    rc = packet_flush(session, 0);

This almost always returns SSH_AGAIN when a large amount of data is being flushed, and that SSH_AGAIN is eventually returned by packet_send. However, when channel_write calls packet_send, it does not check for SSH_AGAIN and treats anything other than SSH_OK as an error.

This bug makes it impossible to tunnel an xterm (funnily enough, xterm transmits a lot of data). I temporarily changed packet_flush to a blocking call, which fixes the issue, but I think the right patch should be in channel_write, checking for SSH_AGAIN.

Your comments?

Thanks,
Vic