Re: channel_write will fail if a large amount of data is being written (0.4 branch)
- Subject: Re: channel_write will fail if a large amount of data is being written (0.4 branch)
- From: Vic Lee <llyzs@xxxxxxx>
- Reply-to: libssh@xxxxxxxxxx
- Date: Sat, 23 Jan 2010 01:13:09 +0800
- To: libssh@xxxxxxxxxx
Hi Aris,

It's the ssh_set_blocking() call in session.c. Maybe I shouldn't use it? If
I don't set blocking to false, will there be any impact on
channel_read_nonblocking?

I also attach a patch which works for me, though I am not very sure whether
I am doing the right thing here.

Thanks,

Vic

On Thu, 2010-01-21 at 22:22 +0100, Aris Adamantiadis wrote:
> Hi Vic,
>
> Could you try to get a backtrace at this place?
> I wonder why ssh_socket_nonblocking_flush is called. I put a breakpoint
> on this function and was not able to trigger it using samplessh.
> Could you also check the value of session->blocking? Normally it's set
> to 1 in ssh_new and I could not find any other place where it's changed.
>
> Thanks,
>
> Aris
>
> Vic Lee wrote:
> > Hi,
> >
> > I found another bug in channel_write() which makes it fail to tunnel
> > an xterm over SSH.
> >
> > According to its description, channel_write() is a blocking write
> > until all data is written or an error occurs. However, in the
> > following call sequence:
> >
> > channel_write -> packet_send -> packet_send2 -> packet_write ->
> > packet_flush (at packet.c:456), it uses a non-blocking call:
> >
> > rc = packet_flush(session, 0);
> >
> > which will almost always return SSH_AGAIN if a large amount of data is
> > being flushed, and SSH_AGAIN will eventually be returned by
> > packet_send. However, when channel_write calls packet_send it does not
> > check for SSH_AGAIN and simply treats anything other than SSH_OK as an
> > error.
> >
> > This bug makes it impossible to tunnel an xterm (funnily enough, xterm
> > transmits quite a lot of data). I temporarily changed packet_flush to
> > a blocking call to fix the issue, but I think the proper fix should be
> > in channel_write, checking for SSH_AGAIN.
> >
> > Your comments?
> >
> > Thanks,
> >
> > Vic
From a34eaa16ebd7249333dc4d9abc0b0de49c180b42 Mon Sep 17 00:00:00 2001
From: Vic Lee <llyzs@xxxxxxx>
Date: Sat, 23 Jan 2010 01:09:58 +0800
Subject: [PATCH 2/2] Check for SSH_AGAIN in channel_write_common for nonblocking case

Signed-off-by: Vic Lee <llyzs@xxxxxxx>
---
 libssh/channels.c |   16 ++++++++++++----
 1 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/libssh/channels.c b/libssh/channels.c
index 13ea25f..2e59ef1 100644
--- a/libssh/channels.c
+++ b/libssh/channels.c
@@ -878,6 +878,7 @@ int channel_write_common(ssh_channel channel, const void *data,
   ssh_session session = channel->session;
   int origlen = len;
   int effectivelen;
+  int rc;
 
   enter_function();
 
@@ -898,7 +899,7 @@ int channel_write_common(ssh_channel channel, const void *data,
 
 #ifdef WITH_SSH1
   if (channel->version == 1) {
-    int rc = channel_write1(channel, data, len);
+    rc = channel_write1(channel, data, len);
     leave_function();
     return rc;
   }
@@ -934,9 +935,16 @@ int channel_write_common(ssh_channel channel, const void *data,
       goto error;
     }
 
-    if (packet_send(session) != SSH_OK) {
-      leave_function();
-      return SSH_ERROR;
+    rc = packet_send(session);
+    switch (rc) {
+      case SSH_OK:
+        break;
+      case SSH_AGAIN:
+        packet_flush(session, 1);
+        break;
+      default:
+        leave_function();
+        return SSH_ERROR;
     }
 
     ssh_log(session, SSH_LOG_RARE,
-- 
1.6.5
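The approach in the patch above keeps channel_write_common blocking: when
packet_send comes back with SSH_AGAIN, it forces a blocking flush via
packet_flush(session, 1) instead of turning the "try again" result into
SSH_ERROR. To illustrate the mechanism behind SSH_AGAIN, here is a minimal,
self-contained sketch of the same "would block, wait, retry" pattern at the
plain POSIX socket level. It is not libssh code; write_all() and its
poll()-based wait are illustrative assumptions only.

/* Sketch only -- not libssh code.  On a non-blocking socket, write() may
 * take only part of a large buffer and then fail with EAGAIN/EWOULDBLOCK,
 * which is not a real error but a request to retry once the socket is
 * writable again -- the same role SSH_AGAIN plays in the thread above. */
#include <errno.h>
#include <poll.h>
#include <stddef.h>
#include <unistd.h>

/* Write the whole buffer to fd, blocking as needed even if fd itself is
 * non-blocking.  Returns 0 on success, -1 on a genuine error. */
static int write_all(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);

        if (n > 0) {
            buf += n;
            len -= (size_t)n;
            continue;
        }
        if (n < 0 && errno == EINTR) {
            continue;                   /* interrupted, just retry */
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* Kernel send buffer is full: wait until the socket is
             * writable again, then retry, instead of reporting an error. */
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            if (poll(&pfd, 1, -1) < 0 && errno != EINTR) {
                return -1;
            }
            continue;
        }
        return -1;                      /* genuine failure */
    }
    return 0;
}

A blocking wrapper like this mirrors what the patch does inside
channel_write_common: "would block" means wait and retry, and only other
failures become SSH_ERROR.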