
RE: [Patch] Enable GDB remote protocol to support zlib-compressed xml


Hi Jonathan,

Your comments are very helpful. Thanks very much.
 
> 
> In the documentation, may I suggest some slightly clearer wording:
> 
> -=-=-=-=-=-=-=-=-
> @item compressedXfer
> The remote stub understands the @samp{qXfer:object:zread:annex:offset,length}
> packet, which is similar to @samp{qXfer:object:read:annex:offset:length}
> (@pxref{qXfer read}) but with the object contents in the response packet
> compressed as a zlib deflated stream, prepended with the uncompressed length
> of the whole object.  The length in the @samp{qXfer:zread} request refers to
> the maximum allowed packet size.  The uncompressed length is represented as
> a 64-bit value in little-endian byte order (that is, with the first byte
> being the least significant byte of the value, and the eighth byte being the
> most significant byte of the value).
> 
> It is not compulsory for a remote stub to allow compressed responses for all
> responses: if this feature is set, the host GDB will always try @samp{zread}
> first and fall back to plain @samp{read} if a null response is received from
> the remote stub.
> -=-=-=-=-=-=-=-=-
> and later:
> -=-=-=-=-=-=-=-=-
> @var{zread} works in a similar way to @var{read} except that the transferred
> objects must be compressed as a zlib deflated stream, prepended by the
> uncompressed length of the object.  This can allow remote stubs to reduce
> their memory footprint by returning pre-generated zlib-compressed data, as
> well as reducing communications overhead.  @var{zread} can only be used when
> the host gdb is built with Zlib support.
> -=-=-=-=-=-=-=-=-
> 

Much better than mine. I will use them.

> Although thinking about the uncompressed length, every other value in GDB's
> remote protocol which is endian specific always uses big endian, not little
> endian.  So I recommend changing that (and if so, also my suggested
> replacement documentation wording above).
> 

OK. I will go with big endian.
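
For what it's worth, here is a minimal sketch (not from the patch itself) of
how the prepended 8-byte uncompressed length could be packed and unpacked in
big-endian byte order; the helper names are only illustrative.

-=-=-=-=-=-=-=-=-
#include <stdint.h>

/* Pack VALUE into BUF as a 64-bit big-endian integer
   (most significant byte first).  */
static void
put_uncompressed_length (unsigned char *buf, uint64_t value)
{
  int i;

  for (i = 0; i < 8; i++)
    buf[i] = (unsigned char) (value >> (8 * (7 - i)));
}

/* Unpack a 64-bit big-endian integer from BUF.  */
static uint64_t
get_uncompressed_length (const unsigned char *buf)
{
  uint64_t value = 0;
  int i;

  for (i = 0; i < 8; i++)
    value = (value << 8) | buf[i];

  return value;
}
-=-=-=-=-=-=-=-=-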

> For the code, I don't think you can call from target.c into remote.c for
> remote_packet_is_comprs_xfer().  remote is only one target vector.  I think
> the GDB maintainers would consider this a layering violation, but hopefully
> one of them will be able to confirm for definite.  I think the only sensible
> alternative is to perform the decompression within remote_read_qxfer.
> 

I also think it is a layering violation and want to avoid it, but I couldn't
find a way.
Suppose we have a very large target.xml file which needs multiple transfers;
the stub has two options:
a) compress it as a whole and then transfer it piece by piece,
or
b) split it, compress each piece, and then transfer the pieces one by one.

If we go with option a), remote_read_qxfer can't help, because each received
packet is just part of one big zlib-compressed object.  Only the function
target_read_alloc_1 knows when the transfer of this big file ends.

If we go with option b), remote_read_qxfer can help, because it knows that
each received packet is itself zlib-compressed.
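
To illustrate option b), below is a rough, self-contained sketch of how
remote_read_qxfer could decompress one piece using zlib's uncompress(),
under the assumption that each piece carries its own 8-byte big-endian
uncompressed length followed by a zlib stream.  The function name and the
per-piece layout are assumptions for illustration, not the actual patch.

-=-=-=-=-=-=-=-=-
#include <zlib.h>
#include <stdint.h>
#include <stddef.h>

/* Decompress one zread piece: an 8-byte big-endian uncompressed length
   followed by a zlib deflated stream.  Return the number of bytes written
   to DEST, or -1 on error.  DEST_SIZE is the caller's buffer size.  */
static long
decompress_zread_piece (const unsigned char *piece, size_t piece_len,
                        unsigned char *dest, size_t dest_size)
{
  uint64_t expect = 0;
  uLongf out_len;
  int i;

  if (piece_len < 8)
    return -1;

  /* Decode the prepended big-endian length.  */
  for (i = 0; i < 8; i++)
    expect = (expect << 8) | piece[i];

  if (expect > dest_size)
    return -1;

  out_len = (uLongf) expect;
  if (uncompress (dest, &out_len, piece + 8, (uLong) (piece_len - 8)) != Z_OK)
    return -1;

  return (long) out_len;
}
-=-=-=-=-=-=-=-=-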

> The code currently assumes that once we know whether compressed data is or
> is not supported for a particular packet, that setting is used for all
> objects used with that packet.  Is that a good assumption?  Just because one
> object is compressed, must we insist that all objects are compressed?  Or
> vice versa.  This is somewhat theoretical given that initially we are only
> concerned with features.xml, but if this is to be a general extension, this
> requirement for the stub should be clear (and documented).
> 

Maybe my code isn't so clear, but my intentions are:

a). compressedXfer is a general stub feature and not specific to any
particular kind of gdb packet.  Suppose the object for PACKET_qXfer_osdata is
transferred in zlib-compressed format while the object for
PACKET_qXfer_libraries is still in plain format; once I know the status of
PACKET_qXfer_osdata, I won't assume the same status for
PACKET_qXfer_libraries.  I will still try zread first to detect its actual
status (see the sketch after b) below).

b). If the object for PACKET_qXfer_libraries is very big and needs multiple
transfers, I will assume the remaining parts are transferred in compressed
format once I know the first part was transferred in compressed format.
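
As a rough illustration of the probing in a), the following self-contained
sketch keeps a separate tri-state per object kind, tries zread first, and
falls back to plain read after a null response.  All of the names here,
including the two request functions, are hypothetical placeholders rather
than GDB's real API.

-=-=-=-=-=-=-=-=-
#include <stdio.h>
#include <string.h>

enum zread_support { ZREAD_UNKNOWN, ZREAD_NO, ZREAD_YES };

/* Placeholder transport: pretend the stub compresses only the "osdata"
   object.  A return of -1 stands for the empty (unsupported) response.  */
static long
send_zread_request (const char *object)
{
  return strcmp (object, "osdata") == 0 ? 128 : -1;
}

static long
send_plain_read_request (const char *object)
{
  (void) object;
  return 64;
}

/* Probe zread support separately for each object kind: try zread unless we
   already know it is unsupported, and remember the answer.  */
static long
read_object (const char *object, enum zread_support *state)
{
  if (*state != ZREAD_NO)
    {
      long n = send_zread_request (object);

      if (n >= 0)
        {
          *state = ZREAD_YES;
          return n;
        }
      /* Null response: remember it and fall back to plain read.  */
      *state = ZREAD_NO;
    }
  return send_plain_read_request (object);
}

int
main (void)
{
  enum zread_support osdata = ZREAD_UNKNOWN;
  enum zread_support libraries = ZREAD_UNKNOWN;

  printf ("osdata: %ld bytes via %s\n", read_object ("osdata", &osdata),
          osdata == ZREAD_YES ? "zread" : "read");
  printf ("libraries: %ld bytes via %s\n",
          read_object ("libraries", &libraries),
          libraries == ZREAD_YES ? "zread" : "read");
  return 0;
}
-=-=-=-=-=-=-=-=-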

> What about remote_write_qxfer()?
> 
I think remote_write_qxfer means the host gdb transfers zlib-compressed data
to the gdb stub.  This would require the stub to spare more memory for the
zlib functions.  I can start on it once the solution for read is accepted and
there are stubs that support decompression; otherwise I can't get a stub to
debug my code against.

> In gdb_zlib_error() I don't think zlib is likely to be using stdin/stdout,
> so I doubt the handling of Z_ERRNO is correct.  And in any case, there
> should have been an 'else' clause to handle errors which aren't ferrors for
> stdin/stdout anyway.
> 
> You also need to handle Z_BUF_ERROR.
> 

OK. I will update the handling of Z_ERRNO and cover Z_BUF_ERROR as well.
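
For reference, here is a hedged sketch of how the error reporting might look
once Z_ERRNO no longer assumes stdin/stdout and Z_BUF_ERROR is covered.  Real
GDB code would likely report through its own error machinery rather than
stderr, so this only illustrates the cases; it is not the actual patch.

-=-=-=-=-=-=-=-=-
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <zlib.h>

/* Report a zlib return code.  */
static void
gdb_zlib_error (int ret)
{
  switch (ret)
    {
    case Z_ERRNO:
      /* A system error: report errno instead of poking at stdio streams.  */
      fprintf (stderr, "zlib: system error: %s\n", strerror (errno));
      break;
    case Z_STREAM_ERROR:
      fprintf (stderr, "zlib: invalid compression level or stream state\n");
      break;
    case Z_DATA_ERROR:
      fprintf (stderr, "zlib: invalid or incomplete deflate data\n");
      break;
    case Z_MEM_ERROR:
      fprintf (stderr, "zlib: out of memory\n");
      break;
    case Z_BUF_ERROR:
      fprintf (stderr, "zlib: output buffer too small or no progress made\n");
      break;
    case Z_VERSION_ERROR:
      fprintf (stderr, "zlib: library version mismatch\n");
      break;
    default:
      fprintf (stderr, "zlib: unexpected return code %d\n", ret);
      break;
    }
}
-=-=-=-=-=-=-=-=-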

BR,
Terry


