


Re: [patch rfa:doco rfc:NEWS] mi1 -> mi2; rm mi0


Keith,

On Tuesday, October 1, 2002, at 04:42  PM, Keith Seitz wrote:

On Tue, 1 Oct 2002, Jim Ingham wrote:

BTW, I haven't seen the actual change Keith is planning here.  Will he
be sticking the command sequence cookie in the async result?  His
example didn't show the cookies.
For the record, the proposed change would look like:

(gdb)
200-break-insert main
=breakpoint-create,number="1"
200^done
(gdb)

Seems to me that reporting command results as an async notification
means that we are breaking the tie between the command and its results.
And is that necessarily bad? (More on my confusion below...)

It was very nice that I could issue a bunch of commands at some point
in the GUI code, then at another place gather up the results, and match
them to the initial commands by using the sequence ID's.
I'm afraid that I'm not following. Could you please expound? Why would
you want a tight coupling of commands and their results? What's wrong with
just issuing a command at GDB and waiting for GDB to tell you that
it's done something? It seems to me that the less serialization that a UI
does, the more time it has to spend in its event loop and the more
responsive it will feel to the user (barring the java way: create a
freakin' thread for everything).
I don't want a tight coupling in TIME, or in the output sequence. I want meta-info that tells me with 100% certainty that this result is the result of command foo, which I sent some time in the past, precisely so I DON'T have to rely on anything about the sequencing of input or output to make this connection.

Suppose I have a canned startup sequence that I send to gdb when the user starts up. This startup sequence sets a bunch of environment variables, does some other accounting, retrieves the saved breakpoint list from the project, and sends those to the target. In the PB code, we make a bunch of command objects - one for each of these commands - put them on a queue of outstanding commands, stick the actual gdb commands in a buffer, then fire the whole lot off to gdb. Then we sit at the other end of the communication loop sucking results back from gdb. Each time we get a result, we use the cookie to match it to its command object, give the command object its data, and it does whatever it needs to with it (for instance, create a real breakpoint object and stick it in our list of breakpoints). Doing this is very straightforward: we can always map the results to the commands that sent them with NO guesswork. And anything else that shows up is some asynchronous result that can be treated specially - some error condition or whatever.
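
To make that concrete, here is a minimal sketch, in Python rather than the actual PB code, of what I mean by matching on the cookie. The class and method names are made up for illustration, and it just assumes gdb echoes the numeric cookie back on the result record:

import itertools

class PendingCommand:
    def __init__(self, token, text, on_result):
        self.token = token          # the numeric cookie sent with the command
        self.text = text            # e.g. "-break-insert main"
        self.on_result = on_result  # callback run when the result comes back

class MIClient:
    def __init__(self, send_to_gdb):
        self._send = send_to_gdb            # callable that writes one line to gdb
        self._tokens = itertools.count(200) # arbitrary starting cookie
        self._pending = {}                  # cookie -> PendingCommand

    def issue(self, text, on_result):
        # Queue a command; the cookie is what ties the eventual result back to it.
        token = next(self._tokens)
        self._pending[token] = PendingCommand(token, text, on_result)
        self._send("%d%s\n" % (token, text))
        return token

    def handle_line(self, line):
        # Result records look like "200^done,..."; match the cookie, hand the
        # data to the command object, and never rely on output ordering.
        token, sep, rest = line.partition("^")
        if sep and token.isdigit() and int(token) in self._pending:
            self._pending.pop(int(token)).on_result(rest)
        else:
            self.handle_async(line)   # anything else gets treated specially

    def handle_async(self, line):
        pass   # error conditions, stop events, and so on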

As soon as I have to guess the connection between results from gdb and the command that originated them by position in the gdb output stream, this very nice model breaks down.


In the new way of doing things, we have to parse more carefully, and
assume that the =breakpoint-create that we just got was the one that
came from the -break-insert in the output just above it. It makes the
client stream parser have to be more stateful than in the mi0 version,
which doesn't seem to me all that good an idea. If the async event has
the cookie in it, this will be a little better, though it still means I
have a two-step accumulation phase for each command (wait for the async
result with the right cookie, then the done with the right cookie...)
Ditto the above. Maybe -break-insert and =breakpoint-create are bad
examples(?), but I am not able to imagine why it would matter which
command elicited the notification. Commands are issued, and something
happens. The only case I can imagine where this is important is when
an error occurs setting the breakpoint, but MI will (and will continue
to) report errors immediately.
In the break case, on the UI side I have something I can't do anything with until I get the breakpoint number back from gdb. At the time I issue the breakpoint command, I can't make a complete break object on the GUI side. So a very convenient way to handle this is just to tell myself I have a pending command, and when the result comes back, actually make the now-useful breakpoint object.

If the command results have the command cookie in them, I just wait till I get that cookie back, suck the data in and make the breakpoint object. In your new method, I have to keep a more careful eye on the output stream, fish the =breakpoint-create off the stream, know somehow (either because I only ever get to have one outstanding breakpoint, or because I keep a list of breakpoints in order of issuing) which command it actually goes to, continue to wait for the ^done - 'cause there's many a slip..., then finally make the object. Much more tedious.

Plus, what if there is an error? Then presumably I will get the ^error, but NOT the =breakpoint-create...
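
By contrast, here is a sketch (again hypothetical Python, with made-up helper names) of the extra bookkeeping the client needs when =breakpoint-create carries no cookie: it has to guess that the notification belongs to the oldest outstanding -break-insert, hold the data until the matching ^done arrives, and throw the guess away if a ^error shows up instead:

from collections import deque

def make_breakpoint_object(fields):   # stand-in for the GUI-side constructor
    print("new breakpoint:", fields)

def report_error(msg):                # stand-in for the GUI-side error path
    print("breakpoint failed:", msg)

class BreakpointAccumulator:
    def __init__(self):
        self._outstanding = deque()   # cookies of -break-insert commands, in issue order
        self._half_done = {}          # cookie -> fields from =breakpoint-create

    def sent_break_insert(self, token):
        self._outstanding.append(token)

    def on_breakpoint_create(self, fields):
        # Guess: the notification goes with the oldest unanswered -break-insert.
        self._half_done[self._outstanding[0]] = fields

    def on_done(self, token):
        self._outstanding.remove(token)
        fields = self._half_done.pop(token, None)
        if fields is not None:
            make_breakpoint_object(fields)

    def on_error(self, token, msg):
        # ^error arrives with no =breakpoint-create; drop any stale guess.
        self._outstanding.remove(token)
        self._half_done.pop(token, None)
        report_error(msg)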

The other way to do it is just to issue a bunch of stuff down the pipe, but not build any objects watching for results. Then I hang around on the UI side going "Oh, boy, I just got a breakpoint, yuk, yuk..." and make the object. Not very reliable, however.

You are just making life much harder for the client. It is REALLY nice to have results match up with commands. You want to minimize knowledge of the output flow, and your method of doing things actually requires more knowledge... The more I think about it, the worse an idea it seems to me...


Maybe something related to async targets? (BTW, there is no reason why
I/we cannot/could not put the old behavior back for an MI0 target, if
'mi0' were sticking around, which is a decision I want nothing to do
with, actually.) One is still pretty screwed, though, when
"interpreter-exec" is used, unless we revert from async notification back
to serial notification via the hook hack that you're using.

<devil's advocate>
As far as the versioning thing goes, I must say that I don't really care,
(not that my opinion matters), but I can understand why some on this list
would be less sympathetic with objections to dropping mi0 coming from
Apple, who has done a lot of work on gdb and MI; no doubt fixed a lot of
stuff, but only managed to "submit" a giant distribution tarball of their
modified GDB. I wouldn't be too surprised if some thought that Apple was
taking advantage of the public's work. Mind you, I'm not saying that any
of this is true, but I wouldn't be surprised if someone reading this list
felt that way.
</devil's advocate>
This is unhelpful. The real point is that you CAN'T know who all the users of the MI are. If the MI is well done, they never need to tell you that they are using it, nor should they have to. And requiring anyone planning to use the MI to follow gdb-patches closely just so they know when their software is likely to stop working is silly.

Jim
--
Jim Ingham jingham@apple.com
Developer Tools
Apple Computer

