


Re: [RFC 0/7] Support for Linux kernel debugging


Hi Peter

On Wed, 25 Jan 2017 18:09:50 +0000
Peter Griffin <peter.griffin@linaro.org> wrote:

> Hi Philipp,
> 
> On Thu, 12 Jan 2017, Philipp Rudo wrote:
> 
> > Hi Peter
> > Hi everybody else
> > 
> > This series implements a new Linux kernel target very similar to
> > Peter's patch sent shortly before Christmas [1]. In contrast to
> > Peter, I concentrated on core dumps on S390. Thus there are some
> > differences, and we can benefit from each other.
> > 
> > The series is structured as follows. Patches 1 and 2 contain
> > small, general changes independent of the actual target which are a
> > prerequisite for the rest. Patch 3 contains the basic target with
> > the same functionality as Peter's patch. Patch 4 contains the
> > implementation for module handling and kernel virtual addresses.
> > Patch 5 contains some commands that are a byproduct of development.
> > Its main purpose is to start a discussion about how (C++/Python)
> > and where (GDB/kernel) commands for the target should be
> > implemented. Finally, patches 6 and 7 contain the S390-specific
> > code, with patch 7 containing the logic needed for the target.  The
> > patches apply to and compile with the current master. You need
> > --enable-targets=all to compile.
> > 
> > While the basic structure is very similar, I made some design
> > decisions that differ from Peter's. Most notably, I store the needed
> > private data in a libiberty/htab (it shouldn't be much of a problem
> > to move to std::unordered_map) with the variable's name as key and
> > its address/struct type/struct field as value. Thus it is closer to
> > the approach using std::array that Yao suggested [2].
> > 
> > In addition, I also have an implementation to handle kernel modules.
> > Together with this goes the handling of kernel virtual addresses,
> > which I quite like. Unfortunately, generalizing this handling to any
> > kind of virtual address would need a mechanism to pass the address
> > space to target_xfer_partial.
> > 
> > The biggest drawback of my design currently is the mapping between
> > the CPUs and the task_structs of the running tasks. Currently the
> > mapping is static, i.e. it is generated once at initialization and
> > cannot be updated.
> 
> Live debug of a target is the main use case we are trying to support
> with the linux-kthread patches. So for us, ongoing thread
> synchronisation between GDB and the Linux target is a key feature we
> need.

For us live debugging is more of a nice-to-have. That's why we wanted
to delay implementing ongoing synchronisation until after the basic
structure of our code had been discussed on the mailing list. That way
we could avoid some work in case it needed to be reworked. Apparently
we need to change this plan now ;)
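
By the way, to make the name-keyed table mentioned above a bit more
concrete, this is roughly what the std::unordered_map variant could
look like (the value layout and the sample address are just made up
for illustration):

#include <cstdint>
#include <string>
#include <unordered_map>

/* Illustrative only: one entry per kernel symbol the target needs,
   keyed by the variable's name.  */
struct lk_private_data
{
  std::uint64_t address;   /* address of the kernel variable */
  /* resolved struct type / field offsets would live here as well */
};

using lk_data_map = std::unordered_map<std::string, lk_private_data>;

int
main ()
{
  lk_data_map table;
  table["init_task"] = { 0xffff000000000000ULL };  /* made-up address */

  auto it = table.find ("init_task");
  if (it != table.end ())
    {
      /* use it->second.address to read the variable from the inferior */
    }
  return 0;
}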

> > Knowing
> > this weakness, I discussed with Andreas at length how to improve
> > it. In this context we also discussed the ravenscar approach Peter
> > is using. In the end we decided against this approach. In
> > particular, we discussed a scenario in which you also stack a
> > userspace target on top of the kernel target.
> 
> How do you stack a userspace target on top with a coredump?

You don't. At least with the current code base it is impossible.

Andreas and I see the ravenscar approach as a workaround for
limitations in GDB. Thus, while discussing it, we thought about
possible future scenarios which would be impossible to implement using
this approach. The userspace-on-kernel case was just meant to be an
example. Another example would be a Go target, where the libthread_db
(POSIX threads) and Go (goroutines) targets would compete for the
thread stratum. Or (for switching targets) a program that runs
simultaneously on CPU and GPU and needs different targets for the two
code parts.
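
To illustrate the limitation we mean, here is a very simplified
picture (these are not GDB's actual definitions): each thread carries
a single private slot, so two stacked targets cannot both attach their
per-thread data to it.

#include <memory>

/* Simplified picture, not GDB's real types.  */
struct private_thread_info
{
  virtual ~private_thread_info () = default;
};

struct thread_info
{
  /* A single slot: whichever target claims it first wins.  */
  std::unique_ptr<private_thread_info> priv;
};

/* Both a kernel target and a stacked userspace target would want to
   hang their data here...  */
struct lk_thread_info : public private_thread_info
{
  unsigned long task_struct_addr = 0;  /* kernel task backing this thread */
};

struct user_thread_info : public private_thread_info
{
  /* userspace view of the same thread */
};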
 
> > In this case
> > you would have three different views on threads (hardware, kernel,
> > userspace). With the ravenscar approach this scenario is impossible
> > to implement, as the private_thread_info is already occupied by the
> > kernel target and the userspace target would have no chance to use
> > it. Furthermore, you would like to be able to switch between the
> > different views, i.e. see both the kernel and the userspace view of
> > a given process.
> 
> Is this a feature you are actively using today with the coredump
> (stacking userspace)?

We are not using it, as discussed above. In particular, with our dumps
it is even impossible, as we strip them of all userspace memory (a
crash of our build server created a ~9 GB dump (kdump, kernelspace
only); imagine adding all of userspace to that ...). But for live
debugging or smaller systems it could be an interesting way to find
bugs triggered by "buggy" userspace behavior.

> IMO we will need to converge on one 'Linux Kernel thread' layer for
> both the core dump and live debug use cases, and, as you point out,
> with the current GDB threads using private_thread_info is mandatory
> to keep the GDB thread list synchronised with the target Linux kernel
> over time.
> 
> So my preference would be that, until the changes you talk about below
> (each target_ops being responsible for its own thread list) are
> implemented, a user who wants to see a userspace view of the system
> should launch a separate GDB session. Then the 'Linux kernel'
> threading layer can use private_thread_info like linux-kthread and
> ravenscar do today, and it can be used for both the core dump and
> live debug use cases.
>
> As an aside, what the original ST patches (on which linux-kthread is
> very heavily based) would do, when halting the target while it was
> executing in userspace, is:
> 
> 1) Switch MMU to kernel mapping
> 2) Read task_struct->mm
> 3) Pull in the user symbols from the root filesystem
> 4) Do the VM address translation.
> 
> This would then allow the backtrace to work right through from user
> space across the user/kernel boundary and into the kernel.

Sounds like an interesting idea. I haven't thought about a solution
like that. 
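
If I understand the four steps correctly, the flow would be roughly
the following (all helper names are placeholders I made up and are
stubbed out here; none of them exist in either patch series):

#include <cstdint>

using addr_t = std::uint64_t;

/* Placeholder helpers, stubbed out for illustration.  */
static addr_t read_kernel_pointer (addr_t va) { return va; }  /* target read */
static void load_user_symbols (addr_t mm) { (void) mm; }      /* from rootfs */
static addr_t translate_user_va (addr_t mm, addr_t uva)       /* page tables */
{ (void) mm; return uva; }

static void
prepare_user_backtrace (addr_t task_struct, addr_t mm_offset, addr_t user_pc)
{
  /* 1) + 2): with the kernel mapping active, fetch task_struct->mm.  */
  addr_t mm = read_kernel_pointer (task_struct + mm_offset);

  /* 3): make the task's userspace symbols visible to the debugger.  */
  load_user_symbols (mm);

  /* 4): resolve the user PC through the task's page tables so the
     unwinder can continue across the user/kernel boundary.  */
  addr_t translated_pc = translate_user_va (mm, user_pc);
  (void) translated_pc;
}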

> > The idea to
> > solve this is to move the global thread_list to the target and
> > allow mapping between the threads of the different lists. Thus every
> > target can manage its own threads. Furthermore, there needs to be a
> > way to allow several targets at the same time and a way for the
> > user to switch between them. Of course this idea would need some
> > work to implement ...
> 
> Having each target_ops have its own thread list seems like a very
> neat solution. However, my preference would be to gate the 'stacking
> userspace' feature on implementing this functionality, rather than
> the other way around.

Well, we thought about this solution when we thought we had plenty of
time. Given the new situation, it is probably better to use your
approach first and do the "proper fix" afterwards. At least this is the
fastest way to have a working solution for both of us.

On the other hand, once there is a working workaround, it's hard to
keep up the momentum to do the "proper fix"...
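
Just to sketch the shape we have in mind for the "proper fix" (purely
hypothetical; nothing like this exists in GDB today):

#include <string>
#include <vector>

/* Hypothetical sketch of per-target thread lists plus a mapping to
   the threads of the target beneath (e.g. kernel task -> hardware
   CPU).  */
struct xthread
{
  int num;            /* per-target thread number */
  std::string info;   /* e.g. task name or CPU id */
};

struct stackable_target
{
  std::vector<xthread> threads;  /* this target's own thread list */

  /* For thread i of this target, index of the corresponding thread in
     the target beneath, or -1 if there is none (task not running).  */
  std::vector<int> beneath_thread;
};

The interesting part would be the mapping to the target beneath, so
the user can switch between the views, e.g. jump from a kernel task to
the CPU it is running on.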
 
> > Unfortunately there is no way to test the code unless you have
> > access to an S390 machine. But if you like and give me some time, I
> > can try to make a dump available to you. Cross debugging shouldn't
> > be a problem.
> 
> An s390 dump could be useful for testing. Also, how are you making the
> dumps (via kdump)? It could be useful to make an ARM dump for
> testing an ARM arch layer for your patch series.

I'll see how I can make an S390 dump available to you. I hope to have
it by the beginning of next week.

Kdump is one possibility for us. Another is getting dumps from the
hypervisor (usually we are running in a VM), but those tools are
S390-specific. Nevertheless, you could try getting a dump from a system
running in KVM/QEMU. At least in theory there should be a way to do
this for ARM, too.

You just have to take care to produce the dump in ELF format, as it is
the only dump format currently supported by GDB.

Best regards
Philipp

