This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

[FYI 5/6] GDB crash trying to subscript array of variant record.


Consider the following declarations:

   subtype Small_Type is Integer range 0 .. 10;
   type Record_Type (I : Small_Type := 0) is record
      S : String (1 .. I);
   end record;
   --  Assumed declaration of Array_Type (its exact form is not shown here):
   type Array_Type is array (Positive range <>) of Record_Type;

   A2 : Array_Type := (1 => (I => 2, S => "AB"),
                       2 => (I => 1, S => "A"),
                       3 => (I => 0, S => <>));

With this code compiled with -fgnat-encodings=minimal, trying to
print one element of our array triggers an invalid memory access,
which valgrind reports. On certain GNU/Linux boxes, glibc's malloc
detects the corruption as well and causes GDB to crash:

    (gdb) print a2(1)
     *** glibc detected *** /[...]/gdb:
         malloc(): memory corruption: 0x0a30ba48 ***
    [crash]

The invalid memory access occurs because of a simple buffer
overflow in ada_value_primitive_packed_val. When this function
is called, it is given a bit_size of 128 bits (16 bytes), which
corresponds to the stride of our array. But the actual size of
each element depends on its value. In particular, A2(1) is a record
whose size is only 6 bytes.
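
In numbers, the mismatch looks like this:

    stride of A2  = bit_size / HOST_CHAR_BIT = 128 / 8 = 16 bytes
    size of A2(1) = 6 bytes (resolved record with I = 2, S = "AB")
    overflow      = 16 - 6 = 10 bytes written past the value's buffer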

What happens in our example is that we first build a new value (v)
into which the element is to be unpacked, resolving any of the
element's dynamic properties along the way. We then unpack the data
into this value's buffer:

  unpacked = (unsigned char *) value_contents (v);
  [...]
  nsrc = len;
  [...]
  while (nsrc > 0)
    {
      [...]
          unpacked[targ] = accum & ~(~0L << HOST_CHAR_BIT);
          [...]
          targ += delta;
      [...]
      nsrc -= 1;
      [...]
    }

In the loop above, targ starts at zero (on little-endian
architectures) and nsrc starts at len, which is 16. With delta
being +1, we end up iterating 16 times, writing 16 bytes into
a 6-byte buffer.
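
To make the out-of-bounds write concrete, here is a condensed,
standalone model of the loop's index arithmetic (this is not the GDB
code itself; the 6-byte destination is the buffer size computed for
A2(1) above):

  /* Standalone model of the unpack loop's bounds (little-endian case).
     The real loop also shifts bits through ACCUM; only the index
     arithmetic matters for the overflow.  */
  #include <assert.h>

  int
  main (void)
  {
    unsigned char unpacked[6];  /* sized like value_contents (v) for A2(1) */
    int len = 16;               /* the array stride, in bytes */
    int nsrc = len;
    int targ = 0;               /* starts at zero on little-endian targets */
    int delta = 1;

    while (nsrc > 0)
      {
        /* Trips on the 7th iteration, once targ reaches 6, which is
           exactly where the real code starts writing past the buffer.  */
        assert (targ < (int) sizeof (unpacked));
        unpacked[targ] = 0;     /* stands in for storing ACCUM's low byte */
        targ += delta;
        nsrc -= 1;
      }
    return 0;
  }

Built with assertions enabled, this aborts on the 7th iteration;
without the check, the loop simply writes 16 bytes into the 6-byte
buffer, which is the corruption glibc's malloc complained about.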

This patch fixes the issue by adjusting BIT_SIZE and recomputing LEN
once the element's type has been resolved, whenever the size of the
resolved type, in bits, turns out to be smaller than BIT_SIZE.
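
With the example above, and assuming the element is byte-aligned
within the array (bit_offset == 0), the recomputed values become:

  bit_size = TYPE_LENGTH (type) * HOST_CHAR_BIT = 6 * 8 = 48
  len      = TYPE_LENGTH (type) + (bit_offset + HOST_CHAR_BIT - 1) / 8
           = 6 + (0 + 8 - 1) / 8 = 6    (integer division)

so the unpack loop now iterates 6 times and stays within the value's
buffer.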

gdb/ChangeLog:

        * ada-lang.c (ada_value_primitive_packed_val): Recompute
        BIT_SIZE and LEN if the size of the resolved type, in bits,
        is smaller than BIT_SIZE.
---
 gdb/ada-lang.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/gdb/ada-lang.c b/gdb/ada-lang.c
index d71a243..bbc1732 100644
--- a/gdb/ada-lang.c
+++ b/gdb/ada-lang.c
@@ -2419,6 +2419,17 @@ ada_value_primitive_packed_val (struct value *obj, const gdb_byte *valaddr,
     {
       v = value_at (type, value_address (obj) + offset);
       type = value_type (v);
+      if (TYPE_LENGTH (type) * HOST_CHAR_BIT < bit_size)
+	{
+	  /* This can happen in the case of an array of dynamic objects,
+	     where the size of each element changes from element to element.
+	     In that case, we're initially given the array stride, but
+	     after resolving the element type, we find that its size is
+	     less than this stride.  In that case, adjust bit_size to
+	     match TYPE's length, and recompute LEN accordingly.  */
+	  bit_size = TYPE_LENGTH (type) * HOST_CHAR_BIT;
+	  len = TYPE_LENGTH (type) + (bit_offset + HOST_CHAR_BIT - 1) / 8;
+	}
       bytes = (unsigned char *) alloca (len);
       read_memory (value_address (v), bytes, len);
     }
-- 
1.9.1

