
[PATCH] AArch64 Port of GDB and GDBSERVER.


Following our earlier publication of GCC, binutils, newlib and libgloss
ports, ARM is pleased to announce a port of GDB to its AArch64 architecture.

Please note that while the port of GDB has been used to debug in anger,
it should still be considered a work-in-progress.

This port of GDB provides cross-debugging support for the aarch64-none-elf
and aarch64-none-linux-gnu targets via GDBSERVER, as well as native debugging
on aarch64-none-linux-gnu.
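
For reference, a typical cross-debug session against the Linux target might
look like the following (the program name, board address, port number and
tool prefix are placeholders; the exact prefix depends on how the toolchain
was configured):

  target$ gdbserver :2345 ./hello
  host$   aarch64-none-linux-gnu-gdb ./hello
  (gdb) target remote target-board:2345
  (gdb) break main
  (gdb) continue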

The port includes support for hardware breakpoints and watchpoints; however,
the ptrace() interface for this feature has recently changed in the Linux
kernel, so the corresponding support in GDB and GDBSERVER will follow shortly.
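
Once that support is in place, hardware-assisted breakpoints and watchpoints
are expected to be driven through the usual GDB commands, for example
(the function and variable names below are purely illustrative):

  (gdb) hbreak some_function
  (gdb) watch some_variable
  (gdb) rwatch some_variable
  (gdb) awatch some_variable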

This port has a number of limitations that we would like to address in
the future, notably:

* The aarch64-none-linux-gnu target does not support TLS variables.

* The native aarch64-none-linux-gnu debugger does not support debugging
AArch32 processes.

* Reading of ELF core files is not implemented.

/Marcus


Proposed ChangeLog entries:


Index: ChangeLog

2012-09-26  Jim MacArthur  <jim.macarthur@arm.com>
            Marcus Shawcroft  <marcus.shawcroft@arm.com>
            Nigel Stephens  <nigel.stephens@arm.com>
            Yufeng Zhang  <yufeng.zhang@arm.com>

	* Makefile.in: Add AArch64.
	* aarch64-linux-nat.c: New file.
	* aarch64-linux-tdep.c: New file.
	* aarch64-newlib-tdep.c: New file.
	* aarch64-tdep.c: New file.
	* aarch64-tdep.h: New file.
	* config/aarch64/aarch64-linux.mh: New file.
	* configure.host: Add AArch64.
	* configure.tgt: Add AArch64.
	* defs.h (enum gdb_osabi): Add GDB_OSABI_NEWLIB.
	* features/Makefile: Add AArch64.
	* features/aarch64-core.xml: New file.
	* features/aarch64-fpu.xml: New file.
	* features/aarch64-without-fpu.c: New file (generated).
	* features/aarch64-without-fpu.xml: New file.
	* features/aarch64.c: New file (generated).
	* features/aarch64.xml: New file.
	* osabi.c (gdb_osabi_names): Add "Newlib".
	* regformats/aarch64-without-fpu.dat: New file (generated).
	* regformats/aarch64.dat: New file (generated).

Index: gdbserver/ChangeLog

2012-09-26  Jim MacArthur  <jim.macarthur@arm.com>
            Marcus Shawcroft  <marcus.shawcroft@arm.com>
            Nigel Stephens  <nigel.stephens@arm.com>
            Yufeng Zhang  <yufeng.zhang@arm.com>

	* Makefile.in: Add AArch64.
	* configure.srv: Add AArch64.
	* linux-aarch64-low.c: New file.
	* linux-low.c: For various 'ptrace' calls, cast the literal '0'
	passed as the 3rd and 4th arguments to PTRACE_ARG3_TYPE and
	PTRACE_ARG4_TYPE respectively.
diff --git a/gdb/Makefile.in b/gdb/Makefile.in
index bb1f0bc..bad326e 100644
--- a/gdb/Makefile.in
+++ b/gdb/Makefile.in
@@ -512,6 +512,7 @@ TARGET_OBS = @TARGET_OBS@
 # All target-dependent objects files that require 64-bit CORE_ADDR
 # (used with --enable-targets=all --enable-64-bit-bfd).
 ALL_64_TARGET_OBS = \
+	aarch64-linux-tdep.o aarch64-newlib-tdep.o aarch64-tdep.o \
 	alphabsd-tdep.o alphafbsd-tdep.o alpha-linux-tdep.o alpha-mdebug-tdep.o \
 	alphanbsd-tdep.o alphaobsd-tdep.o alpha-osf1-tdep.o alpha-tdep.o \
 	amd64fbsd-tdep.o amd64-darwin-tdep.o amd64-dicos-tdep.o \
@@ -767,7 +768,7 @@ osf-share/cma_deb_core.h osf-share/AT386/cma_thread_io.h \
 osf-share/cma_sched.h \
 common/gdb_signals.h common/gdb_thread_db.h common/gdb_vecs.h \
 common/i386-xstate.h common/linux-ptrace.h \
-proc-utils.h arm-tdep.h ax-gdb.h ppcnbsd-tdep.h	\
+proc-utils.h aarch64-tdep.h arm-tdep.h ax-gdb.h ppcnbsd-tdep.h	\
 cli-out.h gdb_expat.h breakpoint.h infcall.h obsd-tdep.h \
 exec.h m32r-tdep.h osabi.h gdbcore.h solib-som.h amd64bsd-nat.h \
 i386bsd-nat.h xml-support.h xml-tdesc.h alphabsd-tdep.h gdb_obstack.h \
@@ -1409,6 +1410,8 @@ force_update:
 MAKEOVERRIDES=
 
 ALLDEPFILES = \
+	aarch64-linux-nat.c \
+	aarch64-linux-tdep.c aarch64-newlib-tdep.c aarch64-tdep.c \
 	aix-thread.c \
 	alpha-nat.c alphabsd-nat.c alpha-linux-nat.c \
 	alpha-tdep.c alpha-mdebug-tdep.c \
diff --git a/gdb/aarch64-linux-nat.c b/gdb/aarch64-linux-nat.c
new file mode 100644
index 0000000..b4c987c
--- /dev/null
+++ b/gdb/aarch64-linux-nat.c
@@ -0,0 +1,901 @@
+/* Native-dependent code for GNU/Linux AArch64.
+
+   Copyright (C) 2011, 2012 Free Software Foundation, Inc.
+   Contributed by ARM Ltd.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+#include "inferior.h"
+#include "gdbcore.h"
+#include "regcache.h"
+#include "linux-nat.h"
+#include "target-descriptions.h"
+#include "auxv.h"
+#include "aarch64-tdep.h"
+
+#include <elf/common.h>
+#include <sys/ptrace.h>
+#include <sys/utsname.h>
+
+#include "gregset.h"
+
+#include "features/aarch64.c"
+
+#ifndef PTRACE_GETHBPREGS
+#define PTRACE_GETHBPREGS 29
+#endif
+
+#ifndef PTRACE_SETHBPREGS
+#define PTRACE_SETHBPREGS 30
+#endif
+
+#ifndef TRAP_HWBKPT
+#define TRAP_HWBKPT 0x0004
+#endif
+
+#define AARCH64_HWP_ALIGNMENT 8
+#define AARCH64_HWP_MAX_LEN_PER_REG 8
+
+/* Macros to extract fields from the PTRACE_GETHBPREGS result.  */
+#define AARCH64_DEBUG_NUM_BPS(x) (((x) >> 0) & 0xff)
+#define AARCH64_DEBUG_NUM_WPS(x) (((x) >> 8) & 0xff)
+#define AARCH64_DEBUG_ARCH(x) (((x) >> 24) & 0xff)
+
+#define AARCH64_DEBUG_ARCH_V8 0x6
+
+/* On GNU/Linux, threads are implemented as pseudo-processes, in which
+   case we may be tracing more than one process at a time.  In that
+   case, inferior_ptid will contain the main process ID and the
+   individual thread (process) ID.  get_thread_id () is used to get
+   the thread id if it's available, and the process id otherwise.  */
+
+static int
+get_thread_id (ptid_t ptid)
+{
+  int tid = TIDGET (ptid);
+  if (0 == tid)
+    tid = PIDGET (ptid);
+  return tid;
+}
+
+#define GET_THREAD_ID(PTID)	get_thread_id (PTID)
+
+static int
+fetch_xregs_from_thread (gdb_gregset_t *regs)
+{
+  int ret, tid;
+  struct iovec iovec;
+  iovec.iov_base = regs;
+  iovec.iov_len = sizeof (* regs);
+
+  /* Get the thread id for the ptrace call.  */
+  tid = GET_THREAD_ID (inferior_ptid);
+
+  ret = ptrace (PTRACE_GETREGSET, tid, NT_PRSTATUS, &iovec);
+  return ret;
+}
+
+static void
+fetch_xregs (struct regcache *regcache)
+{
+  int ret, regno;
+  elf_gregset_t regs;
+
+  ret = fetch_xregs_from_thread (&regs);
+  if (ret < 0)
+    {
+      perror_with_name (_("Unable to fetch general registers."));
+      return;
+    }
+
+  for (regno = AARCH64_X0_REGNUM; regno <= AARCH64_CPSR_REGNUM; regno++)
+    regcache_raw_supply (regcache, regno,
+			 (char *) &regs[regno - AARCH64_X0_REGNUM]);
+}
+
+static void
+store_xregs (const struct regcache *regcache)
+{
+  int ret, regno, tid;
+  elf_gregset_t regs;
+  struct iovec iovec;
+
+  /* Get the thread id for the ptrace call.  */
+  tid = GET_THREAD_ID (inferior_ptid);
+
+  iovec.iov_base = &regs;
+  iovec.iov_len = sizeof (regs);
+
+  /* Fetch the general registers.  */
+  ret = ptrace (PTRACE_GETREGSET, tid, NT_PRSTATUS, &iovec);
+  if (ret < 0)
+    {
+      perror_with_name (_("Unable to fetch general registers."));
+      return;
+    }
+
+  for (regno = AARCH64_X0_REGNUM; regno <= AARCH64_CPSR_REGNUM; regno++)
+    {
+      if (REG_VALID == regcache_register_status (regcache, regno))
+	regcache_raw_collect (regcache, regno,
+			      (char *) &regs[regno - AARCH64_X0_REGNUM]);
+    }
+
+  ret = ptrace (PTRACE_SETREGSET, tid, NT_PRSTATUS, &iovec);
+  if (ret < 0)
+    {
+      perror_with_name (_("Unable to store general registers."));
+      return;
+    }
+}
+
+static int
+fetch_vregs_from_thread (gdb_fpregset_t *regs)
+{
+  int ret, tid;
+  struct iovec iovec;
+  iovec.iov_base = regs;
+  iovec.iov_len = sizeof (* regs);
+
+  /* Get the thread id for the ptrace call.  */
+  tid = GET_THREAD_ID (inferior_ptid);
+
+  ret = ptrace (PTRACE_GETREGSET, tid, NT_FPREGSET, &iovec);
+  return ret;
+}
+
+static void
+fetch_vregs (struct regcache *regcache)
+{
+  int ret, regno;
+  elf_fpregset_t regs;
+
+  ret = fetch_vregs_from_thread (&regs);
+  if (ret < 0)
+    {
+      perror_with_name (_("Unable to fetch FP/SIMD registers."));
+      return;
+    }
+
+  for (regno = AARCH64_V0_REGNUM; regno <= AARCH64_V31_REGNUM; regno++)
+    regcache_raw_supply (regcache, regno,
+			 (char *) &regs.vregs[regno - AARCH64_V0_REGNUM]);
+
+  regcache_raw_supply (regcache, AARCH64_FPSR_REGNUM, (char *) &regs.fpsr);
+  regcache_raw_supply (regcache, AARCH64_FPCR_REGNUM, (char *) &regs.fpcr);
+}
+
+static void
+store_vregs (const struct regcache *regcache)
+{
+  int ret, regno, tid;
+  elf_fpregset_t regs;
+  struct iovec iovec;
+
+  /* Get the thread id for the ptrace call.  */
+  tid = GET_THREAD_ID (inferior_ptid);
+
+  iovec.iov_base = &regs;
+  iovec.iov_len = sizeof (regs);
+
+  /* Fetch the FP/SIMD registers.  */
+  ret = ptrace (PTRACE_GETREGSET, tid, NT_FPREGSET, &iovec);
+  if (ret < 0)
+    {
+      perror_with_name (_("Unable to fetch FP/SIMD registers."));
+      return;
+    }
+
+  for (regno = AARCH64_V0_REGNUM; regno <= AARCH64_V31_REGNUM; regno++)
+    {
+      if (REG_VALID == regcache_register_status (regcache, regno))
+	regcache_raw_collect (regcache, regno,
+			      (char *) &regs.vregs[regno -
+						   AARCH64_V0_REGNUM]);
+    }
+
+  ret = ptrace (PTRACE_SETREGSET, tid, NT_FPREGSET, &iovec);
+  if (ret < 0)
+    {
+      perror_with_name (_("Unable to store FP/SIMD registers."));
+      return;
+    }
+}
+
+/* Fetch registers from the child process.  Fetch all registers if
+   regno == -1, otherwise fetch all general registers or all floating
+   point registers depending upon the value of regno.  */
+
+static void
+aarch64_linux_fetch_inferior_registers (struct target_ops *ops,
+					struct regcache *regcache, int regno)
+{
+  if (regno == -1)
+    {
+      fetch_xregs (regcache);
+      fetch_vregs (regcache);
+    }
+  else if (regno < AARCH64_V0_REGNUM)
+    fetch_xregs (regcache);
+  else
+    fetch_vregs (regcache);
+}
+
+/* Store registers back into the inferior.  Store all registers if
+   regno == -1, otherwise store all general registers or all floating
+   point registers depending upon the value of regno.  */
+static void
+aarch64_linux_store_inferior_registers (struct target_ops *ops,
+					struct regcache *regcache, int regno)
+{
+  if (regno == -1)
+    {
+      store_xregs (regcache);
+      store_vregs (regcache);
+    }
+  else if (regno < AARCH64_V0_REGNUM)
+    store_xregs (regcache);
+  else
+    store_vregs (regcache);
+}
+
+/* Wrapper functions for the standard regset handling, used by
+   thread debugging.  */
+void
+fill_gregset (const struct regcache *regcache,
+	      gdb_gregset_t *gregsetp, int regno)
+{
+  int ret;
+  ret = fetch_xregs_from_thread (gregsetp);
+  if (ret < 0)
+    {
+      perror_with_name (_("Unable to fetch general registers."));
+      return;
+    }
+}
+
+void
+supply_gregset (struct regcache *regcache, const gdb_gregset_t *gregsetp)
+{
+  fprintf (stderr, "Unimplemented: %s\n", __FUNCTION__);
+  exit (1);
+}
+
+void
+fill_fpregset (const struct regcache *regcache,
+	       gdb_fpregset_t *fpregsetp, int regno)
+{
+  int ret;
+  ret = fetch_vregs_from_thread (fpregsetp);
+  if (ret < 0)
+    {
+      perror_with_name (_("Unable to fetch V registers."));
+      return;
+    }
+}
+
+/* Fill GDB's register array with the floating-point register values
+   in *fpregsetp.  */
+
+void
+supply_fpregset (struct regcache *regcache, const gdb_fpregset_t *fpregsetp)
+{
+  fprintf (stderr, "Unimplemented: %s\n", __FUNCTION__);
+  exit (1);
+}
+
+static const struct target_desc *
+aarch64_linux_read_description (struct target_ops *ops)
+{
+  CORE_ADDR hwcap = 0;
+  const struct target_desc *result = NULL;
+
+  if (target_auxv_search (ops, AT_HWCAP, &hwcap) != 1)
+    {
+      return NULL;
+    }
+
+  initialize_tdesc_aarch64 ();
+  result = tdesc_aarch64;
+  return result;
+}
+
+static void
+aarch64_align_watchpoint (CORE_ADDR addr, int len, CORE_ADDR *aligned_addr_p,
+			  int *aligned_len_p, CORE_ADDR *next_addr_p,
+			  int *next_len_p)
+{
+  int aligned_len;
+  unsigned int offset;
+  CORE_ADDR aligned_addr;
+  const unsigned int alignment = AARCH64_HWP_ALIGNMENT;
+  const unsigned int max_wp_len = AARCH64_HWP_MAX_LEN_PER_REG;
+
+  /* As assumed by the algorithm.  */
+  gdb_assert (alignment == max_wp_len);
+
+  if (len <= 0)
+    return;
+
+  /* Address to be put into the hardware watchpoint value register must be
+     aligned.  */
+  offset = addr & (alignment - 1);
+  aligned_addr = addr - offset;
+
+  gdb_assert (offset >= 0 && offset < alignment);
+  gdb_assert (aligned_addr >= 0 && aligned_addr <= addr);
+  gdb_assert ((offset + len) > 0);
+
+  if ((offset + len) >= max_wp_len)
+    {
+      /* Need more than one watchpoint register; truncate this one at the
+         alignment boundary.  */
+      aligned_len = max_wp_len;
+      len -= (max_wp_len - offset);
+      addr += (max_wp_len - offset);
+      gdb_assert ((addr & (alignment - 1)) == 0);
+    }
+  else
+    {
+      /* Find the smallest valid length that is large enough to accommodate
+         this watchpoint.  */
+      static const unsigned char
+	aligned_len_array[AARCH64_HWP_MAX_LEN_PER_REG] =
+	{ 1, 2, 4, 4, 8, 8, 8, 8 };
+
+      aligned_len = aligned_len_array[offset + len - 1];
+      addr += len;
+      len = 0;
+    }
+
+  if (aligned_addr_p)
+    *aligned_addr_p = aligned_addr;
+  if (aligned_len_p)
+    *aligned_len_p = aligned_len;
+  if (next_addr_p)
+    *next_addr_p = addr;
+  if (next_len_p)
+    *next_len_p = len;
+
+  return;
+}
+
+/* Enum describing the different types of AArch64 hardware
+   break-/watch-points.  */
+typedef enum
+{
+  aarch64_hwbp_break = 0,
+  aarch64_hwbp_load = 1,
+  aarch64_hwbp_store = 2,
+  aarch64_hwbp_access = 3
+} aarch64_hwbp_type;
+
+/* Type describing an AArch64 Hardware Breakpoint Control register value.  */
+typedef unsigned int aarch64_hwbp_control_t;
+
+struct aarch64_linux_hw_breakpoint
+{
+  /* Address to break on, or being watched.  */
+  CORE_ADDR address;
+  /* Control register for break-/watch- point.  */
+  aarch64_hwbp_control_t control;
+};
+
+struct aarch64_linux_hwbp_cap
+{
+  int bp_count;
+  int wp_count;
+  int max_wp_length;
+};
+
+static const struct aarch64_linux_hwbp_cap *
+aarch64_linux_get_hwbp_cap (void)
+{
+  /* The info structure we return.  This function is called repeatedly, so we
+     return a pointer to a static variable which caches the result, rather than
+     calling ptrace repeatedly.  */
+  static struct aarch64_linux_hwbp_cap info;
+
+  /* Is INFO in a good state?  -1 means that no attempt has been made to
+     initialize INFO; 0 means an attempt has been made, but it failed; 1
+     means INFO is in an initialized state.  */
+  static int available = -1;
+
+  if (available == -1)
+    {
+      int tid;
+      unsigned int dr_info;
+
+      tid = GET_THREAD_ID (inferior_ptid);
+      if (ptrace (PTRACE_GETHBPREGS, tid, 0, &dr_info) == 0
+	  && AARCH64_DEBUG_ARCH (dr_info) == AARCH64_DEBUG_ARCH_V8)
+	{
+	  info.bp_count = AARCH64_DEBUG_NUM_BPS (dr_info);
+	  info.wp_count = AARCH64_DEBUG_NUM_WPS (dr_info);
+	  info.max_wp_length = AARCH64_HWP_MAX_LEN_PER_REG;
+	  available = 1;
+	}
+      else
+	available = 0;
+    }
+
+  return available == 1 ? &info : NULL;
+}
+
+/* How many hardware breakpoints are available?  */
+static int
+aarch64_linux_get_hw_breakpoint_count (void)
+{
+  const struct aarch64_linux_hwbp_cap *cap = aarch64_linux_get_hwbp_cap ();
+  return cap != NULL ? cap->bp_count : 0;
+}
+
+/* How many hardware watchpoints are available?  */
+static int
+aarch64_linux_get_hw_watchpoint_count (void)
+{
+  const struct aarch64_linux_hwbp_cap *cap = aarch64_linux_get_hwbp_cap ();
+  return cap != NULL ? cap->wp_count : 0;
+}
+
+/* Have we got a free break-/watch-point available for use?  Returns -1 if
+   there is not an appropriate resource available, otherwise returns 1.  */
+static int
+aarch64_linux_can_use_hw_breakpoint (int type, int cnt, int ot)
+{
+  if (type == bp_hardware_watchpoint || type == bp_read_watchpoint
+      || type == bp_access_watchpoint || type == bp_watchpoint)
+    {
+      if (cnt + ot > aarch64_linux_get_hw_watchpoint_count ())
+	return -1;
+    }
+  else if (type == bp_hardware_breakpoint)
+    {
+      if (cnt > aarch64_linux_get_hw_breakpoint_count ())
+	return -1;
+    }
+  else
+    {
+      gdb_assert (FALSE);
+      return -1;
+    }
+
+  return 1;
+}
+
+static aarch64_hwbp_control_t
+aarch64_hwbp_control_initialize (unsigned byte_address_select,
+				 aarch64_hwbp_type hwbp_type, int enable)
+{
+  gdb_assert ((byte_address_select & ~0xffU) == 0);
+  gdb_assert (hwbp_type != aarch64_hwbp_break
+	      || ((byte_address_select & 0xfU) != 0));
+
+  return (byte_address_select << 5) | (hwbp_type << 3) | (3 << 1) | enable;
+}
+
+/* Initialise the hardware breakpoint structure P.  The breakpoint will be
+   enabled, and will point to the placed address of BP_TGT.  */
+static void
+aarch64_linux_hw_breakpoint_initialize (struct gdbarch *gdbarch,
+					struct bp_target_info *bp_tgt,
+					struct aarch64_linux_hw_breakpoint *p)
+{
+  unsigned mask = 0xf;
+  CORE_ADDR address = bp_tgt->placed_address;
+
+  p->address = (unsigned int) (address & ~3);
+  p->control = aarch64_hwbp_control_initialize (mask, aarch64_hwbp_break, 1);
+}
+
+/* Get the AArch64 hardware breakpoint type from the RW value we're given when
+   asked to set a watchpoint.  */
+static aarch64_hwbp_type
+aarch64_linux_get_hwbp_type (int rw)
+{
+  if (rw == hw_read)
+    return aarch64_hwbp_load;
+  else if (rw == hw_write)
+    return aarch64_hwbp_store;
+  else
+    return aarch64_hwbp_access;
+}
+
+/* Initialize the hardware breakpoint structure P for a watchpoint at ADDR
+   of length LEN.  The type of watchpoint is given in RW.  */
+static void
+aarch64_linux_hw_watchpoint_initialize (CORE_ADDR address, int len, int rw,
+					struct aarch64_linux_hw_breakpoint *p)
+{
+  unsigned mask;
+
+  mask = (1 << len) - 1;
+  p->address = address;
+  p->control =
+    aarch64_hwbp_control_initialize (mask, aarch64_linux_get_hwbp_type (rw), 1);
+}
+
+typedef struct aarch64_linux_thread_points
+{
+  /* Thread ID.  */
+  int tid;
+  /* Breakpoints for thread.  */
+  struct aarch64_linux_hw_breakpoint *bpts;
+  /* Watchpoints for thread.  */
+  struct aarch64_linux_hw_breakpoint *wpts;
+} *aarch64_linux_thread_points_p;
+DEF_VEC_P (aarch64_linux_thread_points_p);
+
+/* Vector of hardware breakpoints for each thread.  */
+VEC (aarch64_linux_thread_points_p) *aarch64_threads = NULL;
+
+static struct aarch64_linux_thread_points *
+aarch64_linux_find_breakpoints_by_tid (int tid, int alloc_new)
+{
+  int i;
+  struct aarch64_linux_thread_points *t;
+
+  for (i = 0;
+       VEC_iterate (aarch64_linux_thread_points_p, aarch64_threads, i, t);
+       ++i)
+    {
+      if (t->tid == tid)
+	return t;
+    }
+
+  t = NULL;
+
+  if (alloc_new)
+    {
+      t = xmalloc (sizeof (struct aarch64_linux_thread_points));
+      t->tid = tid;
+      t->bpts = xzalloc (aarch64_linux_get_hw_breakpoint_count ()
+			 * sizeof (struct aarch64_linux_hw_breakpoint));
+      t->wpts = xzalloc (aarch64_linux_get_hw_watchpoint_count ()
+			 * sizeof (struct aarch64_linux_hw_breakpoint));
+      VEC_safe_push (aarch64_linux_thread_points_p, aarch64_threads, t);
+    }
+
+  return t;
+}
+
+static int
+aarch64_hwbp_control_is_enabled (aarch64_hwbp_control_t control)
+{
+  return control & 0x1;
+}
+
+/* Change a breakpoint control word so that it is in the disabled state.  */
+static aarch64_hwbp_control_t
+aarch64_hwbp_control_disable (aarch64_hwbp_control_t control)
+{
+  return control & ~0x1;
+}
+
+/* Are two break-/watch-points equal?  */
+static int
+aarch64_linux_hw_breakpoint_equal (const struct aarch64_linux_hw_breakpoint *p1,
+				   const struct aarch64_linux_hw_breakpoint *p2)
+{
+  return p1->address == p2->address && p1->control == p2->control;
+}
+
+static unsigned long
+dr_idx_to_ptrace_addr_reg_idx (int is_watchpoint, int idx)
+{
+  return is_watchpoint ? -((idx << 1) + 1) : (idx << 1) + 1;
+}
+
+static unsigned long
+dr_idx_to_ptrace_ctrl_reg_idx (int is_watchpoint, int idx)
+{
+  return is_watchpoint ? -((idx << 1) + 2) : (idx << 1) + 2;
+}
+
+/* Remove the hardware breakpoint (WATCHPOINT = 0) or watchpoint
+   (WATCHPOINT = 1) BPT for thread TID.  */
+static void
+aarch64_linux_remove_hw_breakpoint1
+  (const struct aarch64_linux_hw_breakpoint *bpt, int tid, int watchpoint)
+{
+  struct aarch64_linux_thread_points *t =
+    aarch64_linux_find_breakpoints_by_tid (tid, 0);
+
+  gdb_byte count, i;
+  struct aarch64_linux_hw_breakpoint *bpts;
+  long int dir;
+
+  gdb_assert (t != NULL);
+
+  if (watchpoint)
+    {
+      count = aarch64_linux_get_hw_watchpoint_count ();
+      bpts = t->wpts;
+      dir = -1;
+    }
+  else
+    {
+      count = aarch64_linux_get_hw_breakpoint_count ();
+      bpts = t->bpts;
+      dir = 1;
+    }
+
+  for (i = 0; i < count; ++i)
+    {
+      if (aarch64_linux_hw_breakpoint_equal (bpt, bpts + i))
+	{
+	  bpts[i].control = aarch64_hwbp_control_disable (bpts[i].control);
+	  if (ptrace (PTRACE_SETHBPREGS, tid,
+		      dr_idx_to_ptrace_ctrl_reg_idx (watchpoint, i),
+		      &bpts[i].control) < 0)
+	    perror_with_name (_("Unexpected error clearing breakpoint"));
+	  break;
+	}
+    }
+
+  gdb_assert (i != count);
+}
+
+/* Remove a hardware breakpoint.  */
+static int
+aarch64_linux_remove_hw_breakpoint (struct gdbarch *gdbarch,
+				    struct bp_target_info *bp_tgt)
+{
+  struct lwp_info *lp;
+  struct aarch64_linux_hw_breakpoint p;
+
+  if (aarch64_linux_get_hw_breakpoint_count () == 0)
+    return -1;
+
+  aarch64_linux_hw_breakpoint_initialize (gdbarch, bp_tgt, &p);
+  ALL_LWPS (lp)
+    aarch64_linux_remove_hw_breakpoint1 (&p, TIDGET (lp->ptid), 0);
+
+  return 0;
+}
+
+/* Insert the hardware breakpoint (WATCHPOINT = 0) or watchpoint (WATCHPOINT
+   = 1) BPT for thread TID.  */
+static void
+aarch64_linux_insert_hw_breakpoint1
+  (const struct aarch64_linux_hw_breakpoint *bpt, int tid, int watchpoint)
+{
+  struct aarch64_linux_thread_points *t =
+    aarch64_linux_find_breakpoints_by_tid (tid, 1);
+  gdb_byte count, i;
+  struct aarch64_linux_hw_breakpoint *bpts;
+  int dir;
+  CORE_ADDR aligned_address = bpt->address & ~(0x3);
+
+  gdb_assert (t != NULL);
+
+  if (watchpoint)
+    {
+      count = aarch64_linux_get_hw_watchpoint_count ();
+      bpts = t->wpts;
+      dir = -1;
+    }
+  else
+    {
+      count = aarch64_linux_get_hw_breakpoint_count ();
+      bpts = t->bpts;
+      dir = 1;
+    }
+
+  for (i = 0; i < count; ++i)
+    if (!aarch64_hwbp_control_is_enabled (bpts[i].control))
+      {
+	if (ptrace (PTRACE_SETHBPREGS, tid,
+		    dr_idx_to_ptrace_addr_reg_idx (watchpoint, i),
+		    &aligned_address) < 0)
+	  perror_with_name (_("Unexpected error setting breakpoint address"));
+
+	if (ptrace (PTRACE_SETHBPREGS, tid,
+		    dr_idx_to_ptrace_ctrl_reg_idx (watchpoint, i),
+		    &bpt->control) < 0)
+	  perror_with_name (_("Unexpected error setting breakpoint control"));
+
+	memcpy (bpts + i, bpt, sizeof (struct aarch64_linux_hw_breakpoint));
+	break;
+      }
+
+  gdb_assert (i != count);
+}
+
+
+/* Insert a Hardware breakpoint.  */
+static int
+aarch64_linux_insert_hw_breakpoint (struct gdbarch *gdbarch,
+				    struct bp_target_info *bp_tgt)
+{
+  struct lwp_info *lp;
+  struct aarch64_linux_hw_breakpoint p;
+
+  if (aarch64_linux_get_hw_breakpoint_count () == 0)
+    return -1;
+
+  aarch64_linux_hw_breakpoint_initialize (gdbarch, bp_tgt, &p);
+  ALL_LWPS (lp)
+    aarch64_linux_insert_hw_breakpoint1 (&p, TIDGET (lp->ptid), 0);
+
+  return 0;
+}
+
+/* Are we able to use a hardware watchpoint for the LEN bytes starting at
+   ADDR?  */
+static int
+aarch64_linux_region_ok_for_hw_watchpoint (CORE_ADDR addr, int len)
+{
+  const struct aarch64_linux_hwbp_cap *cap = aarch64_linux_get_hwbp_cap ();
+  CORE_ADDR max_wp_length, aligned_addr;
+
+  /* Can not set watchpoints for zero or negative lengths.  */
+  if (len <= 0)
+    return 0;
+
+  /* Need to be able to use the ptrace interface.  */
+  if (cap == NULL || cap->wp_count == 0)
+    return 0;
+
+  /* Test that the range [ADDR, ADDR + LEN) fits into the largest address
+     range covered by a watchpoint.  */
+  max_wp_length = (CORE_ADDR) cap->max_wp_length;
+  aligned_addr = addr & ~(max_wp_length - 1);
+
+  if (aligned_addr + max_wp_length < addr + len)
+    return 0;
+
+  /* The current ptrace interface can only handle watchpoints that are a
+     power of 2.  */
+  if ((len & (len - 1)) != 0)
+    return 0;
+
+  /* All tests passed so we must be able to set a watchpoint.  */
+  return 1;
+}
+
+static int
+aarch64_linux_insert_watchpoint (CORE_ADDR addr, int len, int rw,
+				 struct expression *cond)
+{
+  struct lwp_info *lp;
+  struct aarch64_linux_hw_breakpoint p;
+
+  if (aarch64_linux_get_hw_watchpoint_count () == 0)
+    return -1;
+
+  while (len > 0)
+    {
+      CORE_ADDR aligned_addr;
+      int aligned_len;
+
+      aarch64_align_watchpoint (addr, len, &aligned_addr, &aligned_len,
+				&addr, &len);
+
+      aarch64_linux_hw_watchpoint_initialize (aligned_addr, aligned_len,
+					      rw, &p);
+      ALL_LWPS (lp)
+	aarch64_linux_insert_hw_breakpoint1 (&p, TIDGET (lp->ptid), 1);
+    }
+
+  return 0;
+}
+
+static int
+aarch64_linux_remove_watchpoint (CORE_ADDR addr, int len, int rw,
+				 struct expression *cond)
+{
+  struct lwp_info *lp;
+  struct aarch64_linux_hw_breakpoint p;
+
+  if (aarch64_linux_get_hw_watchpoint_count () == 0)
+    return -1;
+
+  while (len > 0)
+    {
+      CORE_ADDR aligned_addr;
+      int aligned_len;
+
+      aarch64_align_watchpoint (addr, len, &aligned_addr, &aligned_len,
+				&addr, &len);
+      aarch64_linux_hw_watchpoint_initialize (aligned_addr, aligned_len,
+					      rw, &p);
+      ALL_LWPS (lp)
+	aarch64_linux_remove_hw_breakpoint1 (&p, TIDGET (lp->ptid), 1);
+    }
+
+  return 0;
+}
+
+/* What data address was the target stopped on accessing?  Store it in *ADDR_P.  */
+static int
+aarch64_linux_stopped_data_address (struct target_ops *target,
+				    CORE_ADDR *addr_p)
+{
+  siginfo_t siginfo;
+  int slot;
+
+  if (!linux_nat_get_siginfo (inferior_ptid, &siginfo))
+    return 0;
+
+  slot = siginfo.si_errno;
+
+  /* This must be a hardware breakpoint.  */
+  if (siginfo.si_signo != SIGTRAP
+      || (siginfo.si_code & 0xffff) != TRAP_HWBKPT)
+    return 0;
+
+  /* We must be able to set hardware watchpoints.  */
+  if (aarch64_linux_get_hw_watchpoint_count () == 0)
+    return 0;
+
+  /* If we are in a positive slot then we're looking at a breakpoint and not
+     a watchpoint.  */
+  if (slot >= 0)
+    return 0;
+
+  *addr_p = (CORE_ADDR) (uintptr_t) siginfo.si_addr;
+  return 1;
+}
+
+/* Has the target been stopped by hitting a watchpoint?  */
+static int
+aarch64_linux_stopped_by_watchpoint (void)
+{
+  CORE_ADDR addr;
+  return aarch64_linux_stopped_data_address (&current_target, &addr);
+}
+
+static int
+aarch64_linux_watchpoint_addr_within_range (struct target_ops *target,
+					    CORE_ADDR addr,
+					    CORE_ADDR start, int length)
+{
+  return start <= addr && start + length - 1 >= addr;
+}
+
+void _initialize_aarch64_linux_nat (void);
+
+void
+_initialize_aarch64_linux_nat (void)
+{
+  struct target_ops *t;
+
+  /* Fill in the generic GNU/Linux methods.  */
+  t = linux_target ();
+
+  /* Add our register access methods.  */
+  t->to_fetch_registers = aarch64_linux_fetch_inferior_registers;
+  t->to_store_registers = aarch64_linux_store_inferior_registers;
+
+  t->to_read_description = aarch64_linux_read_description;
+
+  t->to_can_use_hw_breakpoint = aarch64_linux_can_use_hw_breakpoint;
+  t->to_insert_hw_breakpoint = aarch64_linux_insert_hw_breakpoint;
+  t->to_remove_hw_breakpoint = aarch64_linux_remove_hw_breakpoint;
+  t->to_region_ok_for_hw_watchpoint =
+    aarch64_linux_region_ok_for_hw_watchpoint;
+  t->to_insert_watchpoint = aarch64_linux_insert_watchpoint;
+  t->to_remove_watchpoint = aarch64_linux_remove_watchpoint;
+  t->to_stopped_by_watchpoint = aarch64_linux_stopped_by_watchpoint;
+  t->to_stopped_data_address = aarch64_linux_stopped_data_address;
+  t->to_watchpoint_addr_within_range =
+    aarch64_linux_watchpoint_addr_within_range;
+
+  /* Register the target.  */
+  linux_nat_add_target (t);
+}
diff --git a/gdb/aarch64-linux-tdep.c b/gdb/aarch64-linux-tdep.c
new file mode 100644
index 0000000..e379068
--- /dev/null
+++ b/gdb/aarch64-linux-tdep.c
@@ -0,0 +1,272 @@
+/* Target-dependent code for GNU/Linux AArch64.
+
+   Copyright (C) 2009-2012 Free Software Foundation, Inc.
+   Contributed by ARM Ltd.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+
+#include "gdbarch.h"
+#include "glibc-tdep.h"
+#include "linux-tdep.h"
+#include "aarch64-tdep.h"
+#include "osabi.h"
+#include "solib-svr4.h"
+#include "symtab.h"
+#include "tramp-frame.h"
+#include "trad-frame.h"
+
+#include "inferior.h"
+#include "regcache.h"
+#include "regset.h"
+#include <sys/ptrace.h>
+
+/* The general-purpose regset consists of 31 X registers, plus SP, PC, PSTATE
+   and two extra pseudo 64-bit registers, as defined in the AArch64 port of
+   the Linux kernel.  */
+#define AARCH64_LINUX_SIZEOF_GREGSET  (36 * X_REGISTER_SIZE)
+/* The fp regset consists of 32 V registers, plus FPCR and FPSR which are 4
+   bytes wide each, and the whole structure is padded to 128 bit alignment.  */
+#define AARCH64_LINUX_SIZEOF_FPREGSET (33 * V_REGISTER_SIZE)
+
+/* Signal frame handling.
+
+      +----------+  ^
+      | saved lr |  |
+   +->| saved fp |--+
+   |  |          |
+   |  |          |
+   |  +----------+
+   |  | saved lr |
+   +--| saved fp |
+   ^  |          |
+   |  |          |
+   |  +----------+
+   ^  |          |
+   |  | signal   |
+   |  |          |
+   |  | saved lr |-->interrupted_function_pc
+   +--| saved fp |
+   |  +----------+
+   |  | saved lr |--> default_restorer (movz x8, NR_sys_rt_sigreturn; svc 0)
+   +--| saved fp |<- FP
+      |          |
+      |          |<- SP
+      +----------+
+
+   On signal delivery, the kernel will create a signal handler stack
+   frame and set up the return address in LR to point at the restorer
+   stub.  The signal stack frame is defined by:
+
+   struct rt_sigframe
+   {
+	siginfo_t info;
+	struct ucontext uc;
+   };
+
+   typedef struct
+   {
+     ...                                    128 bytes
+   } siginfo_t;
+
+   The ucontext has the following form:
+   struct ucontext {
+	unsigned long	  uc_flags;
+	struct ucontext  *uc_link;
+	stack_t		  uc_stack;
+	sigset_t	  uc_sigmask;
+	struct sigcontext uc_mcontext;
+  };
+
+  typedef struct sigaltstack {
+	void *ss_sp;
+	int ss_flags;
+	size_t ss_size;
+  } stack_t;
+
+  struct sigcontext {
+	unsigned long fault_address;
+	unsigned long regs[31];
+	unsigned long sp;	/ * 31 * /
+	unsigned long pc;	/ * 32 * /
+	unsigned long pstate;	/ * 33 * /
+	__u8 __reserved[4096]
+   };
+  The restorer stub will always have the form:
+
+  d2801168        movz    x8, #0x8b
+  d4000001        svc     #0x0
+
+  We detect signal frames by snooping the return code for the restorer
+  instruction sequence.
+
+  The handler then needs to recover the saved register set from
+  ucontext.uc_mcontext.  */
+
+static void
+aarch64_linux_sigframe_init (const struct tramp_frame *self,
+			     struct frame_info *this_frame,
+			     struct trad_frame_cache *this_cache,
+			     CORE_ADDR func);
+static const struct tramp_frame aarch64_linux_rt_sigframe = {
+  SIGTRAMP_FRAME,
+  4,
+  {
+    /* movz x8, 0x8b (S=1,o=10,h=0,i=0x8b,r=8)
+       Soo1 0010 1hhi iiii iiii iiii iiir rrrr  */
+    {0xd2801168, -1},
+
+    /* svc  0x0      (o=0, l=1)
+       1101 0100 oooi iiii iiii iiii iii0 00ll  */
+    {0xd4000001, -1},
+    {TRAMP_SENTINEL_INSN, -1}
+  },
+  aarch64_linux_sigframe_init
+};
+
+/* These magic numbers need to reflect the layout of the kernel
+   defined struct rt_sigframe and ucontext.  */
+#define AARCH64_SIGCONTEXT_REG_SIZE             8
+#define AARCH64_RT_SIGFRAME_UCONTEXT_OFFSET     128
+#define AARCH64_UCONTEXT_SIGCONTEXT_OFFSET      176
+#define AARCH64_SIGCONTEXT_XO_OFFSET            8
+
+static void
+aarch64_linux_sigframe_init (const struct tramp_frame *self,
+			     struct frame_info *this_frame,
+			     struct trad_frame_cache *this_cache,
+			     CORE_ADDR func)
+{
+  struct gdbarch *gdbarch = get_frame_arch (this_frame);
+  CORE_ADDR sp = get_frame_register_unsigned (this_frame, AARCH64_SP_REGNUM);
+  CORE_ADDR fp = get_frame_register_unsigned (this_frame, AARCH64_FP_REGNUM);
+  CORE_ADDR sigcontext_addr =
+    sp
+    + AARCH64_RT_SIGFRAME_UCONTEXT_OFFSET
+    + AARCH64_UCONTEXT_SIGCONTEXT_OFFSET;
+  int i;
+
+  for (i = 0; i < 31; i++)
+    {
+      trad_frame_set_reg_addr (this_cache,
+			       AARCH64_X0_REGNUM + i,
+			       sigcontext_addr + AARCH64_SIGCONTEXT_XO_OFFSET
+			       + i * AARCH64_SIGCONTEXT_REG_SIZE);
+    }
+
+  trad_frame_set_reg_addr (this_cache, AARCH64_FP_REGNUM, fp);
+  trad_frame_set_reg_addr (this_cache, AARCH64_LR_REGNUM, fp + 8);
+  trad_frame_set_reg_addr (this_cache, AARCH64_PC_REGNUM, fp + 8);
+
+  trad_frame_set_id (this_cache, frame_id_build (fp, func));
+}
+
+static void
+supply_gregset_from_core (const struct regset *regset,
+			  struct regcache *regcache,
+			  int regnum, const void *regbuf, size_t len)
+{
+  const gdb_byte *gregs = regbuf;
+  int regno;
+  CORE_ADDR reg_pc;
+
+  for (regno = AARCH64_X0_REGNUM; regno <= AARCH64_CPSR_REGNUM; regno++)
+    regcache_raw_supply (regcache, regno,
+			 gregs + X_REGISTER_SIZE
+			 * (regno - AARCH64_X0_REGNUM));
+
+  /* Note: We do not do anything with orig_X0 (the 35th entry in the register
+     buffer) at the moment.  */
+}
+
+static void
+supply_fpregset_from_core (const struct regset *regset,
+			   struct regcache *regcache,
+			   int regnum, const void *regbuf, size_t len)
+{
+  const gdb_byte *fregs = regbuf;
+  int regno;
+
+  for (regno = AARCH64_V0_REGNUM; regno <= AARCH64_V31_REGNUM; regno++)
+    regcache_raw_supply (regcache, regno,
+			 fregs + V_REGISTER_SIZE
+			 * (regno - AARCH64_V0_REGNUM));
+
+  regcache_raw_supply (regcache, AARCH64_FPSR_REGNUM,
+		       fregs + V_REGISTER_SIZE * 32);
+  regcache_raw_supply (regcache, AARCH64_FPCR_REGNUM,
+		       fregs + V_REGISTER_SIZE * 32 + 4);
+}
+
+static const struct regset *
+aarch64_linux_regset_from_core_section (struct gdbarch *gdbarch,
+					const char *sect_name,
+					size_t sect_size)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  if (strcmp (sect_name, ".reg") == 0
+      && sect_size == AARCH64_LINUX_SIZEOF_GREGSET)
+    {
+      if (tdep->gregset == NULL)
+	tdep->gregset = regset_alloc (gdbarch, supply_gregset_from_core,
+				      NULL);
+      return tdep->gregset;
+    }
+
+  if (strcmp (sect_name, ".reg2") == 0
+      && sect_size == AARCH64_LINUX_SIZEOF_FPREGSET)
+    {
+      if (tdep->fpregset == NULL)
+	tdep->fpregset = regset_alloc (gdbarch, supply_fpregset_from_core,
+				       NULL);
+      return tdep->fpregset;
+    }
+  return NULL;
+}
+
+static void
+aarch64_linux_init_abi (struct gdbarch_info info, struct gdbarch *gdbarch)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+  tdep->lowest_pc = 0x8000;
+
+  set_solib_svr4_fetch_link_map_offsets (gdbarch,
+					 svr4_lp64_fetch_link_map_offsets);
+
+  /* Shared library handling.  */
+  set_gdbarch_skip_trampoline_code (gdbarch, find_solib_trampoline_target);
+
+  set_gdbarch_get_siginfo_type (gdbarch, linux_get_siginfo_type);
+  tramp_frame_prepend_unwinder (gdbarch, &aarch64_linux_rt_sigframe);
+
+  /* Enable longjmp */
+  tdep->jb_pc = 11;
+
+  set_gdbarch_regset_from_core_section (gdbarch,
+					aarch64_linux_regset_from_core_section);
+}
+
+/* Provide a prototype to silence -Wmissing-prototypes.  */
+extern initialize_file_ftype _initialize_aarch64_linux_tdep;
+
+void
+_initialize_aarch64_linux_tdep (void)
+{
+  gdbarch_register_osabi (bfd_arch_aarch64, 0, GDB_OSABI_LINUX,
+			  aarch64_linux_init_abi);
+}
diff --git a/gdb/aarch64-newlib-tdep.c b/gdb/aarch64-newlib-tdep.c
new file mode 100644
index 0000000..4392524
--- /dev/null
+++ b/gdb/aarch64-newlib-tdep.c
@@ -0,0 +1,45 @@
+/* Target-dependent code for Newlib AArch64.
+
+   Copyright (C) 2011, 2012 Free Software Foundation, Inc.
+   Contributed by ARM Ltd.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+
+#include "gdbarch.h"
+#include "aarch64-tdep.h"
+#include "osabi.h"
+
+static void
+aarch64_newlib_init_abi (struct gdbarch_info info, struct gdbarch *gdbarch)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  /* Jump buffer - support for longjmp */
+  /* Offset of original PC in jump buffer (in registers).  */
+  tdep->jb_pc = 11;
+}
+
+/* Provide a prototype to silence -Wmissing-prototypes.  */
+extern initialize_file_ftype _initialize_aarch64_newlib_tdep;
+
+void
+_initialize_aarch64_newlib_tdep (void)
+{
+  gdbarch_register_osabi (bfd_arch_aarch64, 0, GDB_OSABI_NEWLIB,
+			  aarch64_newlib_init_abi);
+}
diff --git a/gdb/aarch64-tdep.c b/gdb/aarch64-tdep.c
new file mode 100644
index 0000000..008c6c1
--- /dev/null
+++ b/gdb/aarch64-tdep.c
@@ -0,0 +1,2678 @@
+/* Common target dependent code for GDB on AArch64 systems.
+
+   Copyright (C) 2009-2012 Free Software Foundation, Inc.
+   Contributed by ARM Ltd.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <inttypes.h>
+
+#include "defs.h"
+#include "frame.h"
+#include "inferior.h"
+#include "gdbcmd.h"
+#include "gdbcore.h"
+#include "gdb_string.h"
+#include "dis-asm.h"
+#include "regcache.h"
+#include "reggroups.h"
+#include "doublest.h"
+#include "value.h"
+#include "arch-utils.h"
+#include "osabi.h"
+#include "frame-unwind.h"
+#include "frame-base.h"
+#include "trad-frame.h"
+#include "objfiles.h"
+#include "dwarf2-frame.h"
+#include "gdbtypes.h"
+#include "prologue-value.h"
+#include "target-descriptions.h"
+#include "user-regs.h"
+#include "language.h"
+#include "infcall.h"
+
+#include "aarch64-tdep.h"
+
+#include "elf-bfd.h"
+#include "elf/aarch64.h"
+
+#include "gdb_assert.h"
+#include "vec.h"
+
+#include "features/aarch64.c"
+#include "features/aarch64-without-fpu.c"
+
+/* Pseudo register base numbers.  */
+#define AARCH64_Q0_REGNUM 0
+#define AARCH64_D0_REGNUM (AARCH64_Q0_REGNUM + 32)
+#define AARCH64_S0_REGNUM (AARCH64_D0_REGNUM + 32)
+#define AARCH64_H0_REGNUM (AARCH64_S0_REGNUM + 32)
+#define AARCH64_B0_REGNUM (AARCH64_H0_REGNUM + 32)
+
+/* Macros for swapping ints.  In the unlikely case that anybody else
+   needs these, move to a general header.  (A better solution might be
+   to define memory read routines that know whether they are reading
+   code or data.)  */
+
+#define SWAP_INT(x) \
+  (  (((x) & 0xff000000) >> 24)			\
+     | (((x) & 0x00ff0000) >> 8)		\
+     | (((x) & 0x0000ff00) << 8)		\
+     | (((x) & 0x000000ff) << 24))
+
+/* The standard register names, and all the valid aliases for them.  */
+static const struct
+{
+  const char *const name;
+  int regnum;
+} aarch64_register_aliases[] =
+{
+  /* 64-bit register names.  */
+  {"fp", AARCH64_FP_REGNUM},
+  {"lr", AARCH64_LR_REGNUM},
+  {"sp", AARCH64_SP_REGNUM},
+
+  /* 32-bit register names.  */
+  {"w0", AARCH64_X0_REGNUM + 0},
+  {"w1", AARCH64_X0_REGNUM + 1},
+  {"w2", AARCH64_X0_REGNUM + 2},
+  {"w3", AARCH64_X0_REGNUM + 3},
+  {"w4", AARCH64_X0_REGNUM + 4},
+  {"w5", AARCH64_X0_REGNUM + 5},
+  {"w6", AARCH64_X0_REGNUM + 6},
+  {"w7", AARCH64_X0_REGNUM + 7},
+  {"w8", AARCH64_X0_REGNUM + 8},
+  {"w9", AARCH64_X0_REGNUM + 9},
+  {"w10", AARCH64_X0_REGNUM + 10},
+  {"w11", AARCH64_X0_REGNUM + 11},
+  {"w12", AARCH64_X0_REGNUM + 12},
+  {"w13", AARCH64_X0_REGNUM + 13},
+  {"w14", AARCH64_X0_REGNUM + 14},
+  {"w15", AARCH64_X0_REGNUM + 15},
+  {"w16", AARCH64_X0_REGNUM + 16},
+  {"w17", AARCH64_X0_REGNUM + 17},
+  {"w18", AARCH64_X0_REGNUM + 18},
+  {"w19", AARCH64_X0_REGNUM + 19},
+  {"w20", AARCH64_X0_REGNUM + 20},
+  {"w21", AARCH64_X0_REGNUM + 21},
+  {"w22", AARCH64_X0_REGNUM + 22},
+  {"w23", AARCH64_X0_REGNUM + 23},
+  {"w24", AARCH64_X0_REGNUM + 24},
+  {"w25", AARCH64_X0_REGNUM + 25},
+  {"w26", AARCH64_X0_REGNUM + 26},
+  {"w27", AARCH64_X0_REGNUM + 27},
+  {"w28", AARCH64_X0_REGNUM + 28},
+  {"w29", AARCH64_X0_REGNUM + 29},
+  {"w30", AARCH64_X0_REGNUM + 30},
+
+  /*  specials */
+  {"ip0", AARCH64_X0_REGNUM + 16},
+  {"ip1", AARCH64_X0_REGNUM + 17}
+};
+
+struct required_register
+{
+  const char *name;
+  int number;
+};
+
+/* The required core 'R' registers.  */
+static const char *const aarch64_r_register_names[] = {
+  /* These registers must appear in consecutive RAW register number
+     order and they must begin with AARCH64_X0_REGNUM! */
+  "x0", "x1", "x2", "x3",
+  "x4", "x5", "x6", "x7",
+  "x8", "x9", "x10", "x11",
+  "x12", "x13", "x14", "x15",
+  "x16", "x17", "x18", "x19",
+  "x20", "x21", "x22", "x23",
+  "x24", "x25", "x26", "x27",
+  "x28", "x29", "x30", "sp",
+  "pc", "cpsr"
+};
+
+/* The 'V' registers.  */
+static const char *const aarch64_v_register_names[] = {
+  /* These registers must appear in consecutive RAW register number
+     order and they must begin with AARCH64_V0_REGNUM! */
+  "v0", "v1", "v2", "v3",
+  "v4", "v5", "v6", "v7",
+  "v8", "v9", "v10", "v11",
+  "v12", "v13", "v14", "v15",
+  "v16", "v17", "v18", "v19",
+  "v20", "v21", "v22", "v23",
+  "v24", "v25", "v26", "v27",
+  "v28", "v29", "v30", "v31",
+  "fpsr",
+  "fpcr"
+};
+
+struct aarch64_prologue_cache
+{
+  /* The stack pointer at the time this frame was created; i.e. the
+     caller's stack pointer when this function was called.  It is used
+     to identify this frame.  */
+  CORE_ADDR prev_sp;
+
+  /* The frame base for this frame is just prev_sp - frame size.
+     FRAMESIZE is the distance from the frame pointer to the
+     initial stack pointer.  */
+
+  int framesize;
+
+  /* The register used to hold the frame pointer for this frame.  */
+  int framereg;
+
+  /* Saved register offsets.  */
+  struct trad_frame_saved_reg *saved_regs;
+};
+
+
+static int aarch64_debug;
+
+static void
+show_aarch64_debug (struct ui_file *file, int from_tty,
+                    struct cmd_list_element *c, const char *value)
+{
+  fprintf_filtered (file, _("AArch64 debugging is %s.\n"), value);
+}
+
+/* Remove useless bits from addresses in a running program.  */
+
+static CORE_ADDR
+aarch64_addr_bits_remove (struct gdbarch *gdbarch, CORE_ADDR val)
+{
+  /* All instructions are 4-byte aligned.  */
+  return (val & ~(CORE_ADDR) 0x3);
+}
+
+static int32_t
+extract_signed_bitfield (uint32_t insn, unsigned width, unsigned offset)
+{
+  unsigned shift_l = sizeof (int32_t) * 8 - (offset + width);
+  unsigned shift_r = sizeof (int32_t) * 8 - width;
+
+  return ((int32_t) insn << shift_l) >> shift_r;
+}
+
+static int
+decode_masked_match (uint32_t insn, uint32_t mask, uint32_t pattern)
+{
+  return (insn & mask) == pattern;
+}
+
+static int
+decode_add_sub_imm (uint64_t addr, uint32_t insn, unsigned *rd, unsigned *rn,
+		    int32_t * imm)
+{
+  if ((insn & 0x9f000000) == 0x91000000)
+    {
+      unsigned shift;
+      unsigned op_is_sub;
+
+      *rd = (insn >> 0) & 0x1f;
+      *rn = (insn >> 5) & 0x1f;
+      *imm = (insn >> 10) & 0xfff;
+      shift = (insn >> 22) & 0x3;
+      op_is_sub = (insn >> 30) & 0x1;
+
+      switch (shift)
+	{
+	case 0:
+	  break;
+	case 1:
+	  *imm <<= 12;
+	  break;
+	default:
+	  /* UNDEFINED */
+	  return 0;
+	}
+
+      if (op_is_sub)
+	*imm = -*imm;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x add x%u, x%u, #%d\n",
+			    addr, insn, *rd, *rn, *imm);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_adrp (uint64_t addr, uint32_t insn, unsigned *rd)
+{
+  if (decode_masked_match (insn, 0x9f000000, 0x90000000))
+    {
+      *rd = (insn >> 0) & 0x1f;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x adrp x%u, #?\n",
+			    addr, insn, *rd);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_b (uint64_t addr, uint32_t insn, unsigned *link, int32_t * offset)
+{
+  /* b  0001 01ii iiii iiii iiii iiii iiii iiii */
+  /* bl 1001 01ii iiii iiii iiii iiii iiii iiii */
+  if (decode_masked_match (insn, 0x7c000000, 0x14000000))
+    {
+      *link = insn >> 31;
+      *offset = extract_signed_bitfield (insn, 26, 0) << 2;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x %s 0x%" PRIx64 "\n",
+			    addr, insn, *link ? "bl" : "b", addr + *offset);
+
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_bcond (uint64_t addr, uint32_t insn, unsigned *cond, int32_t * offset)
+{
+  if (decode_masked_match (insn, 0xfe000000, 0x54000000))
+    {
+      *cond = (insn >> 0) & 0xf;
+      *offset = extract_signed_bitfield (insn, 19, 5) << 2;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x b<%u> 0x%" PRIx64
+			    "\n", addr, insn, *cond, addr + *offset);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_br (uint64_t addr, uint32_t insn, unsigned *link, unsigned *rn)
+{
+  /*         8   4   0   6   2   8   4   0 */
+  /* blr  110101100011111100000000000rrrrr */
+  /* br   110101100001111100000000000rrrrr */
+  if (decode_masked_match (insn, 0xffdffc1f, 0xd61f0000))
+    {
+      *link = (insn >> 21) & 1;
+      *rn = (insn >> 5) & 0x1f;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x %s 0x%x\n",
+			    addr, insn, *link ? "blr" : "br", *rn);
+
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_cb (uint64_t addr,
+	   uint32_t insn, int *is64, unsigned *op, unsigned *rn,
+	   int32_t * offset)
+{
+  if (decode_masked_match (insn, 0x7e000000, 0x34000000))
+    {
+      /* cbz  T011 010o iiii iiii iiii iiii iiir rrrr */
+      /* cbnz T011 010o iiii iiii iiii iiii iiir rrrr */
+
+      *rn = (insn >> 0) & 0x1f;
+      *is64 = (insn >> 31) & 0x1;
+      *op = (insn >> 24) & 0x1;
+      *offset = extract_signed_bitfield (insn, 19, 5) << 2;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x %s 0x%" PRIx64 "\n",
+			    addr, insn, *op ? "cbnz" : "cbz", addr + *offset);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_eret (uint64_t addr, uint32_t insn)
+{
+  /* eret 1101 0110 1001 1111 0000 0011 1110 0000 */
+  if (insn == 0xd69f03e0)
+    {
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr, "decode: 0x%" PRIx64 " 0x%x eret\n",
+			    addr, insn);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_movz (uint64_t addr, uint32_t insn, unsigned *rd)
+{
+  if (decode_masked_match (insn, 0xff800000, 0x52800000))
+    {
+      *rd = (insn >> 0) & 0x1f;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x movz x%u, #?\n",
+			    addr, insn, *rd);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_orr_shifted_register_x (uint64_t addr,
+			       uint32_t insn, unsigned *rd, unsigned *rn,
+			       unsigned *rm, int32_t * imm)
+{
+  if (decode_masked_match (insn, 0xff200000, 0xaa000000))
+    {
+      *rd = (insn >> 0) & 0x1f;
+      *rn = (insn >> 5) & 0x1f;
+      *rm = (insn >> 16) & 0x1f;
+      *imm = (insn >> 10) & 0x3f;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64
+			    " 0x%x orr x%u, x%u, x%u, #%u\n", addr, insn, *rd,
+			    *rn, *rm, *imm);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_ret (uint64_t addr, uint32_t insn, unsigned *rn)
+{
+  if (decode_masked_match (insn, 0xfffffc1f, 0xd65f0000))
+    {
+      *rn = (insn >> 5) & 0x1f;
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x ret x%u\n", addr,
+			    insn, *rn);
+      return 1;
+    }
+  return 0;
+}
+
+/* Decode: stp rt,rt2, [rn, #imm] */
+static int
+decode_stp_offset (uint64_t addr,
+		   uint32_t insn,
+		   unsigned *rt1, unsigned *rt2, unsigned *rn, int32_t * imm)
+{
+  if (decode_masked_match (insn, 0xffc00000, 0xa9000000))
+    {
+      *rt1 = (insn >> 0) & 0x1f;
+      *rn = (insn >> 5) & 0x1f;
+      *rt2 = (insn >> 10) & 0x1f;
+      *imm = extract_signed_bitfield (insn, 7, 15);
+      *imm <<= 3;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64
+			    " 0x%x stp x%u, x%u, [x%u + #%d]\n", addr, insn,
+			    *rt1, *rt2, *rn, *imm);
+      return 1;
+    }
+  return 0;
+}
+
+/* Decode: stp rt,rt2, [rn, #imm]! */
+static int
+decode_stp_offset_wb (uint64_t addr,
+		      uint32_t insn,
+		      unsigned *rt1, unsigned *rt2, unsigned *rn,
+		      int32_t * imm)
+{
+  if (decode_masked_match (insn, 0xffc00000, 0xa9800000))
+    {
+      *rt1 = (insn >> 0) & 0x1f;
+      *rn = (insn >> 5) & 0x1f;
+      *rt2 = (insn >> 10) & 0x1f;
+      *imm = extract_signed_bitfield (insn, 7, 15);
+      *imm <<= 3;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64
+			    " 0x%x stp x%u, x%u, [x%u + #%d]!\n", addr, insn,
+			    *rt1, *rt2, *rn, *imm);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_stur (uint64_t addr, uint32_t insn, int *is64, unsigned *rt,
+	     unsigned *rn, int32_t * imm)
+{
+  if (decode_masked_match (insn, 0xbfe00c00, 0xb8000000))
+    {
+      *is64 = (insn >> 30) & 1;
+      *rt = (insn >> 0) & 0x1f;
+      *rn = (insn >> 5) & 0x1f;
+      *imm = extract_signed_bitfield (insn, 9, 12);
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64
+			    " 0x%x stur %c%u, [x%u + #%d]\n", addr, insn,
+			    *is64 ? 'x' : 'w', *rt, *rn, *imm);
+      return 1;
+    }
+  return 0;
+}
+
+static int
+decode_tb (uint64_t addr,
+	   uint32_t insn, unsigned *op, unsigned *bit, unsigned *rn,
+	   int32_t * offset)
+{
+  if (decode_masked_match (insn, 0x7e000000, 0x36000000))
+    {
+      /* tbz  b011 0110 bbbb biii iiii iiii iiir rrrr */
+      /* tbnz B011 0111 bbbb biii iiii iiii iiir rrrr */
+
+      *rn = (insn >> 0) & 0x1f;
+      *op = insn & (1 << 24);
+      *bit = ((insn >> (31 - 4)) & 0x20) | ((insn >> 19) & 0x1f);
+      *offset = extract_signed_bitfield (insn, 14, 5) << 2;
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stderr,
+			    "decode: 0x%" PRIx64 " 0x%x %s x%u, #%u, 0x%"
+			    PRIx64 "\n", addr, insn, *op ? "tbnz" : "tbz",
+			    *rn, *bit, addr + *offset);
+      return 1;
+    }
+  return 0;
+}
+
+/* Analyze a prologue, looking for a recognizable stack frame
+   and frame pointer.  Scan until we encounter a store that could
+   clobber the stack frame unexpectedly, or an unknown instruction.  */
+static CORE_ADDR
+aarch64_analyze_prologue (struct gdbarch *gdbarch,
+			  CORE_ADDR start, CORE_ADDR limit,
+			  struct aarch64_prologue_cache *cache)
+{
+  enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
+  int i;
+  pv_t regs[32];
+  struct pv_area *stack;
+  struct cleanup *back_to;
+
+  for (i = 0; i < 32; i++)
+    regs[i] = pv_register (i, 0);
+  stack = make_pv_area (AARCH64_SP_REGNUM, gdbarch_addr_bit (gdbarch));
+  back_to = make_cleanup_free_pv_area (stack);
+
+  for (; start < limit; start += 4)
+    {
+      uint32_t insn;
+      unsigned rd;
+      unsigned rn;
+      unsigned rm;
+      unsigned rt;
+      unsigned rt1;
+      unsigned rt2;
+      int op_is_sub;
+      int32_t imm;
+      unsigned cond;
+      unsigned is64;
+      unsigned is_link;
+      unsigned op;
+      unsigned bit;
+      int32_t offset;
+
+      insn = read_memory_unsigned_integer (start, 4, byte_order_for_code);
+
+      if (decode_add_sub_imm (start, insn, &rd, &rn, &imm))
+	{
+	  regs[rd] = pv_add_constant (regs[rn], imm);
+	}
+      else if (decode_adrp (start, insn, &rd))
+	{
+	  regs[rd] = pv_unknown ();
+	}
+      else if (decode_b (start, insn, &is_link, &offset))
+	{
+	  /* Stop analysis on branch.  */
+	  break;
+	}
+      else if (decode_bcond (start, insn, &cond, &offset))
+	{
+	  /* Stop analysis on branch.  */
+	  break;
+	}
+      else if (decode_br (start, insn, &is_link, &rn))
+	{
+	  /* Stop analysis on branch.  */
+	  break;
+	}
+      else if (decode_cb (start, insn, &is64, &op, &rn, &offset))
+	{
+	  /* Stop analysis on branch.  */
+	  break;
+	}
+      else if (decode_eret (start, insn))
+	{
+	  /* Stop analysis on branch.  */
+	  break;
+	}
+      else if (decode_movz (start, insn, &rd))
+	{
+	  regs[rd] = pv_unknown ();
+	}
+      else
+	if (decode_orr_shifted_register_x (start, insn, &rd, &rn, &rm, &imm))
+	{
+	  if (imm == 0 && rn == 31)
+	    {
+	      regs[rd] = regs[rm];
+	    }
+	  else
+	    {
+	      if (aarch64_debug)
+		fprintf_unfiltered (gdb_stderr,
+				    "aarch64: prologue analysis gave up addr=0x%"
+				    PRIx64 " "
+				    "opcode=0x%x (orr x register)\n", start,
+				    insn);
+	      break;
+	    }
+	}
+      else if (decode_ret (start, insn, &rn))
+	{
+	  /* Stop analysis on branch.  */
+	  break;
+	}
+      else if (decode_stur (start, insn, &is64, &rt, &rn, &offset))
+	{
+	  pv_area_store (stack, pv_add_constant (regs[rn], offset),
+			 is64 ? 8 : 4, regs[rt]);
+	}
+      else if (decode_stp_offset (start, insn, &rt1, &rt2, &rn, &imm))
+	{
+	  /* If recording this store would invalidate the store area
+	     (perhaps because rn is not known) then we should abandon
+	     further prologue analysis.  */
+	  if (pv_area_store_would_trash
+	      (stack, pv_add_constant (regs[rn], imm))
+	      || pv_area_store_would_trash (stack,
+					    pv_add_constant (regs[rn],
+							     imm + 8)))
+	    break;
+
+	  pv_area_store (stack, pv_add_constant (regs[rn], imm), 8,
+			 regs[rt1]);
+	  pv_area_store (stack, pv_add_constant (regs[rn], imm + 8), 8,
+			 regs[rt2]);
+	}
+      else if (decode_stp_offset_wb (start, insn, &rt1, &rt2, &rn, &imm))
+	{
+	  /* If recording this store would invalidate the store area
+	     (perhaps because rn is not known) then we should abandon
+	     further prologue analysis.  */
+	  if (pv_area_store_would_trash (stack,
+					 pv_add_constant (regs[rn], imm)) ||
+	      pv_area_store_would_trash (stack,
+					 pv_add_constant (regs[rn], imm + 8)))
+	    break;
+
+	  pv_area_store (stack, pv_add_constant (regs[rn], imm), 8,
+			 regs[rt1]);
+	  pv_area_store (stack, pv_add_constant (regs[rn], imm + 8), 8,
+			 regs[rt2]);
+	  regs[rn] = pv_add_constant (regs[rn], imm);
+	}
+      else if (decode_tb (start, insn, &op, &bit, &rn, &offset))
+	{
+	  /* Stop analysis on branch.  */
+	  break;
+	}
+      else
+	{
+	  if (aarch64_debug)
+	    fprintf_unfiltered (gdb_stderr,
+				"aarch64: prologue analysis gave up addr=0x%"
+				PRIx64 " opcode=0x%x\n", start, insn);
+	  break;
+	}
+    }
+
+  if (cache == NULL)
+    {
+      do_cleanups (back_to);
+      return start;
+    }
+
+  if (pv_is_register (regs[AARCH64_FP_REGNUM], AARCH64_SP_REGNUM))
+    {
+      /* Frame pointer is fp.  Frame size is constant.  */
+      cache->framereg = AARCH64_FP_REGNUM;
+      cache->framesize = -regs[AARCH64_FP_REGNUM].k;
+    }
+  else if (pv_is_register (regs[AARCH64_SP_REGNUM], AARCH64_SP_REGNUM))
+    {
+      /* Try the stack pointer.  Frame size is constant.  */
+      cache->framesize = -regs[AARCH64_SP_REGNUM].k;
+      cache->framereg = AARCH64_SP_REGNUM;
+    }
+  else
+    {
+      /* We're just out of luck.  We don't know where the frame is.  */
+      cache->framereg = -1;
+      cache->framesize = 0;
+    }
+
+  for (i = 0; i < 32; i++)
+    {
+      CORE_ADDR offset;
+      if (pv_area_find_reg (stack, gdbarch, i, &offset))
+	cache->saved_regs[i].addr = offset;
+    }
+
+  do_cleanups (back_to);
+  return start;
+}
+
+/* Advance the PC across any function entry prologue instructions to
+   reach some "real" code.  */
+
+static CORE_ADDR
+aarch64_skip_prologue (struct gdbarch *gdbarch, CORE_ADDR pc)
+{
+  unsigned long inst;
+  CORE_ADDR skip_pc;
+  CORE_ADDR func_addr, limit_pc;
+  struct symtab_and_line sal;
+
+  /* If we're in a dummy frame, don't even try to skip the prologue.  */
+  if (deprecated_pc_in_call_dummy (gdbarch, pc))
+    return pc;
+
+  /* See if we can determine the end of the prologue via the symbol table.
+     If so, then return either PC, or the PC after the prologue, whichever
+     is greater.  */
+  if (find_pc_partial_function (pc, NULL, &func_addr, NULL))
+    {
+      CORE_ADDR post_prologue_pc =
+	skip_prologue_using_sal (gdbarch, func_addr);
+      if (post_prologue_pc != 0)
+	return max (pc, post_prologue_pc);
+    }
+
+  /* Can't determine prologue from the symbol table, need to examine
+     instructions.  */
+
+  /* Find an upper limit on the function prologue using the debug
+     information.  If the debug information could not be used to provide
+     that bound, then use an arbitrary large number as the upper bound.  */
+  limit_pc = skip_prologue_using_sal (gdbarch, pc);
+  if (limit_pc == 0)
+    limit_pc = pc + 128;	/* Magic.  */
+
+  /* Try disassembling prologue.  */
+  return aarch64_analyze_prologue (gdbarch, pc, limit_pc, NULL);
+}
+
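+/* Scan the prologue of the function containing THIS_FRAME's PC and
+   fill in CACHE with the frame register, frame size and saved
+   register locations discovered by the scan.  */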
+static void
+aarch64_scan_prologue (struct frame_info *this_frame,
+		       struct aarch64_prologue_cache *cache)
+{
+  CORE_ADDR block_addr = get_frame_address_in_block (this_frame);
+  CORE_ADDR prologue_start;
+  CORE_ADDR prologue_end;
+  CORE_ADDR prev_pc = get_frame_pc (this_frame);
+  struct gdbarch *gdbarch = get_frame_arch (this_frame);
+
+  /* Assume we do not find a frame.  */
+  cache->framereg = -1;
+  cache->framesize = 0;
+
+  if (find_pc_partial_function (block_addr, NULL, &prologue_start,
+				&prologue_end))
+    {
+      struct symtab_and_line sal = find_pc_line (prologue_start, 0);
+
+      if (sal.line == 0)	/* no line info, use current PC  */
+	prologue_end = prev_pc;
+      else if (sal.end < prologue_end)	/* next line begins after fn end */
+	prologue_end = sal.end;	/* (probably means no prologue)  */
+
+      prologue_end = min (prologue_end, prev_pc);
+      aarch64_analyze_prologue (gdbarch, prologue_start, prologue_end, cache);
+    }
+  else
+    {
+      CORE_ADDR frame_loc;
+      LONGEST saved_fp;
+      LONGEST saved_lr;
+      enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+      frame_loc = get_frame_register_unsigned (this_frame, AARCH64_FP_REGNUM);
+      if (frame_loc == 0)
+	return;
+
+      cache->framereg = AARCH64_FP_REGNUM;
+      cache->framesize = 16;
+      cache->saved_regs[29].addr = 0;
+      cache->saved_regs[30].addr = 8;
+    }
+}
+
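+/* Allocate and build a prologue cache for THIS_FRAME, converting the
+   register save offsets recorded by the prologue analysis into
+   absolute addresses.  */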
+static struct aarch64_prologue_cache *
+aarch64_make_prologue_cache (struct frame_info *this_frame)
+{
+  struct aarch64_prologue_cache *cache;
+  CORE_ADDR unwound_fp;
+  int reg;
+
+  cache = FRAME_OBSTACK_ZALLOC (struct aarch64_prologue_cache);
+  cache->saved_regs = trad_frame_alloc_saved_regs (this_frame);
+
+  aarch64_scan_prologue (this_frame, cache);
+
+  if (cache->framereg == -1)
+    return cache;
+
+  unwound_fp = get_frame_register_unsigned (this_frame, cache->framereg);
+  if (unwound_fp == 0)
+    return cache;
+
+  cache->prev_sp = unwound_fp + cache->framesize;
+
+  /* Calculate actual addresses of saved registers using offsets
+     determined by aarch64_analyze_prologue.  */
+  for (reg = 0; reg < gdbarch_num_regs (get_frame_arch (this_frame)); reg++)
+    if (trad_frame_addr_p (cache->saved_regs, reg))
+      cache->saved_regs[reg].addr += cache->prev_sp;
+
+  return cache;
+}
+
+/* Our frame ID for a normal frame is the current function's starting PC
+   and the caller's SP when we were called.  */
+static void
+aarch64_prologue_this_id (struct frame_info *this_frame,
+			  void **this_cache, struct frame_id *this_id)
+{
+  struct aarch64_prologue_cache *cache;
+  struct frame_id id;
+  CORE_ADDR pc, func;
+
+  if (*this_cache == NULL)
+    *this_cache = aarch64_make_prologue_cache (this_frame);
+  cache = *this_cache;
+
+  /* This is meant to halt the backtrace at "_start".  */
+  pc = get_frame_pc (this_frame);
+  if (pc <= gdbarch_tdep (get_frame_arch (this_frame))->lowest_pc)
+    return;
+
+  /* If we've hit a wall, stop.  */
+  if (cache->prev_sp == 0)
+    return;
+
+  func = get_frame_func (this_frame);
+  id = frame_id_build (cache->prev_sp, func);
+  *this_id = id;
+}
+
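+/* Implement the prev_register method of the prologue unwinder:
+   return the value register PREV_REGNUM had in the previous frame.  */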
+static struct value *
+aarch64_prologue_prev_register (struct frame_info *this_frame,
+				void **this_cache, int prev_regnum)
+{
+  struct gdbarch *gdbarch = get_frame_arch (this_frame);
+  struct aarch64_prologue_cache *cache;
+
+  if (*this_cache == NULL)
+    *this_cache = aarch64_make_prologue_cache (this_frame);
+  cache = *this_cache;
+
+  /* If we are asked to unwind the PC, then we need to return the LR
+     instead.  The prologue may save PC, but it will point into this
+     frame's prologue, not the next frame's resume location.  */
+  if (prev_regnum == AARCH64_PC_REGNUM)
+    {
+      CORE_ADDR lr;
+      lr = frame_unwind_register_unsigned (this_frame, AARCH64_LR_REGNUM);
+      return frame_unwind_got_constant (this_frame, prev_regnum,
+					aarch64_addr_bits_remove (gdbarch,
+								  lr));
+    }
+
+  /* SP is generally not saved to the stack, but this frame is
+     identified by the next frame's stack pointer at the time of the call.
+     The value was already reconstructed into PREV_SP.  */
+  /*
+   *     +----------+  ^
+   *     | saved lr |  |
+   *  +->| saved fp |--+
+   *  |  |          |
+   *  |  |          |     <- Previous SP
+   *  |  +----------+
+   *  |  | saved lr |
+   *  +--| saved fp |<- FP
+   *     |          |
+   *     |          |<- SP
+   *     +----------+
+   */
+  if (prev_regnum == AARCH64_SP_REGNUM)
+    return frame_unwind_got_constant (this_frame, prev_regnum,
+				      cache->prev_sp);
+
+  return trad_frame_get_prev_register (this_frame, cache->saved_regs,
+				       prev_regnum);
+}
+
+struct frame_unwind aarch64_prologue_unwind = {
+  NORMAL_FRAME,
+  default_frame_unwind_stop_reason,
+  aarch64_prologue_this_id,
+  aarch64_prologue_prev_register,
+  NULL,
+  default_frame_sniffer
+};
+
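+/* Allocate a prologue cache for a stub frame.  The previous SP is
+   simply the current frame's stack pointer.  */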
+static struct aarch64_prologue_cache *
+aarch64_make_stub_cache (struct frame_info *this_frame)
+{
+  int reg;
+  struct aarch64_prologue_cache *cache;
+  CORE_ADDR unwound_fp;
+
+  cache = FRAME_OBSTACK_ZALLOC (struct aarch64_prologue_cache);
+  cache->saved_regs = trad_frame_alloc_saved_regs (this_frame);
+
+  cache->prev_sp =
+    get_frame_register_unsigned (this_frame, AARCH64_SP_REGNUM);
+
+  return cache;
+}
+
+/* Our frame ID for a stub frame is the current SP and LR.  */
+static void
+aarch64_stub_this_id (struct frame_info *this_frame,
+		      void **this_cache, struct frame_id *this_id)
+{
+  struct aarch64_prologue_cache *cache;
+
+  if (*this_cache == NULL)
+    *this_cache = aarch64_make_stub_cache (this_frame);
+  cache = *this_cache;
+
+  *this_id = frame_id_build (cache->prev_sp, get_frame_pc (this_frame));
+}
+
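+/* Return non-zero if THIS_FRAME should be handled by the stub
+   unwinder: the PC lies in the PLT or the code at the PC cannot be
+   read.  */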
+static int
+aarch64_stub_unwind_sniffer (const struct frame_unwind *self,
+			     struct frame_info *this_frame,
+			     void **this_prologue_cache)
+{
+  CORE_ADDR addr_in_block;
+  char dummy[4];
+
+  addr_in_block = get_frame_address_in_block (this_frame);
+  if (in_plt_section (addr_in_block, NULL)
+      || target_read_memory (get_frame_pc (this_frame), dummy, 4) != 0)
+    return 1;
+
+  return 0;
+}
+
+struct frame_unwind aarch64_stub_unwind = {
+  NORMAL_FRAME,
+  default_frame_unwind_stop_reason,
+  aarch64_stub_this_id,
+  aarch64_prologue_prev_register,
+  NULL,
+  aarch64_stub_unwind_sniffer
+};
+
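+/* Return the frame base address of THIS_FRAME: the previous stack
+   pointer minus the frame size established by the prologue.  */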
+static CORE_ADDR
+aarch64_normal_frame_base (struct frame_info *this_frame, void **this_cache)
+{
+  struct aarch64_prologue_cache *cache;
+
+  if (*this_cache == NULL)
+    *this_cache = aarch64_make_prologue_cache (this_frame);
+  cache = *this_cache;
+
+  return cache->prev_sp - cache->framesize;
+}
+
+struct frame_base aarch64_normal_base = {
+  &aarch64_prologue_unwind,
+  aarch64_normal_frame_base,
+  aarch64_normal_frame_base,
+  aarch64_normal_frame_base
+};
+
+/* Assuming THIS_FRAME is a dummy, return the frame ID of that
+   dummy frame.  The frame ID's base needs to match the TOS value
+   saved by save_dummy_frame_tos () and returned from
+   aarch64_push_dummy_call, and the PC needs to match the dummy frame's
+   breakpoint.  */
+static struct frame_id
+aarch64_dummy_id (struct gdbarch *gdbarch, struct frame_info *this_frame)
+{
+  return frame_id_build (get_frame_register_unsigned (this_frame,
+						      AARCH64_SP_REGNUM),
+			 get_frame_pc (this_frame));
+}
+
+/* Given THIS_FRAME, find the previous frame's resume PC (which will
+   be used to construct the previous frame's ID, after looking up the
+   containing function).  */
+static CORE_ADDR
+aarch64_unwind_pc (struct gdbarch *gdbarch, struct frame_info *this_frame)
+{
+  CORE_ADDR pc;
+  pc = frame_unwind_register_unsigned (this_frame, AARCH64_PC_REGNUM);
+  return aarch64_addr_bits_remove (gdbarch, pc);
+}
+
+static CORE_ADDR
+aarch64_unwind_sp (struct gdbarch *gdbarch, struct frame_info *this_frame)
+{
+  return frame_unwind_register_unsigned (this_frame, AARCH64_SP_REGNUM);
+}
+
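+/* Special handling used by the DWARF2 unwinder to recover the value
+   of the PC in the previous frame from the saved LR.  */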
+static struct value *
+aarch64_dwarf2_prev_register (struct frame_info *this_frame,
+			      void **this_cache, int regnum)
+{
+  struct gdbarch *gdbarch = get_frame_arch (this_frame);
+  CORE_ADDR lr, cpsr;
+
+  switch (regnum)
+    {
+    case AARCH64_PC_REGNUM:
+      /* The PC is normally copied from the return column, which
+         describes saves of LR.  */
+      lr = frame_unwind_register_unsigned (this_frame, AARCH64_LR_REGNUM);
+      return frame_unwind_got_constant (this_frame, regnum,
+					aarch64_addr_bits_remove (gdbarch,
+								  lr));
+
+    default:
+      internal_error (__FILE__, __LINE__,
+		      _("Unexpected register %d"), regnum);
+    }
+}
+
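+/* Set up the DWARF2 frame unwind rules specific to AArch64: the PC
+   is recovered via aarch64_dwarf2_prev_register and the SP is
+   defined to be the CFA.  */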
+static void
+aarch64_dwarf2_frame_init_reg (struct gdbarch *gdbarch, int regnum,
+			       struct dwarf2_frame_state_reg *reg,
+			       struct frame_info *this_frame)
+{
+  switch (regnum)
+    {
+    case AARCH64_PC_REGNUM:
+      reg->how = DWARF2_FRAME_REG_FN;
+      reg->loc.fn = aarch64_dwarf2_prev_register;
+      break;
+    case AARCH64_SP_REGNUM:
+      reg->how = DWARF2_FRAME_REG_CFA;
+      break;
+    }
+}
+
+/* When arguments must be pushed onto the stack, they go on in reverse
+   order.  The code below implements a FILO (stack) to do this.  */
+struct stack_item
+{
+  int len;
+  struct stack_item *prev;
+  void *data;
+};
+
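+/* Push LEN bytes from CONTENTS onto the stack item list headed by
+   PREV and return the new head of the list.  */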
+static struct stack_item *
+push_stack_item (struct stack_item *prev, const bfd_byte *contents, int len)
+{
+  struct stack_item *si;
+  si = xmalloc (sizeof (struct stack_item));
+  si->data = xmalloc (len);
+  si->len = len;
+  si->prev = prev;
+  memcpy (si->data, contents, len);
+  return si;
+}
+
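+/* Free the head of the stack item list SI and return the new head.  */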
+static struct stack_item *
+pop_stack_item (struct stack_item *si)
+{
+  struct stack_item *dead = si;
+  si = si->prev;
+  xfree (dead->data);
+  xfree (dead);
+  return si;
+}
+
+/* Return the alignment (in bytes) of the given type.  */
+static int
+aarch64_type_align (struct type *t)
+{
+  int n;
+  int align;
+  int falign;
+
+  t = check_typedef (t);
+  switch (TYPE_CODE (t))
+    {
+    default:
+      /* Should never happen.  */
+      internal_error (__FILE__, __LINE__, _("unknown type alignment"));
+      return 4;
+
+    case TYPE_CODE_PTR:
+    case TYPE_CODE_ENUM:
+    case TYPE_CODE_INT:
+    case TYPE_CODE_FLT:
+    case TYPE_CODE_SET:
+    case TYPE_CODE_RANGE:
+    case TYPE_CODE_BITSTRING:
+    case TYPE_CODE_REF:
+    case TYPE_CODE_CHAR:
+    case TYPE_CODE_BOOL:
+      return TYPE_LENGTH (t);
+
+    case TYPE_CODE_ARRAY:
+    case TYPE_CODE_COMPLEX:
+      return aarch64_type_align (TYPE_TARGET_TYPE (t));
+
+    case TYPE_CODE_STRUCT:
+    case TYPE_CODE_UNION:
+      align = 1;
+      for (n = 0; n < TYPE_NFIELDS (t); n++)
+	{
+	  falign = aarch64_type_align (TYPE_FIELD_TYPE (t, n));
+	  if (falign > align)
+	    align = falign;
+	}
+      return align;
+    }
+}
+
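+/* Return non-zero if TY is treated as a Homogeneous Floating-point
+   Aggregate (HFA) for the purposes of argument and return value
+   passing.  */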
+static int
+is_hfa (struct type *ty)
+{
+  switch (TYPE_CODE (ty))
+    {
+    case TYPE_CODE_ARRAY:
+      {
+	struct type *target_ty = TYPE_TARGET_TYPE (ty);
+	if (TYPE_CODE (target_ty) == TYPE_CODE_FLT && TYPE_LENGTH (ty) <= 4)
+	  return 1;
+	break;
+      }
+
+    case TYPE_CODE_UNION:
+    case TYPE_CODE_STRUCT:
+      {
+	if (TYPE_NFIELDS (ty) > 0 && TYPE_NFIELDS (ty) <= 4)
+	  {
+	    struct type *member0_type;
+	    member0_type = check_typedef (TYPE_FIELD_TYPE (ty, 0));
+	    if (TYPE_CODE (member0_type) == TYPE_CODE_FLT)
+	      {
+		int i;
+		for (i = 0; i < TYPE_NFIELDS (ty); i++)
+		  {
+		    struct type *member1_type;
+		    member1_type = check_typedef (TYPE_FIELD_TYPE (ty, i));
+		    if (TYPE_CODE (member0_type) != TYPE_CODE (member1_type)
+			|| (TYPE_LENGTH (member0_type)
+			    != TYPE_LENGTH (member1_type)))
+		      {
+			return 0;
+		      }
+		  }
+		return 1;
+	      }
+	  }
+	return 0;
+      }
+
+    default:
+      break;
+    }
+
+  return 0;
+}
+
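+/* Bookkeeping for argument marshalling: the next general (X) and
+   SIMD/FP (V) argument registers to allocate, the running offset of
+   the next stacked argument, and the list of stacked argument
+   contents.  */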
+struct aarch64_call_info
+{
+  unsigned argnum;
+  unsigned ncrn;
+  unsigned nvrn;
+  unsigned nsaa;
+  struct stack_item *si;
+};
+
+/* Pass a value in a sequence of consecutive X registers.  The caller
+   is responsible for ensuring sufficient registers are available.  */
+static void
+pass_in_x (struct gdbarch *gdbarch, struct regcache *regcache,
+	   struct aarch64_call_info *info, struct type *type,
+	   const bfd_byte *buf)
+{
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+  int len = TYPE_LENGTH (type);
+  enum type_code typecode = TYPE_CODE (type);
+  int regnum = AARCH64_X0_REGNUM + info->ncrn;
+
+  info->argnum++;
+
+  while (len > 0)
+    {
+      int partial_len = len < X_REGISTER_SIZE ? len : X_REGISTER_SIZE;
+      CORE_ADDR regval = extract_unsigned_integer (buf, partial_len,
+						   byte_order);
+
+      /* Adjust sub-word struct/union args when big-endian.  */
+      if (byte_order == BFD_ENDIAN_BIG
+	  && partial_len < X_REGISTER_SIZE
+	  && (typecode == TYPE_CODE_STRUCT || typecode == TYPE_CODE_UNION))
+	regval <<= ((X_REGISTER_SIZE - partial_len) * TARGET_CHAR_BIT);
+
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stdlog, "arg %d in %s = 0x%s\n",
+			    info->argnum,
+			    gdbarch_register_name (gdbarch, regnum),
+			    phex (regval, X_REGISTER_SIZE));
+      regcache_cooked_write_unsigned (regcache, regnum, regval);
+      len -= partial_len;
+      buf += partial_len;
+      regnum++;
+    }
+}
+
+/* Attempt to marshal a value in a V register.  Return 1 if
+   successful, or 0 if insufficient registers are available.  Unlike
+   pass_in_x (), this function does not handle arguments spread
+   across multiple registers.  */
+static int
+pass_in_v (struct gdbarch *gdbarch,
+	   struct regcache *regcache,
+	   struct aarch64_call_info *info,
+	   const bfd_byte *buf)
+{
+  if (info->nvrn < 8)
+    {
+      enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+      int regnum = AARCH64_V0_REGNUM + info->nvrn;
+
+      info->argnum++;
+      info->nvrn++;
+
+      regcache_cooked_write (regcache, regnum, buf);
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stdlog, "arg %d in %s\n",
+			    info->argnum,
+			    gdbarch_register_name (gdbarch, regnum));
+      return 1;
+    }
+  info->nvrn = 8;
+  return 0;
+}
+
+/* Marshal an argument onto the stack.  */
+static void
+pass_on_stack (struct aarch64_call_info *info, struct type *type,
+	       const bfd_byte *buf)
+{
+  int len = TYPE_LENGTH (type);
+  int align;
+
+  info->argnum++;
+
+  align = aarch64_type_align (type);
+
+  /* PCS C.17: the stack should be aligned to the larger of 8 bytes
+     or the natural alignment of the argument's type.  */
+  align = align_up (align, 8);
+
+  /* The AArch64 PCS requires at most doubleword alignment.  */
+  if (align > 16)
+    align = 16;
+
+  if (aarch64_debug)
+    fprintf_unfiltered (gdb_stdlog, "arg %d len=%d @ sp + %d\n",
+			info->argnum, len, info->nsaa);
+
+  info->si = push_stack_item (info->si, buf, len);
+
+  info->nsaa += len;
+  if (info->nsaa & (align - 1))
+    {
+      /* Push stack alignment padding.  */
+      int pad = align - (info->nsaa & (align - 1));
+      info->si = push_stack_item (info->si, buf, pad);
+      info->nsaa += pad;
+    }
+}
+
+/* Marshal an argument into a sequence of one or more consecutive X
+   registers or, if insufficient X registers are available, onto the
+   stack.  */
+static void
+pass_in_x_or_stack (struct gdbarch *gdbarch, struct regcache *regcache,
+		    struct aarch64_call_info *info, struct type *type,
+		    const bfd_byte *buf)
+{
+  int len = TYPE_LENGTH (type);
+  int nregs = (len + X_REGISTER_SIZE - 1) / X_REGISTER_SIZE;
+
+  /* PCS C.13 - Pass in registers if we have enough spare */
+  if (info->ncrn + nregs <= 8)
+    {
+      pass_in_x (gdbarch, regcache, info, type, buf);
+      info->ncrn += nregs;
+    }
+  else
+    {
+      info->ncrn = 8;
+      pass_on_stack (info, type, buf);
+    }
+}
+
+/* Pass a value in a V register, or on the stack if insufficient V
+   registers are available.  */
+static void
+pass_in_v_or_stack (struct gdbarch *gdbarch,
+		    struct regcache *regcache,
+		    struct aarch64_call_info *info,
+		    struct type *type,
+		    const bfd_byte *buf)
+{
+  if (!pass_in_v (gdbarch, regcache, info, buf))
+    pass_on_stack (info, type, buf);
+}
+
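+/* Implement the push_dummy_call gdbarch method: marshal ARGS into
+   registers and onto the stack following the AArch64 PCS, set up the
+   return address and struct return pointer, and return the updated
+   stack pointer.  */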
+static CORE_ADDR
+aarch64_push_dummy_call (struct gdbarch *gdbarch, struct value *function,
+			 struct regcache *regcache, CORE_ADDR bp_addr,
+			 int nargs,
+			 struct value **args, CORE_ADDR sp, int struct_return,
+			 CORE_ADDR struct_addr)
+{
+  int nstack = 0;
+  int argnum;
+  int x_argreg;
+  int v_argreg;
+  struct aarch64_call_info info;
+  struct type *func_type;
+  struct type *return_type;
+  int lang_struct_return;
+
+  memset (&info, 0, sizeof (info));
+
+  /* We need to know what the type of the called function is in order
+     to determine the number of named/anonymous arguments for the
+     actual argument placement, and the return type in order to handle
+     the return value correctly.
+
+     The generic code above us views the decision of return in memory
+     or return in registers as a two-stage process.  The language
+     handler is consulted first and may decide to return in memory
+     (e.g. a class with a copy constructor returned by value); this
+     will cause the generic code to allocate space AND insert an
+     initial leading argument.
+
+     If the language code does not decide to pass in memory then the
+     target code is consulted.
+
+     If the language code decides to pass in memory we want to move
+     the pointer inserted as the initial argument from the argument
+     list and into X8, the conventional AArch64 struct return pointer
+     register.
+
+     This is slightly awkward; ideally the flag "lang_struct_return"
+     would be passed to the target's implementation of push_dummy_call.
+     Rather than change the target interface we call the language code
+     directly ourselves.  */
+
+  func_type = check_typedef (value_type (function));
+
+  /* Dereference function pointer types.  */
+  if (TYPE_CODE (func_type) == TYPE_CODE_PTR)
+    func_type = TYPE_TARGET_TYPE (func_type);
+
+  gdb_assert (TYPE_CODE (func_type) == TYPE_CODE_FUNC ||
+	      TYPE_CODE (func_type) == TYPE_CODE_METHOD);
+
+  /* If language_pass_by_reference () returned true we will have been
+     given an additional initial argument, a hidden pointer to the
+     return slot in memory.  */
+  return_type = TYPE_TARGET_TYPE (func_type);
+  lang_struct_return = language_pass_by_reference (return_type);
+
+  /* Set the return address.  For the AArch64, the return breakpoint
+     is always at BP_ADDR.  */
+  regcache_cooked_write_unsigned (regcache, AARCH64_LR_REGNUM, bp_addr);
+
+  /* If we were given an initial argument for the return slot because
+     lang_struct_return was true, lose it.  */
+  if (lang_struct_return)
+    {
+      args++;
+      nargs--;
+    }
+
+  /* The struct_return pointer occupies X8.  */
+  if (struct_return || lang_struct_return)
+    {
+      if (aarch64_debug)
+	fprintf_unfiltered (gdb_stdlog, "struct return in %s = 0x%s\n",
+			    gdbarch_register_name
+			    (gdbarch,
+			     AARCH64_STRUCT_RETURN_REGNUM),
+			    paddress (gdbarch, struct_addr));
+      regcache_cooked_write_unsigned (regcache, AARCH64_STRUCT_RETURN_REGNUM,
+				      struct_addr);
+    }
+
+  for (argnum = 0; argnum < nargs; argnum++)
+    {
+      struct value *arg = args[argnum];
+      struct type *arg_type;
+      int len;
+
+      arg_type = check_typedef (value_type (arg));
+      len = TYPE_LENGTH (arg_type);
+
+      switch (TYPE_CODE (arg_type))
+	{
+	case TYPE_CODE_INT:
+	case TYPE_CODE_BOOL:
+	case TYPE_CODE_CHAR:
+	case TYPE_CODE_RANGE:
+	case TYPE_CODE_ENUM:
+	  if (len < 4)
+	    {
+	      /* Promote to 32 bit integer.  */
+	      if (TYPE_UNSIGNED (arg_type))
+		arg_type = builtin_type (gdbarch)->builtin_uint32;
+	      else
+		arg_type = builtin_type (gdbarch)->builtin_int32;
+	      arg = value_cast (arg_type, arg);
+	    }
+	  pass_in_x_or_stack (gdbarch, regcache, &info, arg_type,
+			      value_contents (arg));
+	  break;
+
+	case TYPE_CODE_COMPLEX:
+	  if (info.nvrn <= 6)
+	    {
+	      const bfd_byte *buf = value_contents (arg);
+	      struct type *target_type =
+		check_typedef (TYPE_TARGET_TYPE (arg_type));
+	      pass_in_v (gdbarch, regcache, &info, buf);
+	      pass_in_v (gdbarch, regcache, &info,
+			 buf + TYPE_LENGTH (target_type));
+	    }
+	  else
+	    {
+	      info.nvrn = 8;
+	      pass_on_stack (&info, arg_type, value_contents (arg));
+	    }
+	  break;
+	case TYPE_CODE_FLT:
+	  pass_in_v_or_stack (gdbarch, regcache, &info, arg_type,
+			      value_contents (arg));
+	  break;
+
+	case TYPE_CODE_STRUCT:
+	case TYPE_CODE_ARRAY:
+	case TYPE_CODE_UNION:
+	  if (is_hfa (arg_type))
+	    {
+	      int elements = TYPE_NFIELDS (arg_type);
+	      /* Homogeneous Aggregates */
+	      if (info.nvrn + elements < 8)
+		{
+		  int i;
+		  for (i = 0; i < elements; i++)
+		    {
+		      /* We know that we have sufficient registers
+			 available therefore this will never fallback
+			 to the stack.  */
+		      struct value *field =
+			value_primitive_field (arg, 0, i, arg_type);
+		      struct type *field_type =
+			check_typedef (value_type (field));
+		      pass_in_v_or_stack (gdbarch, regcache, &info, field_type,
+					  value_contents_writeable (field));
+		    }
+		}
+	      else
+		{
+		  info.nvrn = 8;
+		  pass_on_stack (&info, arg_type, value_contents (arg));
+		}
+	    }
+	  else if (len > 16)
+	    {
+	      /* PCS B.7 Aggregates larger than 16 bytes are passed by
+		 invisible reference.  */
+
+	      /* Allocate aligned storage.  */
+	      sp = align_down (sp - len, 16);
+
+	      /* Write the real data into the stack.  */
+	      write_memory (sp, value_contents (arg), len);
+
+	      /* Construct the indirection.  */
+	      arg_type = lookup_pointer_type (arg_type);
+	      arg = value_from_pointer (arg_type, sp);
+	      pass_in_x_or_stack (gdbarch, regcache, &info, arg_type,
+				  value_contents (arg));
+	    }
+	  else
+	    /* PCS C.15 / C.18 multiple values pass.  */
+	    pass_in_x_or_stack (gdbarch, regcache, &info, arg_type,
+				value_contents (arg));
+	  break;
+
+	default:
+	  pass_in_x_or_stack (gdbarch, regcache, &info, arg_type,
+			      value_contents (arg));
+	  break;
+	}
+    }
+
+  /* Make sure stack retains 16 byte alignment.  */
+  if (info.nsaa & 15)
+    sp -= 16 - (info.nsaa & 15);
+
+  while (info.si)
+    {
+      sp -= info.si->len;
+      write_memory (sp, info.si->data, info.si->len);
+      info.si = pop_stack_item (info.si);
+    }
+
+  /* Finally, update the SP register.  */
+  regcache_cooked_write_unsigned (regcache, AARCH64_SP_REGNUM, sp);
+
+  return sp;
+}
+
+/* Always align the frame to a 16-byte boundary.  */
+static CORE_ADDR
+aarch64_frame_align (struct gdbarch *gdbarch, CORE_ADDR sp)
+{
+  /* Align the stack to sixteen bytes.  */
+  return sp & ~(CORE_ADDR) 15;
+}
+
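+/* Return the GDB type describing the "q" (128-bit) view of a V
+   register, a union of signed and unsigned interpretations.  The
+   aarch64_vn[dshb]_type functions below provide the analogous
+   narrower views.  */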
+static struct type *
+aarch64_vnq_type (struct gdbarch *gdbarch)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  if (tdep->vnq_type == NULL)
+    {
+      struct type *t;
+      struct type *elem;
+
+      t = arch_composite_type (gdbarch, "__gdb_builtin_type_vnq",
+			       TYPE_CODE_UNION);
+
+      elem = builtin_type (gdbarch)->builtin_uint128;
+      append_composite_type_field (t, "u", elem);
+
+      elem = builtin_type (gdbarch)->builtin_int128;
+      append_composite_type_field (t, "s", elem);
+
+      tdep->vnq_type = t;
+    }
+
+  return tdep->vnq_type;
+}
+
+static struct type *
+aarch64_vnd_type (struct gdbarch *gdbarch)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  if (tdep->vnd_type == NULL)
+    {
+      struct type *t;
+      struct type *elem;
+
+      t = arch_composite_type (gdbarch, "__gdb_builtin_type_vnd",
+			       TYPE_CODE_UNION);
+
+      elem = builtin_type (gdbarch)->builtin_double;
+      append_composite_type_field (t, "f", elem);
+
+      elem = builtin_type (gdbarch)->builtin_uint64;
+      append_composite_type_field (t, "u", elem);
+
+      elem = builtin_type (gdbarch)->builtin_int64;
+      append_composite_type_field (t, "s", elem);
+
+      tdep->vnd_type = t;
+    }
+
+  return tdep->vnd_type;
+}
+
+static struct type *
+aarch64_vns_type (struct gdbarch *gdbarch)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  if (tdep->vns_type == NULL)
+    {
+      struct type *t;
+      struct type *elem;
+
+      t = arch_composite_type (gdbarch, "__gdb_builtin_type_vns",
+			       TYPE_CODE_UNION);
+
+      elem = builtin_type (gdbarch)->builtin_float;
+      append_composite_type_field (t, "f", elem);
+
+      elem = builtin_type (gdbarch)->builtin_uint32;
+      append_composite_type_field (t, "u", elem);
+
+      elem = builtin_type (gdbarch)->builtin_int32;
+      append_composite_type_field (t, "s", elem);
+
+      tdep->vns_type = t;
+    }
+
+  return tdep->vns_type;
+}
+
+static struct type *
+aarch64_vnh_type (struct gdbarch *gdbarch)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  if (tdep->vnh_type == NULL)
+    {
+      struct type *t;
+      struct type *elem;
+
+      t = arch_composite_type (gdbarch, "__gdb_builtin_type_vnh",
+			       TYPE_CODE_UNION);
+
+      elem = builtin_type (gdbarch)->builtin_uint16;
+      append_composite_type_field (t, "u", elem);
+
+      elem = builtin_type (gdbarch)->builtin_int16;
+      append_composite_type_field (t, "s", elem);
+
+      tdep->vnh_type = t;
+    }
+
+  return tdep->vnh_type;
+}
+
+static struct type *
+aarch64_vnb_type (struct gdbarch *gdbarch)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  if (tdep->vnb_type == NULL)
+    {
+      struct type *t;
+      struct type *elem;
+
+      t = arch_composite_type (gdbarch, "__gdb_builtin_type_vnb",
+			       TYPE_CODE_UNION);
+
+      elem = builtin_type (gdbarch)->builtin_uint8;
+      append_composite_type_field (t, "u", elem);
+
+      elem = builtin_type (gdbarch)->builtin_int8;
+      append_composite_type_field (t, "s", elem);
+
+      tdep->vnb_type = t;
+    }
+
+  return tdep->vnb_type;
+}
+
+/* Map a DWARF register REGNUM onto the appropriate GDB register
+   number.  */
+
+static int
+aarch64_dwarf_reg_to_regnum (struct gdbarch *gdbarch, int reg)
+{
+  if (reg >= AARCH64_DWARF_X0 && reg <= AARCH64_DWARF_X0 + 30)
+    return AARCH64_X0_REGNUM + reg - AARCH64_DWARF_X0;
+
+  if (reg == AARCH64_DWARF_SP)
+    return AARCH64_SP_REGNUM;
+
+  if (reg >= AARCH64_DWARF_V0 && reg <= AARCH64_DWARF_V0 + 31)
+    return AARCH64_V0_REGNUM + reg - AARCH64_DWARF_V0;
+
+  return -1;
+}
+
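+/* Return non-zero if the condition COND holds given the NZCV flags
+   in STATUS_REG.  */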
+static int
+condition_true (unsigned cond, uint64_t status_reg)
+{
+  if (cond == INST_AL || cond == INST_NV)
+    return 1;
+
+  switch (cond)
+    {
+    case INST_EQ:
+      return ((status_reg & FLAG_Z) != 0);
+    case INST_NE:
+      return ((status_reg & FLAG_Z) == 0);
+    case INST_CS:
+      return ((status_reg & FLAG_C) != 0);
+    case INST_CC:
+      return ((status_reg & FLAG_C) == 0);
+    case INST_MI:
+      return ((status_reg & FLAG_N) != 0);
+    case INST_PL:
+      return ((status_reg & FLAG_N) == 0);
+    case INST_VS:
+      return ((status_reg & FLAG_V) != 0);
+    case INST_VC:
+      return ((status_reg & FLAG_V) == 0);
+    case INST_HI:
+      return ((status_reg & (FLAG_C | FLAG_Z)) == FLAG_C);
+    case INST_LS:
+      return ((status_reg & (FLAG_C | FLAG_Z)) != FLAG_C);
+    case INST_GE:
+      return (((status_reg & FLAG_N) == 0) == ((status_reg & FLAG_V) == 0));
+    case INST_LT:
+      return (((status_reg & FLAG_N) == 0) != ((status_reg & FLAG_V) == 0));
+    case INST_GT:
+      return (((status_reg & FLAG_Z) == 0) &&
+	      (((status_reg & FLAG_N) == 0) == ((status_reg & FLAG_V) == 0)));
+    case INST_LE:
+      return (((status_reg & FLAG_Z) != 0) ||
+	      (((status_reg & FLAG_N) == 0) != ((status_reg & FLAG_V) == 0)));
+    }
+  return 1;
+}
+
+/* Support routines for single stepping.  Calculate the next PC value.  */
+#define submask(x) ((1L << ((x) + 1)) - 1)
+#define bit(obj,st) (((obj) >> (st)) & 1)
+#define bits(obj,st,fn) (((obj) >> (st)) & submask ((fn) - (st)))
+#define sbits(obj,st,fn) \
+  ((long) (bits (obj,st,fn) | ((long) bit (obj,fn) * ~ submask (fn - st))))
+#define AARCH64_PC_32 1
+
+CORE_ADDR
+aarch64_get_next_pc (struct frame_info * frame, CORE_ADDR pc)
+{
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
+  uint64_t pc_val = (uint64_t) pc;
+  uint32_t insn = read_memory_unsigned_integer (pc, 4, byte_order_for_code);
+  CORE_ADDR next_pc = pc + 4;	/* default is next instruction */
+  unsigned rn;
+  unsigned cond;
+  unsigned link;
+  int32_t offset;
+  int is64;
+  unsigned op;
+  unsigned bit;
+  unsigned is_link;
+
+  if (decode_eret (pc_val, insn))
+    {
+      next_pc = get_frame_register_unsigned (frame, AARCH64_LR_REGNUM);
+    }
+  else if (decode_br (pc_val, insn, &is_link, &rn))
+    {
+      next_pc = get_frame_register_unsigned (frame, rn);
+    }
+  else if (decode_ret (pc_val, insn, &rn))
+    {
+      next_pc = get_frame_register_unsigned (frame, rn);
+    }
+  else if (decode_bcond (pc_val, insn, &cond, &offset))
+    {
+      CORE_ADDR branch_addr;
+      uint64_t cpsr;
+
+      cpsr = get_frame_register_unsigned (frame, AARCH64_CPSR_REGNUM);
+      branch_addr = pc_val + offset;
+
+      if (condition_true (cond, cpsr))
+	next_pc = pc_val + offset;
+    }
+  else if (decode_b (pc_val, insn, &is_link, &offset))
+    {
+      next_pc = pc_val + offset;
+    }
+  else if (decode_cb (pc_val, insn, &is64, &op, &rn, &offset))
+    {
+      CORE_ADDR branch_addr;
+      uint64_t v;
+      int result;
+
+      v = get_frame_register_unsigned (frame, rn);
+
+      if (!is64)
+	v &= 0xffffffff;
+
+      branch_addr = pc_val + offset;
+
+      result = v == 0;
+      if (op)
+	result = !result;
+
+      if (result)
+	next_pc = branch_addr;
+    }
+  else if (decode_tb (pc_val, insn, &op, &bit, &rn, &offset))
+    {
+      CORE_ADDR branch_addr;
+      uint64_t v;
+      int result;
+
+      branch_addr = pc_val + offset;
+      v = get_frame_register_unsigned (frame, rn);
+      result = !((v >> bit) & 1);
+      if (op)
+	result = !result;
+
+      if (result)
+	next_pc = branch_addr;
+    }
+
+  return next_pc;
+}
+
+/* single_step () is called just before we want to resume the inferior,
+   if we want to single-step it but there is no hardware or kernel
+   single-step support.  We find the target of the coming instruction
+   and breakpoint it.  */
+
+int
+aarch64_software_single_step (struct frame_info *frame)
+{
+  CORE_ADDR next_pc;
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct address_space *aspace = get_frame_address_space (frame);
+
+  /* NOTE: This may insert the wrong breakpoint instruction when
+     single-stepping over a mode-changing instruction, if the
+     CPSR heuristics are used.  */
+
+  next_pc = aarch64_get_next_pc (frame, get_frame_pc (frame));
+  insert_single_step_breakpoint (gdbarch, aspace, next_pc);
+  return 1;
+}
+
+#include "bfd-in2.h"
+
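+/* Wrap the opcodes disassembler for use as the gdbarch print_insn
+   hook.  */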
+static int
+gdb_print_insn_aarch64 (bfd_vma memaddr, disassemble_info * info)
+{
+  info->symbols = NULL;
+  return print_insn_aarch64 (memaddr, info);
+}
+
+/* AArch64 BRK software debug mode instruction */
+/* 1101.0100.0010.0000.0000.0000.0000.0000 = 0xd4200000 */
+
+#define AARCH64_LE_BREAKPOINT {0x00,0x00,0x20,0xd4}
+static const char aarch64_default_le_breakpoint[] = AARCH64_LE_BREAKPOINT;
+
+/* Determine the type and size of breakpoint to insert at PCPTR.  On
+   AArch64 the breakpoint instruction is always the 4-byte BRK
+   encoding.  Return a pointer to the bytes that encode the breakpoint
+   instruction and store its length in *LENPTR.  */
+static const unsigned char *
+aarch64_breakpoint_from_pc (struct gdbarch *gdbarch, CORE_ADDR * pcptr,
+			    int *lenptr)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+  *lenptr = tdep->aarch64_breakpoint_size;
+  return tdep->aarch64_breakpoint;
+}
+
+/* Extract from an array REGBUF containing the (raw) register state a
+   function return value of type TYPE, and copy that, in virtual
+   format, into VALBUF.  */
+static void
+aarch64_extract_return_value (struct type *type, struct regcache *regs,
+			      gdb_byte * valbuf)
+{
+  struct gdbarch *gdbarch = get_regcache_arch (regs);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+  if (TYPE_CODE (type) == TYPE_CODE_FLT)
+    {
+      bfd_byte buf[V_REGISTER_SIZE];
+      int len = TYPE_LENGTH (type);
+      regcache_cooked_read (regs, AARCH64_V0_REGNUM, buf);
+      memcpy (valbuf, buf, len);
+    }
+  else if (TYPE_CODE (type) == TYPE_CODE_INT
+	   || TYPE_CODE (type) == TYPE_CODE_CHAR
+	   || TYPE_CODE (type) == TYPE_CODE_BOOL
+	   || TYPE_CODE (type) == TYPE_CODE_PTR
+	   || TYPE_CODE (type) == TYPE_CODE_REF
+	   || TYPE_CODE (type) == TYPE_CODE_ENUM)
+    {
+      /* If the type is a plain integer, then the access is
+         straightforward.  Otherwise we have to play around a bit
+         more.  */
+      int len = TYPE_LENGTH (type);
+      int regno = AARCH64_X0_REGNUM;
+      ULONGEST tmp;
+
+      while (len > 0)
+	{
+	  /* By using store_unsigned_integer we avoid having to do
+	     anything special for small big-endian values.  */
+	  regcache_cooked_read_unsigned (regs, regno++, &tmp);
+	  store_unsigned_integer (valbuf,
+				  (len > X_REGISTER_SIZE
+				   ? X_REGISTER_SIZE : len), byte_order, tmp);
+	  len -= X_REGISTER_SIZE;
+	  valbuf += X_REGISTER_SIZE;
+	}
+    }
+  else if (TYPE_CODE (type) == TYPE_CODE_COMPLEX)
+    {
+      int regno = AARCH64_V0_REGNUM;
+      bfd_byte buf[V_REGISTER_SIZE];
+      struct type *target_type = check_typedef (TYPE_TARGET_TYPE (type));
+      int len = TYPE_LENGTH (target_type);
+      regcache_cooked_read (regs, regno, buf);
+      memcpy (valbuf, buf, len);
+      valbuf += len;
+      regcache_cooked_read (regs, regno + 1, buf);
+      memcpy (valbuf, buf, len);
+      valbuf += len;
+    }
+  else if (is_hfa (type))
+    {
+      int elements = TYPE_NFIELDS (type);
+      struct type *member_type = check_typedef (TYPE_FIELD_TYPE (type, 0));
+      int len = TYPE_LENGTH (member_type);
+      int i;
+
+      for (i = 0; i < elements; i++)
+	{
+	  int regno = AARCH64_V0_REGNUM + i;
+	  bfd_byte buf[X_REGISTER_SIZE];
+
+	  if (aarch64_debug)
+	    fprintf_unfiltered (gdb_stdlog,
+				"read HFA return value element %d from %s\n",
+				i + 1,
+				gdbarch_register_name (gdbarch, regno));
+	  regcache_cooked_read (regs, regno, buf);
+
+	  memcpy (valbuf, buf, len);
+	  valbuf += len;
+	}
+    }
+  else
+    {
+      /* For a structure or union the behaviour is as if the value had
+         been stored to word-aligned memory and then loaded into
+         registers with 64-bit load instruction(s).  */
+      int len = TYPE_LENGTH (type);
+      int regno = AARCH64_X0_REGNUM;
+      bfd_byte buf[X_REGISTER_SIZE];
+
+      while (len > 0)
+	{
+	  regcache_cooked_read (regs, regno++, buf);
+	  memcpy (valbuf, buf, len > X_REGISTER_SIZE ? X_REGISTER_SIZE : len);
+	  len -= X_REGISTER_SIZE;
+	  valbuf += X_REGISTER_SIZE;
+	}
+    }
+}
+
+
+/* Will a function return an aggregate type in memory or in a
+   register?  Return 0 if an aggregate type can be returned in a
+   register, 1 if it must be returned in memory.  */
+static int
+aarch64_return_in_memory (struct gdbarch *gdbarch, struct type *type)
+{
+  int nRc;
+  enum type_code code;
+
+  CHECK_TYPEDEF (type);
+
+  /* In the AArch64 PCS, aggregates of 16 bytes or less and
+     Homogeneous Floating-point Aggregates are returned in registers;
+     larger aggregates are returned in memory via an invisible
+     reference.  */
+
+  if (is_hfa (type))
+    {
+      /* PCS B.5 If the argument is a Named HFA, then the argument is
+         used unmodified.  */
+      return 0;
+    }
+
+  if (TYPE_LENGTH (type) > 16)
+    {
+      /* PCS B.6 Aggregates larger than 16 bytes are passed by
+         invisible reference.  */
+
+      return 1;
+    }
+
+  return 0;
+}
+
+/* Write into appropriate registers a function return value of type
+   TYPE, given in virtual format.  */
+static void
+aarch64_store_return_value (struct type *type, struct regcache *regs,
+			    const gdb_byte * valbuf)
+{
+  struct gdbarch *gdbarch = get_regcache_arch (regs);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+  if (TYPE_CODE (type) == TYPE_CODE_FLT)
+    {
+      bfd_byte buf[V_REGISTER_SIZE];
+      int len = TYPE_LENGTH (type);
+      memcpy (buf, valbuf, len > V_REGISTER_SIZE ? V_REGISTER_SIZE : len);
+      regcache_cooked_write (regs, AARCH64_V0_REGNUM, buf);
+    }
+  else if (TYPE_CODE (type) == TYPE_CODE_INT
+	   || TYPE_CODE (type) == TYPE_CODE_CHAR
+	   || TYPE_CODE (type) == TYPE_CODE_BOOL
+	   || TYPE_CODE (type) == TYPE_CODE_PTR
+	   || TYPE_CODE (type) == TYPE_CODE_REF
+	   || TYPE_CODE (type) == TYPE_CODE_ENUM)
+    {
+      if (TYPE_LENGTH (type) <= X_REGISTER_SIZE)
+	{
+	  /* Values of one word or less are zero/sign-extended and
+	     returned in X0.  */
+	  bfd_byte tmpbuf[X_REGISTER_SIZE];
+	  LONGEST val = unpack_long (type, valbuf);
+
+	  store_signed_integer (tmpbuf, X_REGISTER_SIZE, byte_order, val);
+	  regcache_cooked_write (regs, AARCH64_X0_REGNUM, tmpbuf);
+	}
+      else
+	{
+	  /* Integral values greater than one word are stored in
+	     consecutive registers starting with X0.  This will always
+	     be a multiple of the register size.  */
+	  int len = TYPE_LENGTH (type);
+	  int regno = AARCH64_X0_REGNUM;
+
+	  while (len > 0)
+	    {
+	      regcache_cooked_write (regs, regno++, valbuf);
+	      len -= X_REGISTER_SIZE;
+	      valbuf += X_REGISTER_SIZE;
+	    }
+	}
+    }
+  else if (is_hfa (type))
+    {
+      int elements = TYPE_NFIELDS (type);
+      struct type *member_type = check_typedef (TYPE_FIELD_TYPE (type, 0));
+      int len = TYPE_LENGTH (member_type);
+      int i;
+
+      for (i = 0; i < elements; i++)
+	{
+	  int regno = AARCH64_V0_REGNUM + i;
+	  bfd_byte tmpbuf[MAX_REGISTER_SIZE];
+
+	  if (aarch64_debug)
+	    fprintf_unfiltered (gdb_stdlog,
+				"write HFA return value element %d to %s\n",
+				i + 1,
+				gdbarch_register_name (gdbarch, regno));
+
+	  memcpy (tmpbuf, valbuf, len);
+	  regcache_cooked_write (regs, regno, tmpbuf);
+	  valbuf += len;
+	}
+    }
+  else
+    {
+      /* For a structure or union the behaviour is as if the value had
+         been stored to word-aligned memory and then loaded into
+         registers with 64-bit load instruction(s).  */
+      int len = TYPE_LENGTH (type);
+      int regno = AARCH64_X0_REGNUM;
+      bfd_byte tmpbuf[X_REGISTER_SIZE];
+
+      while (len > 0)
+	{
+	  memcpy (tmpbuf, valbuf,
+		  len > X_REGISTER_SIZE ? X_REGISTER_SIZE : len);
+	  regcache_cooked_write (regs, regno++, tmpbuf);
+	  len -= X_REGISTER_SIZE;
+	  valbuf += X_REGISTER_SIZE;
+	}
+    }
+}
+
+
+/* Handle function return values.  */
+
+static enum return_value_convention
+aarch64_return_value (struct gdbarch *gdbarch, struct value *func_value,
+		      struct type *valtype, struct regcache *regcache,
+		      gdb_byte * readbuf, const gdb_byte * writebuf)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  if (TYPE_CODE (valtype) == TYPE_CODE_STRUCT
+      || TYPE_CODE (valtype) == TYPE_CODE_UNION
+      || TYPE_CODE (valtype) == TYPE_CODE_ARRAY)
+    {
+      if (aarch64_return_in_memory (gdbarch, valtype))
+	{
+	  if (aarch64_debug)
+	    fprintf_unfiltered (gdb_stdlog, "return value in memory\n");
+	  return RETURN_VALUE_STRUCT_CONVENTION;
+	}
+    }
+
+  if (writebuf)
+    aarch64_store_return_value (valtype, regcache, writebuf);
+
+  if (readbuf)
+    aarch64_extract_return_value (valtype, regcache, readbuf);
+
+  if (aarch64_debug)
+    fprintf_unfiltered (gdb_stdlog, "return value in registers\n");
+
+  return RETURN_VALUE_REGISTER_CONVENTION;
+}
+
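+/* Implement the get_longjmp_target gdbarch method: extract the saved
+   PC from the jmp_buf pointed to by X0 and return it in *PC.  Return
+   non-zero on success.  */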
+static int
+aarch64_get_longjmp_target (struct frame_info *frame, CORE_ADDR * pc)
+{
+  CORE_ADDR jb_addr;
+  char buf[X_REGISTER_SIZE];
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+  jb_addr = get_frame_register_unsigned (frame, AARCH64_X0_REGNUM);
+
+  if (target_read_memory (jb_addr + tdep->jb_pc * tdep->jb_elt_size, buf,
+			  X_REGISTER_SIZE))
+    return 0;
+
+  *pc = extract_unsigned_integer (buf, X_REGISTER_SIZE, byte_order);
+  return 1;
+}
+
+/* Return the pseudo register name corresponding to register regnum.  */
+static const char *
+aarch64_pseudo_register_name (struct gdbarch *gdbarch, int regnum)
+{
+  static const char *const q_name[] = {
+    "q0", "q1", "q2", "q3",
+    "q4", "q5", "q6", "q7",
+    "q8", "q9", "q10", "q11",
+    "q12", "q13", "q14", "q15",
+    "q16", "q17", "q18", "q19",
+    "q20", "q21", "q22", "q23",
+    "q24", "q25", "q26", "q27",
+    "q28", "q29", "q30", "q31",
+  };
+
+  static const char *const d_name[] = {
+    "d0", "d1", "d2", "d3",
+    "d4", "d5", "d6", "d7",
+    "d8", "d9", "d10", "d11",
+    "d12", "d13", "d14", "d15",
+    "d16", "d17", "d18", "d19",
+    "d20", "d21", "d22", "d23",
+    "d24", "d25", "d26", "d27",
+    "d28", "d29", "d30", "d31",
+  };
+
+  static const char *const s_name[] = {
+    "s0", "s1", "s2", "s3",
+    "s4", "s5", "s6", "s7",
+    "s8", "s9", "s10", "s11",
+    "s12", "s13", "s14", "s15",
+    "s16", "s17", "s18", "s19",
+    "s20", "s21", "s22", "s23",
+    "s24", "s25", "s26", "s27",
+    "s28", "s29", "s30", "s31",
+  };
+
+  static const char *const h_name[] = {
+    "h0", "h1", "h2", "h3",
+    "h4", "h5", "h6", "h7",
+    "h8", "h9", "h10", "h11",
+    "h12", "h13", "h14", "h15",
+    "h16", "h17", "h18", "h19",
+    "h20", "h21", "h22", "h23",
+    "h24", "h25", "h26", "h27",
+    "h28", "h29", "h30", "h31",
+  };
+
+  static const char *const b_name[] = {
+    "b0", "b1", "b2", "b3",
+    "b4", "b5", "b6", "b7",
+    "b8", "b9", "b10", "b11",
+    "b12", "b13", "b14", "b15",
+    "b16", "b17", "b18", "b19",
+    "b20", "b21", "b22", "b23",
+    "b24", "b25", "b26", "b27",
+    "b28", "b29", "b30", "b31",
+  };
+
+  regnum -= gdbarch_num_regs (gdbarch);
+
+  if (regnum >= AARCH64_Q0_REGNUM && regnum < AARCH64_Q0_REGNUM + 32)
+    return q_name[regnum - AARCH64_Q0_REGNUM];
+
+  if (regnum >= AARCH64_D0_REGNUM && regnum < AARCH64_D0_REGNUM + 32)
+    return d_name[regnum - AARCH64_D0_REGNUM];
+
+  if (regnum >= AARCH64_S0_REGNUM && regnum < AARCH64_S0_REGNUM + 32)
+    return s_name[regnum - AARCH64_S0_REGNUM];
+
+  if (regnum >= AARCH64_H0_REGNUM && regnum < AARCH64_H0_REGNUM + 32)
+    return h_name[regnum - AARCH64_H0_REGNUM];
+
+  if (regnum >= AARCH64_B0_REGNUM && regnum < AARCH64_B0_REGNUM + 32)
+    return b_name[regnum - AARCH64_B0_REGNUM];
+
+  internal_error (__FILE__, __LINE__,
+		  _("aarch64_pseudo_register_name: bad register number %d"),
+		  regnum);
+}
+
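+/* Return the GDB type of pseudo register REGNUM.  */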
+static struct type *
+aarch64_pseudo_register_type (struct gdbarch *gdbarch, int regnum)
+{
+  regnum -= gdbarch_num_regs (gdbarch);
+
+  if (regnum >= AARCH64_Q0_REGNUM && regnum < AARCH64_Q0_REGNUM + 32)
+    return aarch64_vnq_type (gdbarch);
+
+  if (regnum >= AARCH64_D0_REGNUM && regnum < AARCH64_D0_REGNUM + 32)
+    return aarch64_vnd_type (gdbarch);
+
+  if (regnum >= AARCH64_S0_REGNUM && regnum < AARCH64_S0_REGNUM + 32)
+    return aarch64_vns_type (gdbarch);
+
+  if (regnum >= AARCH64_H0_REGNUM && regnum < AARCH64_H0_REGNUM + 32)
+    return aarch64_vnh_type (gdbarch);
+
+  if (regnum >= AARCH64_B0_REGNUM && regnum < AARCH64_B0_REGNUM + 32)
+    return aarch64_vnb_type (gdbarch);
+
+  internal_error (__FILE__, __LINE__,
+		  _("aarch64_pseudo_register_type: bad register number %d"),
+		  regnum);
+}
+
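+/* Return non-zero if pseudo register REGNUM is a member of register
+   group GROUP.  */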
+static int
+aarch64_pseudo_register_reggroup_p (struct gdbarch *gdbarch, int regnum,
+				    struct reggroup *group)
+{
+  regnum -= gdbarch_num_regs (gdbarch);
+
+  if (regnum >= AARCH64_Q0_REGNUM && regnum < AARCH64_Q0_REGNUM + 32)
+    return group == all_reggroup || group == vector_reggroup;
+  else if (regnum >= AARCH64_D0_REGNUM && regnum < AARCH64_D0_REGNUM + 32)
+    return group == all_reggroup || group == vector_reggroup
+      || group == float_reggroup;
+  else if (regnum >= AARCH64_S0_REGNUM && regnum < AARCH64_S0_REGNUM + 32)
+    return group == all_reggroup || group == vector_reggroup
+      || group == float_reggroup;
+  else if (regnum >= AARCH64_H0_REGNUM && regnum < AARCH64_H0_REGNUM + 32)
+    return group == all_reggroup || group == vector_reggroup;
+  else if (regnum >= AARCH64_B0_REGNUM && regnum < AARCH64_B0_REGNUM + 32)
+    return group == all_reggroup || group == vector_reggroup;
+
+  return group == all_reggroup;
+}
+
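+/* Implement the pseudo_register_read gdbarch method: extract the
+   contents of pseudo register REGNUM from the underlying V register
+   into BUF.  */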
+static enum register_status
+aarch64_pseudo_read (struct gdbarch *gdbarch, struct regcache *regcache,
+		     int regnum, gdb_byte * buf)
+{
+  gdb_byte reg_buf[MAX_REGISTER_SIZE];
+
+  regnum -= gdbarch_num_regs (gdbarch);
+
+  if (regnum >= AARCH64_Q0_REGNUM && regnum < AARCH64_Q0_REGNUM + 32)
+    {
+      enum register_status status;
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_Q0_REGNUM;
+      status = regcache_raw_read (regcache, v_regnum, reg_buf);
+      if (status != REG_VALID)
+	return status;
+      memcpy (buf, reg_buf, Q_REGISTER_SIZE);
+      return REG_VALID;
+    }
+
+  if (regnum >= AARCH64_D0_REGNUM && regnum < AARCH64_D0_REGNUM + 32)
+    {
+      enum register_status status;
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_D0_REGNUM;
+      status = regcache_raw_read (regcache, v_regnum, reg_buf);
+      if (status != REG_VALID)
+	return status;
+      memcpy (buf, reg_buf, D_REGISTER_SIZE);
+      return REG_VALID;
+    }
+
+  if (regnum >= AARCH64_S0_REGNUM && regnum < AARCH64_S0_REGNUM + 32)
+    {
+      enum register_status status;
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_S0_REGNUM;
+      status = regcache_raw_read (regcache, v_regnum, reg_buf);
+      if (status != REG_VALID)
+	return status;
+      memcpy (buf, reg_buf, S_REGISTER_SIZE);
+      return REG_VALID;
+    }
+
+  if (regnum >= AARCH64_H0_REGNUM && regnum < AARCH64_H0_REGNUM + 32)
+    {
+      enum register_status status;
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_H0_REGNUM;
+      status = regcache_raw_read (regcache, v_regnum, reg_buf);
+      if (status != REG_VALID)
+	return status;
+      memcpy (buf, reg_buf, H_REGISTER_SIZE);
+      return REG_VALID;
+    }
+
+  if (regnum >= AARCH64_B0_REGNUM && regnum < AARCH64_B0_REGNUM + 32)
+    {
+      enum register_status status;
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_B0_REGNUM;
+      status = regcache_raw_read (regcache, v_regnum, reg_buf);
+      if (status != REG_VALID)
+	return status;
+      memcpy (buf, reg_buf, B_REGISTER_SIZE);
+      return REG_VALID;
+    }
+
+  gdb_assert (0);
+  return REG_UNAVAILABLE;
+}
+
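+/* Implement the pseudo_register_write gdbarch method: write BUF to
+   pseudo register REGNUM by updating the underlying V register.  */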
+static void
+aarch64_pseudo_write (struct gdbarch *gdbarch, struct regcache *regcache,
+		      int regnum, const gdb_byte * buf)
+{
+  gdb_byte reg_buf[MAX_REGISTER_SIZE];
+
+  /* Ensure the register buffer is zero.  We want GDB writes of the
+     various 'scalar' pseudo registers to behave like architectural
+     writes: register-width bytes are written and the remainder is
+     set to zero.  */
+  memset (reg_buf, 0, sizeof (reg_buf));
+
+  regnum -= gdbarch_num_regs (gdbarch);
+
+  if (regnum >= AARCH64_Q0_REGNUM && regnum < AARCH64_Q0_REGNUM + 32)
+    {
+      /* pseudo Q registers */
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_Q0_REGNUM;
+      memcpy (reg_buf, buf, Q_REGISTER_SIZE);
+      regcache_raw_write (regcache, v_regnum, reg_buf);
+      return;
+    }
+
+  if (regnum >= AARCH64_D0_REGNUM && regnum < AARCH64_D0_REGNUM + 32)
+    {
+      /* pseudo D registers */
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_D0_REGNUM;
+      memcpy (reg_buf, buf, D_REGISTER_SIZE);
+      regcache_raw_write (regcache, v_regnum, reg_buf);
+      return;
+    }
+
+  if (regnum >= AARCH64_S0_REGNUM && regnum < AARCH64_S0_REGNUM + 32)
+    {
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_S0_REGNUM;
+      memcpy (reg_buf, buf, S_REGISTER_SIZE);
+      regcache_raw_write (regcache, v_regnum, reg_buf);
+      return;
+    }
+
+  if (regnum >= AARCH64_H0_REGNUM && regnum < AARCH64_H0_REGNUM + 32)
+    {
+      /* pseudo H registers */
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_H0_REGNUM;
+      memcpy (reg_buf, buf, H_REGISTER_SIZE);
+      regcache_raw_write (regcache, v_regnum, reg_buf);
+      return;
+    }
+
+  if (regnum >= AARCH64_B0_REGNUM && regnum < AARCH64_B0_REGNUM + 32)
+    {
+      /* pseudo B registers */
+      unsigned v_regnum;
+
+      v_regnum = AARCH64_V0_REGNUM + regnum - AARCH64_B0_REGNUM;
+      memcpy (reg_buf, buf, B_REGISTER_SIZE);
+      regcache_raw_write (regcache, v_regnum, reg_buf);
+      return;
+    }
+
+  gdb_assert (0);
+}
+
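+/* Implement the write_pc gdbarch method.  */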
+static void
+aarch64_write_pc (struct regcache *regcache, CORE_ADDR pc)
+{
+  regcache_cooked_write_unsigned (regcache, AARCH64_PC_REGNUM, pc);
+}
+
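+/* Callback used by user_reg_add to read a register alias; BATON
+   points at the number of the underlying register.  */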
+static struct value *
+value_of_aarch64_user_reg (struct frame_info *frame, const void *baton)
+{
+  const int *reg_p = baton;
+  return value_of_register (*reg_p, frame);
+}
+
+
+/* Initialize the current architecture based on INFO.  If possible,
+   re-use an architecture from ARCHES, which is a list of
+   architectures already created during this debugging session.
+
+   Called e.g. at program startup, when reading a core file, and when
+   reading a binary file.  */
+static struct gdbarch *
+aarch64_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
+{
+  struct gdbarch_tdep *tdep;
+  struct gdbarch *gdbarch;
+  struct gdbarch_list *best_arch;
+  struct tdesc_arch_data *tdesc_data = NULL;
+  const struct target_desc *tdesc = info.target_desc;
+  int i;
+  int have_fpa_registers = 1;
+  int valid_p = 1;
+  const struct tdesc_feature *feature;
+  int num_regs = 0;
+  int num_pseudo_regs = 0;
+
+  /* Ensure we always have a target descriptor.  */
+  if (!tdesc_has_registers (tdesc))
+    {
+      tdesc = tdesc_aarch64;
+    }
+
+  gdb_assert (tdesc);
+
+
+  feature = tdesc_find_feature (tdesc, "org.gnu.gdb.aarch64.core");
+
+  if (feature == NULL)
+    return NULL;
+
+  tdesc_data = tdesc_data_alloc ();
+
+  /* Validate the descriptor provides the mandatory core R registers
+     and allocate their numbers.  */
+  for (i = 0; i < ARRAY_SIZE (aarch64_r_register_names); i++)
+    valid_p &=
+      tdesc_numbered_register (feature, tdesc_data, AARCH64_X0_REGNUM + i,
+			       aarch64_r_register_names[i]);
+
+  num_regs = AARCH64_X0_REGNUM + i;
+
+  /* Look for the V registers.  */
+  feature = tdesc_find_feature (tdesc, "org.gnu.gdb.aarch64.fpu");
+  if (feature)
+    {
+      /* Validate the descriptor provides the mandatory V registers
+         and allocate their numbers.  */
+      for (i = 0; i < ARRAY_SIZE (aarch64_v_register_names); i++)
+	valid_p &=
+	  tdesc_numbered_register (feature, tdesc_data, AARCH64_V0_REGNUM + i,
+				   aarch64_v_register_names[i]);
+
+      num_regs = AARCH64_V0_REGNUM + i;
+
+      num_pseudo_regs += 32;	/* add the Qn scalar register pseudos */
+      num_pseudo_regs += 32;	/* add the Dn scalar register pseudos */
+      num_pseudo_regs += 32;	/* add the Sn scalar register pseudos */
+      num_pseudo_regs += 32;	/* add the Hn scalar register pseudos */
+      num_pseudo_regs += 32;	/* add the Bn scalar register pseudos */
+    }
+
+  if (!valid_p)
+    {
+      tdesc_data_cleanup (tdesc_data);
+      return NULL;
+    }
+
+  /* AArch64 code is always little-endian.  */
+  info.byte_order_for_code = BFD_ENDIAN_LITTLE;
+
+  /* If there is already a candidate, use it.  */
+  for (best_arch = gdbarch_list_lookup_by_info (arches, &info);
+       best_arch != NULL;
+       best_arch = gdbarch_list_lookup_by_info (best_arch->next, &info))
+    {
+      /* Found a match.  */
+      break;
+    }
+
+  if (best_arch != NULL)
+    {
+      if (tdesc_data != NULL)
+	tdesc_data_cleanup (tdesc_data);
+      return best_arch->gdbarch;
+    }
+
+  tdep = xcalloc (1, sizeof (struct gdbarch_tdep));
+  gdbarch = gdbarch_alloc (&info, tdep);
+
+  /* AArch64 instructions are always little-endian, so only a
+     little-endian breakpoint encoding is needed.  */
+  tdep->aarch64_breakpoint = aarch64_default_le_breakpoint;
+  tdep->aarch64_breakpoint_size = sizeof (aarch64_default_le_breakpoint);
+
+  /* This should be low enough for everything.  */
+  tdep->lowest_pc = 0x20;
+  tdep->jb_pc = -1;		/* Longjump support not enabled by default.  */
+  tdep->jb_elt_size = 8;
+
+  set_gdbarch_push_dummy_call (gdbarch, aarch64_push_dummy_call);
+  set_gdbarch_frame_align (gdbarch, aarch64_frame_align);
+
+  set_gdbarch_write_pc (gdbarch, aarch64_write_pc);
+
+  /* Frame handling.  */
+  set_gdbarch_dummy_id (gdbarch, aarch64_dummy_id);
+  set_gdbarch_unwind_pc (gdbarch, aarch64_unwind_pc);
+  set_gdbarch_unwind_sp (gdbarch, aarch64_unwind_sp);
+
+  /* Address manipulation.  */
+  set_gdbarch_addr_bits_remove (gdbarch, aarch64_addr_bits_remove);
+
+  /* Advance PC across function entry code.  */
+  set_gdbarch_skip_prologue (gdbarch, aarch64_skip_prologue);
+
+  /* The stack grows downward.  */
+  set_gdbarch_inner_than (gdbarch, core_addr_lessthan);
+
+  /* Breakpoint manipulation.  */
+  set_gdbarch_breakpoint_from_pc (gdbarch, aarch64_breakpoint_from_pc);
+  set_gdbarch_cannot_step_breakpoint (gdbarch, 1);
+  set_gdbarch_have_nonsteppable_watchpoint (gdbarch, 1);
+
+  /* Information about registers, etc.  */
+  set_gdbarch_sp_regnum (gdbarch, AARCH64_SP_REGNUM);
+  set_gdbarch_pc_regnum (gdbarch, AARCH64_PC_REGNUM);
+  set_gdbarch_num_regs (gdbarch, num_regs);
+
+  set_gdbarch_num_pseudo_regs (gdbarch, num_pseudo_regs);
+  set_gdbarch_pseudo_register_read (gdbarch, aarch64_pseudo_read);
+  set_gdbarch_pseudo_register_write (gdbarch, aarch64_pseudo_write);
+  set_tdesc_pseudo_register_name (gdbarch, aarch64_pseudo_register_name);
+  set_tdesc_pseudo_register_type (gdbarch, aarch64_pseudo_register_type);
+  set_tdesc_pseudo_register_reggroup_p (gdbarch,
+					aarch64_pseudo_register_reggroup_p);
+
+  /* ABI */
+  set_gdbarch_short_bit (gdbarch, 16);
+  set_gdbarch_int_bit (gdbarch, 32);
+  set_gdbarch_float_bit (gdbarch, 32);
+  set_gdbarch_double_bit (gdbarch, 64);
+  set_gdbarch_long_double_bit (gdbarch, 128);
+  set_gdbarch_long_bit (gdbarch, 64);
+  set_gdbarch_long_long_bit (gdbarch, 64);
+  set_gdbarch_ptr_bit (gdbarch, 64);
+  set_gdbarch_char_signed (gdbarch, 0);
+  set_gdbarch_float_format (gdbarch, floatformats_ieee_single);
+  set_gdbarch_double_format (gdbarch, floatformats_ieee_double);
+  set_gdbarch_long_double_format (gdbarch, floatformats_ia64_quad);
+
+  /* Internal <-> external register number maps.  */
+  set_gdbarch_dwarf2_reg_to_regnum (gdbarch, aarch64_dwarf_reg_to_regnum);
+
+  /* Returning results.  */
+  set_gdbarch_return_value (gdbarch, aarch64_return_value);
+
+  /* Disassembly.  */
+  set_gdbarch_print_insn (gdbarch, gdb_print_insn_aarch64);
+
+  /* Virtual tables.  */
+  set_gdbarch_vbit_in_delta (gdbarch, 1);
+
+  /* Hook in the ABI-specific overrides, if they have been registered.  */
+  info.target_desc = tdesc;
+  info.tdep_info = (void *) tdesc_data;
+  gdbarch_init_osabi (info, gdbarch);
+
+  dwarf2_frame_set_init_reg (gdbarch, aarch64_dwarf2_frame_init_reg);
+
+  /* Add some default predicates.  */
+  frame_unwind_append_unwinder (gdbarch, &aarch64_stub_unwind);
+  dwarf2_append_unwinders (gdbarch);
+  frame_unwind_append_unwinder (gdbarch, &aarch64_prologue_unwind);
+
+  /*frame_base_append_sniffer (gdbarch, dwarf2_frame_base_sniffer); */
+  frame_base_set_default (gdbarch, &aarch64_normal_base);
+
+  /* Now we have tuned the configuration, set a few final things,
+     based on what the OS ABI has told us.  */
+
+  if (tdep->jb_pc >= 0)
+    set_gdbarch_get_longjmp_target (gdbarch, aarch64_get_longjmp_target);
+
+  tdesc_use_registers (gdbarch, tdesc, tdesc_data);
+
+  /* Add standard register aliases.  */
+  for (i = 0; i < ARRAY_SIZE (aarch64_register_aliases); i++)
+    user_reg_add (gdbarch, aarch64_register_aliases[i].name,
+		  value_of_aarch64_user_reg,
+		  &aarch64_register_aliases[i].regnum);
+
+  return gdbarch;
+}
+
+static void
+aarch64_dump_tdep (struct gdbarch *gdbarch, struct ui_file *file)
+{
+  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+  if (tdep == NULL)
+    return;
+
+  fprintf_unfiltered (file, _("aarch64_dump_tdep: Lowest pc = 0x%s"),
+		      paddress (gdbarch, tdep->lowest_pc));
+}
+
+/* Suppress warning from -Wmissing-prototypes.  */
+extern initialize_file_ftype _initialize_aarch64_tdep;
+
+void
+_initialize_aarch64_tdep (void)
+{
+  struct cmd_list_element *new_set, *new_show;
+  const char *setname;
+  const char *setdesc;
+
+  gdbarch_register (bfd_arch_aarch64, aarch64_gdbarch_init,
+		    aarch64_dump_tdep);
+
+  initialize_tdesc_aarch64 ();
+  initialize_tdesc_aarch64_without_fpu ();
+
+  /* Debug this file's internals.  */
+  add_setshow_zinteger_cmd ("aarch64", class_maintenance, &aarch64_debug, _("\
+Set AArch64 debugging."), _("\
+Show AArch64 debugging."), _("\
+When non-zero, AArch64 specific debugging is enabled."),
+			    NULL,
+			    show_aarch64_debug,
+			    &setdebuglist, &showdebuglist);
+}
diff --git a/gdb/aarch64-tdep.h b/gdb/aarch64-tdep.h
new file mode 100644
index 0000000..7a99168
--- /dev/null
+++ b/gdb/aarch64-tdep.h
@@ -0,0 +1,128 @@
+/* Common target dependent code for GDB on AArch64 systems.
+
+   Copyright (C) 2009-2012 Free Software Foundation, Inc.
+   Contributed by ARM Ltd.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+
+#ifndef AARCH64_TDEP_H
+#define AARCH64_TDEP_H
+
+/* Forward declarations.  */
+struct gdbarch;
+struct regset;
+
+/* AArch64 Dwarf register numbering.  */
+
+#define AARCH64_DWARF_X0   0
+#define AARCH64_DWARF_SP  31
+#define AARCH64_DWARF_V0  64
+
+/* Register numbers of various important registers.  */
+
+enum gdb_regnum
+{
+  AARCH64_X0_REGNUM,		/* First integer register */
+
+  /* Frame register in AArch64 code, if used.  */
+  AARCH64_FP_REGNUM = AARCH64_X0_REGNUM + 29,
+  AARCH64_LR_REGNUM = AARCH64_X0_REGNUM + 30,	/* Return address */
+  AARCH64_SP_REGNUM,		/* Stack pointer */
+  AARCH64_PC_REGNUM,		/* Program counter */
+  AARCH64_CPSR_REGNUM,		/* Contains status register */
+  AARCH64_V0_REGNUM,		/* First floating point / vector register */
+
+  /* Last floating point / vector register */
+  AARCH64_V31_REGNUM = AARCH64_V0_REGNUM + 31,
+  AARCH64_FPSR_REGNUM,		/* Floating point status register */
+  AARCH64_FPCR_REGNUM,		/* Floating point control register */
+
+  /* Other useful registers.  */
+
+  /* Last integer-like argument */
+  AARCH64_LAST_X_ARG_REGNUM = AARCH64_X0_REGNUM + 7,
+  AARCH64_STRUCT_RETURN_REGNUM = AARCH64_X0_REGNUM + 8,
+  AARCH64_LAST_V_ARG_REGNUM = AARCH64_V0_REGNUM + 7
+};
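+
+/* With this numbering, X0 is register 0, the frame pointer X29 is
+   register 29, LR 30, SP 31, PC 32, CPSR 33, V0 34, V31 65, FPSR 66
+   and FPCR 67, matching the register numbers used in the aarch64
+   target description and in the gdbserver port.  */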
+
+/* Sizes of AArch64 registers, in bytes.  */
+#define X_REGISTER_SIZE	 8
+#define B_REGISTER_SIZE  1
+#define H_REGISTER_SIZE  2
+#define S_REGISTER_SIZE  4
+#define D_REGISTER_SIZE  8
+#define V_REGISTER_SIZE 16
+#define Q_REGISTER_SIZE 16
+
+/* Instruction condition field values.  */
+#define INST_EQ		0x0
+#define INST_NE		0x1
+#define INST_CS		0x2
+#define INST_CC		0x3
+#define INST_MI		0x4
+#define INST_PL		0x5
+#define INST_VS		0x6
+#define INST_VC		0x7
+#define INST_HI		0x8
+#define INST_LS		0x9
+#define INST_GE		0xa
+#define INST_LT		0xb
+#define INST_GT		0xc
+#define INST_LE		0xd
+#define INST_AL		0xe
+#define INST_NV		0xf
+
+#define FLAG_N		0x80000000
+#define FLAG_Z		0x40000000
+#define FLAG_C		0x20000000
+#define FLAG_V		0x10000000
+
+
+/* Target-dependent structure in gdbarch.  */
+struct gdbarch_tdep
+{
+  CORE_ADDR lowest_pc;		/* Lowest address at which instructions
+				   will appear.  */
+
+  /* Breakpoint pattern for an AArch64 insn.  */
+  const char *aarch64_breakpoint;
+
+  /* And its size.  */
+  int aarch64_breakpoint_size;
+
+  int jb_pc;			/* Offset to PC value in jump buffer.
+				   If this is negative, longjmp support
+				   will be disabled.  */
+  size_t jb_elt_size;		/* And the size of each entry in the buf.  */
+
+  /* Cached core file helpers.  */
+  struct regset *gregset;
+  struct regset *fpregset;
+
+  struct type *vnq_type;
+  struct type *vnd_type;
+  struct type *vns_type;
+  struct type *vnh_type;
+  struct type *vnb_type;
+};
+
+
+CORE_ADDR aarch64_skip_stub (struct frame_info *, CORE_ADDR);
+CORE_ADDR aarch64_get_next_pc (struct frame_info *, CORE_ADDR);
+int aarch64_software_single_step (struct frame_info *);
+
+#endif /* aarch64-tdep.h */
diff --git a/gdb/config/aarch64/linux.mh b/gdb/config/aarch64/linux.mh
new file mode 100644
index 0000000..bc119c3
--- /dev/null
+++ b/gdb/config/aarch64/linux.mh
@@ -0,0 +1,9 @@
+# Host: AArch64 based machine running GNU/Linux
+
+NAT_FILE= config/nm-linux.h
+NATDEPFILES= inf-ptrace.o fork-child.o aarch64-linux-nat.o \
+	proc-service.o linux-thread-db.o linux-nat.o linux-fork.o \
+	linux-procfs.o linux-ptrace.o linux-osdata.o
+NAT_CDEPS = $(srcdir)/proc-service.list
+
+LOADLIBES= -ldl $(RDYNAMIC)
diff --git a/gdb/configure.host b/gdb/configure.host
index 7dc35e1..c5a7a3e 100644
--- a/gdb/configure.host
+++ b/gdb/configure.host
@@ -39,6 +39,7 @@ esac
 
 case "${host_cpu}" in
 
+aarch64*)		gdb_host_cpu=aarch64 ;;
 alpha*)			gdb_host_cpu=alpha ;;
 arm*)			gdb_host_cpu=arm ;;
 hppa*)			gdb_host_cpu=pa ;;
@@ -64,6 +65,8 @@ case "${host}" in
 
 *-*-darwin*)		gdb_host=darwin ;;
 
+aarch64*-*-linux*)	gdb_host=linux ;;
+
 alpha*-*-osf[3456789]*)	gdb_host=alpha-osf3 ;;
 alpha*-*-linux*)	gdb_host=alpha-linux ;;
 alpha*-*-freebsd* | alpha*-*-kfreebsd*-gnu)
diff --git a/gdb/configure.tgt b/gdb/configure.tgt
index 36d4304..63fd4b0 100644
--- a/gdb/configure.tgt
+++ b/gdb/configure.tgt
@@ -31,6 +31,18 @@ esac
 # map target info into gdb names.
 
 case "${targ}" in
+aarch64*-*-elf)
+	# Target: AArch64 embedded system
+	gdb_target_obs="aarch64-tdep.o aarch64-newlib-tdep.o"
+	;;
+
+aarch64*-*-linux*)
+	# Target: AArch64 linux
+	gdb_target_obs="aarch64-tdep.o aarch64-linux-tdep.o \
+			glibc-tdep.o linux-tdep.o solib-svr4.o \
+			symfile-mem.o"
+	build_gdbserver=yes
+	;;
 
 alpha*-*-osf*)
 	# Target: Little-endian Alpha running OSF/1
diff --git a/gdb/defs.h b/gdb/defs.h
index de34740..2764399 100644
--- a/gdb/defs.h
+++ b/gdb/defs.h
@@ -592,6 +592,7 @@ enum gdb_osabi
   GDB_OSABI_DARWIN,
   GDB_OSABI_SYMBIAN,
   GDB_OSABI_OPENVMS,
+  GDB_OSABI_NEWLIB,
 
   GDB_OSABI_INVALID		/* keep this last */
 };
diff --git a/gdb/features/Makefile b/gdb/features/Makefile
index 79803a5..6f2728e 100644
--- a/gdb/features/Makefile
+++ b/gdb/features/Makefile
@@ -30,7 +30,8 @@
 # in the GDB repository.  To generate C files:
 #   make GDB=/path/to/gdb XMLTOC="xml files" cfiles
 
-WHICH = arm-with-iwmmxt arm-with-vfpv2 arm-with-vfpv3 arm-with-neon \
+WHICH = aarch64 aarch64-without-fpu \
+	arm-with-iwmmxt arm-with-vfpv2 arm-with-vfpv3 arm-with-neon \
 	arm-with-m arm-with-m-fpa-layout arm-with-m-vfp-d16 \
 	i386/i386 i386/i386-linux \
 	i386/i386-mmx i386/i386-mmx-linux \
@@ -52,6 +53,7 @@ WHICH = arm-with-iwmmxt arm-with-vfpv2 arm-with-vfpv3 arm-with-neon \
 	tic6x-c64xp-linux tic6x-c64x-linux tic6x-c62x-linux
 
 # Record which registers should be sent to GDB by default after stop.
+aarch64-expedite = x29,sp,pc
 arm-expedite = r11,sp,pc
 i386/i386-expedite = ebp,esp,eip
 i386/i386-linux-expedite = ebp,esp,eip
diff --git a/gdb/features/aarch64-core.xml b/gdb/features/aarch64-core.xml
new file mode 100644
index 0000000..e1e9dc3
--- /dev/null
+++ b/gdb/features/aarch64-core.xml
@@ -0,0 +1,46 @@
+<?xml version="1.0"?>
+<!-- Copyright (C) 2009-2012 Free Software Foundation, Inc.
+     Contributed by ARM Ltd.
+
+     Copying and distribution of this file, with or without modification,
+     are permitted in any medium without royalty provided the copyright
+     notice and this notice are preserved.  -->
+
+<!DOCTYPE feature SYSTEM "gdb-target.dtd">
+<feature name="org.gnu.gdb.aarch64.core">
+  <reg name="x0" bitsize="64"/>
+  <reg name="x1" bitsize="64"/>
+  <reg name="x2" bitsize="64"/>
+  <reg name="x3" bitsize="64"/>
+  <reg name="x4" bitsize="64"/>
+  <reg name="x5" bitsize="64"/>
+  <reg name="x6" bitsize="64"/>
+  <reg name="x7" bitsize="64"/>
+  <reg name="x8" bitsize="64"/>
+  <reg name="x9" bitsize="64"/>
+  <reg name="x10" bitsize="64"/>
+  <reg name="x11" bitsize="64"/>
+  <reg name="x12" bitsize="64"/>
+  <reg name="x13" bitsize="64"/>
+  <reg name="x14" bitsize="64"/>
+  <reg name="x15" bitsize="64"/>
+  <reg name="x16" bitsize="64"/>
+  <reg name="x17" bitsize="64"/>
+  <reg name="x18" bitsize="64"/>
+  <reg name="x19" bitsize="64"/>
+  <reg name="x20" bitsize="64"/>
+  <reg name="x21" bitsize="64"/>
+  <reg name="x22" bitsize="64"/>
+  <reg name="x23" bitsize="64"/>
+  <reg name="x24" bitsize="64"/>
+  <reg name="x25" bitsize="64"/>
+  <reg name="x26" bitsize="64"/>
+  <reg name="x27" bitsize="64"/>
+  <reg name="x28" bitsize="64"/>
+  <reg name="x29" bitsize="64"/>
+  <reg name="x30" bitsize="64"/>
+  <reg name="sp" bitsize="64" type="data_ptr"/>
+
+  <reg name="pc" bitsize="64" type="code_ptr"/>
+  <reg name="cpsr" bitsize="32"/>
+</feature>
diff --git a/gdb/features/aarch64-fpu.xml b/gdb/features/aarch64-fpu.xml
new file mode 100644
index 0000000..997197e
--- /dev/null
+++ b/gdb/features/aarch64-fpu.xml
@@ -0,0 +1,86 @@
+<?xml version="1.0"?>
+<!-- Copyright (C) 2009-2012 Free Software Foundation, Inc.
+     Contributed by ARM Ltd.
+
+     Copying and distribution of this file, with or without modification,
+     are permitted in any medium without royalty provided the copyright
+     notice and this notice are preserved.  -->
+
+<!DOCTYPE feature SYSTEM "gdb-target.dtd">
+<feature name="org.gnu.gdb.aarch64.fpu">
+  <vector id="v2d" type="ieee_double" count="2"/>
+  <vector id="v2u" type="uint64" count="2"/>
+  <vector id="v2i" type="int64" count="2"/>
+  <vector id="v4f" type="ieee_single" count="4"/>
+  <vector id="v4u" type="uint32" count="4"/>
+  <vector id="v4i" type="int32" count="4"/>
+  <vector id="v8u" type="uint16" count="8"/>
+  <vector id="v8i" type="int16" count="8"/>
+  <vector id="v16u" type="uint8" count="16"/>
+  <vector id="v16i" type="int8" count="16"/>
+  <vector id="v1u" type="uint128" count="1"/>
+  <vector id="v1i" type="int128" count="1"/>
+  <union id="vnd">
+    <field name="f" type="v2d"/>
+    <field name="u" type="v2u"/>
+    <field name="s" type="v2i"/>
+  </union>
+  <union id="vns">
+    <field name="f" type="v4f"/>
+    <field name="u" type="v4u"/>
+    <field name="s" type="v4i"/>
+  </union>
+  <union id="vnh">
+    <field name="u" type="v8u"/>
+    <field name="s" type="v8i"/>
+  </union>
+  <union id="vnb">
+    <field name="u" type="v16u"/>
+    <field name="s" type="v16i"/>
+  </union>
+  <union id="vnq">
+    <field name="u" type="v1u"/>
+    <field name="s" type="v1i"/>
+  </union>
+  <union id="aarch64v">
+    <field name="d" type="vnd"/>
+    <field name="s" type="vns"/>
+    <field name="h" type="vnh"/>
+    <field name="b" type="vnb"/>
+    <field name="q" type="vnq"/>
+  </union>
+  <reg name="v0" bitsize="128" type="aarch64v" regnum="34"/>
+  <reg name="v1" bitsize="128" type="aarch64v" />
+  <reg name="v2" bitsize="128" type="aarch64v" />
+  <reg name="v3" bitsize="128" type="aarch64v" />
+  <reg name="v4" bitsize="128" type="aarch64v" />
+  <reg name="v5" bitsize="128" type="aarch64v" />
+  <reg name="v6" bitsize="128" type="aarch64v" />
+  <reg name="v7" bitsize="128" type="aarch64v" />
+  <reg name="v8" bitsize="128" type="aarch64v" />
+  <reg name="v9" bitsize="128" type="aarch64v" />
+  <reg name="v10" bitsize="128" type="aarch64v"/>
+  <reg name="v11" bitsize="128" type="aarch64v"/>
+  <reg name="v12" bitsize="128" type="aarch64v"/>
+  <reg name="v13" bitsize="128" type="aarch64v"/>
+  <reg name="v14" bitsize="128" type="aarch64v"/>
+  <reg name="v15" bitsize="128" type="aarch64v"/>
+  <reg name="v16" bitsize="128" type="aarch64v"/>
+  <reg name="v17" bitsize="128" type="aarch64v"/>
+  <reg name="v18" bitsize="128" type="aarch64v"/>
+  <reg name="v19" bitsize="128" type="aarch64v"/>
+  <reg name="v20" bitsize="128" type="aarch64v"/>
+  <reg name="v21" bitsize="128" type="aarch64v"/>
+  <reg name="v22" bitsize="128" type="aarch64v"/>
+  <reg name="v23" bitsize="128" type="aarch64v"/>
+  <reg name="v24" bitsize="128" type="aarch64v"/>
+  <reg name="v25" bitsize="128" type="aarch64v"/>
+  <reg name="v26" bitsize="128" type="aarch64v"/>
+  <reg name="v27" bitsize="128" type="aarch64v"/>
+  <reg name="v28" bitsize="128" type="aarch64v"/>
+  <reg name="v29" bitsize="128" type="aarch64v"/>
+  <reg name="v30" bitsize="128" type="aarch64v"/>
+  <reg name="v31" bitsize="128" type="aarch64v"/>
+  <reg name="fpsr" bitsize="32"/>
+  <reg name="fpcr" bitsize="32"/>
+</feature>
diff --git a/gdb/features/aarch64-without-fpu.xml b/gdb/features/aarch64-without-fpu.xml
new file mode 100644
index 0000000..663741f
--- /dev/null
+++ b/gdb/features/aarch64-without-fpu.xml
@@ -0,0 +1,13 @@
+<?xml version="1.0"?>
+<!-- Copyright (C) 2009-2012 Free Software Foundation, Inc.
+     Contributed by ARM Ltd.
+
+     Copying and distribution of this file, with or without modification,
+     are permitted in any medium without royalty provided the copyright
+     notice and this notice are preserved.  -->
+
+<!DOCTYPE target SYSTEM "gdb-target.dtd">
+<target>
+  <architecture>aarch64</architecture>
+  <xi:include href="aarch64-core.xml"/>
+</target>
diff --git a/gdb/features/aarch64.xml b/gdb/features/aarch64.xml
new file mode 100644
index 0000000..f7ca62a
--- /dev/null
+++ b/gdb/features/aarch64.xml
@@ -0,0 +1,14 @@
+<?xml version="1.0"?>
+<!-- Copyright (C) 2009-2012 Free Software Foundation, Inc.
+     Contributed by ARM Ltd.
+
+     Copying and distribution of this file, with or without modification,
+     are permitted in any medium without royalty provided the copyright
+     notice and this notice are preserved.  -->
+
+<!DOCTYPE target SYSTEM "gdb-target.dtd">
+<target>
+  <architecture>aarch64</architecture>
+  <xi:include href="aarch64-core.xml"/>
+  <xi:include href="aarch64-fpu.xml"/>
+</target>
diff --git a/gdb/gdbserver/Makefile.in b/gdb/gdbserver/Makefile.in
index f62799e..d2c63c9 100644
--- a/gdb/gdbserver/Makefile.in
+++ b/gdb/gdbserver/Makefile.in
@@ -305,6 +305,7 @@ clean:
 	rm -f version.c
 	rm -f gdbserver$(EXEEXT) gdbreplay$(EXEEXT) core make.log
 	rm -f $(IPA_LIB)
+	rm -f aarch64.c aarch64-without-fpu.c
 	rm -f reg-arm.c reg-bfin.c i386.c reg-ia64.c reg-m32r.c reg-m68k.c
 	rm -f reg-sh.c reg-sparc.c reg-spu.c amd64.c i386-linux.c
 	rm -f reg-cris.c reg-crisv32.c amd64-linux.c reg-xtensa.c
@@ -580,6 +581,12 @@ win32-i386-low.o: win32-i386-low.c $(win32_low_h) $(server_h) $(i386_low_h)
 
 spu-low.o: spu-low.c $(server_h)
 
+aarch64.o : aarch64.c $(regdef_h)
+aarch64.c : $(srcdir)/../regformats/aarch64.dat $(regdat_sh)
+	$(SHELL) $(regdat_sh) $(srcdir)/../regformats/aarch64.dat aarch64.c
+aarch64-without-fpu.o : aarch64-without-fpu.c $(regdef_h)
+aarch64-without-fpu.c : $(srcdir)/../regformats/aarch64-without-fpu.dat $(regdat_sh)
+	$(SHELL) $(regdat_sh) $(srcdir)/../regformats/aarch64-without-fpu.dat aarch64-without-fpu.c
 reg-arm.o : reg-arm.c $(regdef_h)
 reg-arm.c : $(srcdir)/../regformats/reg-arm.dat $(regdat_sh)
 	$(SHELL) $(regdat_sh) $(srcdir)/../regformats/reg-arm.dat reg-arm.c
diff --git a/gdb/gdbserver/configure.srv b/gdb/gdbserver/configure.srv
index d1e04a9..54c4a02 100644
--- a/gdb/gdbserver/configure.srv
+++ b/gdb/gdbserver/configure.srv
@@ -42,6 +42,21 @@ srv_amd64_linux_xmlfiles="i386/amd64-linux.xml i386/amd64-avx-linux.xml i386/64b
 # Input is taken from the "${target}" variable.
 
 case "${target}" in
+  aarch64*-*-linux*)
+			srv_regobj="aarch64.o aarch64-without-fpu.o"
+			srv_tgtobj="linux-aarch64-low.o"
+			srv_tgtobj="${srv_tgtobj} linux-low.o"
+			srv_tgtobj="${srv_tgtobj} linux-osdata.o"
+			srv_tgtobj="${srv_tgtobj} linux-procfs.o"
+			srv_tgtobj="${srv_tgtobj} linux-ptrace.o"
+			srv_xmlfiles="aarch64.xml"
+			srv_xmlfiles="${srv_xmlfiles} aarch64-core.xml"
+			srv_xmlfiles="${srv_xmlfiles} aarch64-fpu.xml"
+			srv_xmlfiles="${srv_xmlfiles} aarch64-without-fpu.xml"
+			srv_linux_usrregs=yes
+			srv_linux_regsets=yes
+			srv_linux_thread_db=yes
+			;;
   arm*-*-linux*)	srv_regobj="reg-arm.o arm-with-iwmmxt.o"
 			srv_regobj="${srv_regobj} arm-with-vfpv2.o"
 			srv_regobj="${srv_regobj} arm-with-vfpv3.o"
diff --git a/gdb/gdbserver/linux-aarch64-low.c b/gdb/gdbserver/linux-aarch64-low.c
new file mode 100644
index 0000000..3369f21
--- /dev/null
+++ b/gdb/gdbserver/linux-aarch64-low.c
@@ -0,0 +1,1315 @@
+/* GNU/Linux/AArch64 specific low level interface, for the remote server for
+   GDB.
+
+   Copyright (C) 2009-2012 Free Software Foundation, Inc.
+   Contributed by ARM Ltd.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "server.h"
+#include "linux-low.h"
+
+#include <elf.h>
+#include <signal.h>
+#include <sys/user.h>
+#include <sys/ptrace.h>
+
+#include "gdb_proc_service.h"
+
+/* Defined in auto-generated files.  */
+void init_registers_aarch64 (void);
+
+/* Defined in auto-generated files.  */
+void init_registers_aarch64_without_fpu (void);
+
+#ifndef PTRACE_GET_THREAD_AREA
+#define PTRACE_GET_THREAD_AREA 22
+#endif
+
+#ifndef PTRACE_GETHBPREGS
+#define PTRACE_GETHBPREGS 29
+#endif
+
+#ifndef PTRACE_SETHBPREGS
+#define PTRACE_SETHBPREGS 30
+#endif
+
+#ifdef HAVE_SYS_REG_H
+#include <sys/reg.h>
+#endif
+
+#define AARCH64_X_REGS_NUM 31
+#define AARCH64_V_REGS_NUM 32
+#define AARCH64_X0_REGNO    0
+#define AARCH64_SP_REGNO   31
+#define AARCH64_PC_REGNO   32
+#define AARCH64_CPSR_REGNO 33
+#define AARCH64_V0_REGNO   34
+
+#define AARCH64_NUM_REGS (AARCH64_V0_REGNO + AARCH64_V_REGS_NUM)
+
+#ifndef TRAP_HWBKPT
+#define TRAP_HWBKPT 0x0004
+#endif
+
+static int
+aarch64_regmap [] =
+{
+  /* These offsets correspond to GET/SETREGSET */
+  /* x0...  */
+   0*8,  1*8,  2*8,  3*8,  4*8,  5*8,  6*8,  7*8,
+   8*8,  9*8, 10*8, 11*8, 12*8, 13*8, 14*8, 15*8,
+  16*8, 17*8, 18*8, 19*8, 20*8, 21*8, 22*8, 23*8,
+  24*8, 25*8, 26*8, 27*8, 28*8,
+  29*8,
+  30*8,				/* x30 lr */
+  31*8,				/* x31 sp */
+  32*8,				/*     pc */
+  33*8,				/*     cpsr    4 bytes!*/
+
+  /* FP register offsets correspond to GET/SETFPREGSET */
+   0*16,  1*16,  2*16,  3*16,  4*16,  5*16,  6*16,  7*16,
+   8*16,  9*16, 10*16, 11*16, 12*16, 13*16, 14*16, 15*16,
+  16*16, 17*16, 18*16, 19*16, 20*16, 21*16, 22*16, 23*16,
+  24*16, 25*16, 26*16, 27*16, 28*16, 29*16, 30*16, 31*16
+};
+
+/* Here start the macro definitions, data structures, and code for the
+   hardware breakpoint and hardware watchpoint support.  The following
+   abbreviations are used frequently in the code and comments:
+
+   hw - hardware
+   bp - breakpoint
+   wp - watchpoint  */
+
+/* Maximum number of hardware breakpoints/watchpoints.
+   N.B.  When changing, especially increasing, these numbers, also make
+   sure the type dr_changed_t is still wide enough, i.e. that its number
+   of bits is equal to or larger than the larger of the two macro values.  */
+
+#define AARCH64_HBP_MAX_NUM 16
+#define AARCH64_HWP_MAX_NUM 16
+
+/* Alignment requirement in bytes of hardware breakpoint and watchpoint
+   address.  This is the requirement for the addresses that can be written
+   to the hardware breakpoint/watchpoint value registers.  The kernel
+   currently does not do any alignment on addresses when receiving a
+   write request (via a ptrace call) for these debug registers, and it
+   will reject any address that is unaligned.
+   Some limited support has been provided in this gdbserver port for
+   unaligned watchpoints, so that from a gdb user point of view, an
+   unaligned watchpoint can still be set.  This is achieved by minimally
+   enlarging the watched area to meet the alignment requirement, and if
+   necessary, splitting the watchpoint over several hardware watchpoint
+   registers.  */
+
+#define AARCH64_HBP_ALIGNMENT 4
+#define AARCH64_HWP_ALIGNMENT 8
+
+/* The maximum length of a memory region that can be watched by one hardware
+   watchpoint register.  */
+
+#define AARCH64_HWP_MAX_LEN_PER_REG 8
+
+/* Each bit of a variable of this type is used to indicate whether a
+   hardware breakpoint or watchpoint setting has been changed since the
+   last update.  Bit N corresponds to the Nth hardware breakpoint or
+   watchpoint setting managed in aarch64_debug_reg_state, where N ranges
+   from 0 to the total number of hardware breakpoint or watchpoint debug
+   registers minus 1.  When bit N is 1, the corresponding breakpoint or
+   watchpoint setting has changed, and thus
+   the corresponding hardware debug register needs to be updated via the
+   ptrace interface.
+
+   In the per-thread arch-specific data area, we define two such variables
+   for per-thread hardware breakpoint and watchpoint settings respectively.
+
+   This type is part of the mechanism which helps reduce the number of
+   ptrace calls to the kernel, i.e. avoid asking the kernel to write to
+   the debug registers with unchanged values.  */
+
+typedef unsigned long long dr_changed_t;
+
+/* Set each of the lower M bits of X to 1; assert X is wide enough.  */
+
+#define DR_MARK_ALL_CHANGED(x, m)					\
+  do									\
+    {									\
+      gdb_assert (sizeof ((x)) * 8 >= (m));				\
+      (x) = (((dr_changed_t)1 << (m)) - 1);				\
+    } while (0)
+
+#define DR_MARK_N_CHANGED(x, n)						\
+  do									\
+    {									\
+      (x) |= ((dr_changed_t)1 << (n));					\
+    } while (0)
+
+#define DR_CLEAR_CHANGED(x)						\
+  do									\
+    {									\
+      (x) = 0;								\
+    } while (0)
+
+#define DR_HAS_CHANGED(x) ((x) != 0)
+#define DR_N_HAS_CHANGED(x, n) ((x) & ((dr_changed_t)1 << (n)))
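+
+/* For example, with aarch64_num_bp_regs == 6, DR_MARK_ALL_CHANGED (x, 6)
+   sets x to 0x3f, i.e. marks all six breakpoint register pairs as
+   pending an update; DR_N_HAS_CHANGED (x, 3) then tests bit 3, and
+   DR_CLEAR_CHANGED (x) resets the whole mask once the ptrace writes
+   have been issued.  */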
+
+/* Structure for managing the hardware breakpoint/watchpoint resources.
+   DR_ADDR_* stores the address, DR_CTRL_* stores the control register
+   content, and DR_REF_COUNT_* counts the number of references to the
+   corresponding bp/wp, so that the limited hardware resources are not
+   wasted on duplicate bp/wp settings (though so far gdb has done a good
+   job of not sending duplicate bp/wp requests).  */
+
+struct aarch64_debug_reg_state
+{
+  /* hardware breakpoint */
+  CORE_ADDR dr_addr_bp[AARCH64_HBP_MAX_NUM];
+  unsigned int dr_ctrl_bp[AARCH64_HBP_MAX_NUM];
+  unsigned int dr_ref_count_bp[AARCH64_HBP_MAX_NUM];
+
+  /* hardware watchpoint */
+  CORE_ADDR dr_addr_wp[AARCH64_HWP_MAX_NUM];
+  unsigned int dr_ctrl_wp[AARCH64_HWP_MAX_NUM];
+  unsigned int dr_ref_count_wp[AARCH64_HWP_MAX_NUM];
+};
+
+/* Per-process arch-specific data we want to keep.  */
+
+struct arch_process_info
+{
+  /* Hardware breakpoint/watchpoint data.
+     This data is per-process rather than per-thread because of the lack
+     of information in the gdbserver environment; gdbserver is not told
+     whether a requested hardware breakpoint/watchpoint is thread specific
+     or not, so it has to set each hw bp/wp for every thread in the
+     current process.  The higher level bp/wp management in gdb will
+     resume a thread if a hw bp/wp trap is not expected for it.  Since
+     the hw bp/wp settings are the same for each thread, it is
+     reasonable for the data to live here.  */
+  struct aarch64_debug_reg_state debug_reg_state;
+};
+
+/* Per-thread arch-specific data we want to keep.  */
+
+struct arch_lwp_info
+{
+  /* When bit N is 1, it indicates the Nth hardware breakpoint or watchpoint
+     register pair needs to be updated when the thread is resumed; see
+     aarch64_linux_prepare_to_resume.  */
+  dr_changed_t dr_changed_bp;
+  dr_changed_t dr_changed_wp;
+};
+
+/* Number of hardware breakpoints/watchpoints the target supports.
+   They are initialized with values from the hardware breakpoint resource
+   info register, obtained via the ptrace call PTRACE_GETHBPREGS with
+   register index 0.  */
+
+static int aarch64_num_bp_regs;
+static int aarch64_num_wp_regs;
+
+/* Hardware breakpoint/watchpoint types.
+   The values map to their encodings in the bit 4 and bit 3 of the
+   hardware breakpoint/watchpoint control registers.  */
+
+enum target_point_type
+{
+  hw_execute = 0,		/* Execute HW breakpoint */
+  hw_read = 1,			/* Read    HW watchpoint */
+  hw_write = 2,			/* Common  HW watchpoint */
+  hw_access = 3,		/* Access  HW watchpoint */
+  point_type_unsupported
+};
+
+#define Z_PACKET_SW_BP '0'
+#define Z_PACKET_HW_BP '1'
+#define Z_PACKET_WRITE_WP '2'
+#define Z_PACKET_READ_WP '3'
+#define Z_PACKET_ACCESS_WP '4'
+
+/* Map the protocol breakpoint/watchpoint type TYPE to
+   enum target_point_type.  */
+
+static enum target_point_type
+Z_packet_to_point_type (char type)
+{
+  switch (type)
+    {
+    case Z_PACKET_SW_BP:
+      /* Leave the handling of the sw breakpoint with the gdb client.  */
+      return point_type_unsupported;
+    case Z_PACKET_HW_BP:
+      return hw_execute;
+    case Z_PACKET_WRITE_WP:
+      return hw_write;
+    case Z_PACKET_READ_WP:
+      return hw_read;
+    case Z_PACKET_ACCESS_WP:
+      return hw_access;
+    default:
+      return point_type_unsupported;
+    }
+}
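+
+/* For example, a "Z2,addr,kind" request from GDB (insert a write
+   watchpoint) reaches us with TYPE == '2' and maps to hw_write, while
+   a "Z0" software breakpoint request is reported as unsupported and is
+   left to the client to handle via memory writes.  */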
+
+static int
+aarch64_cannot_store_register (int regno)
+{
+  return (regno >= AARCH64_NUM_REGS);
+}
+
+static int
+aarch64_cannot_fetch_register (int regno)
+{
+  return (regno >= AARCH64_NUM_REGS);
+}
+
+static void
+aarch64_fill_gregset (struct regcache *regcache, void *buf)
+{
+  struct user_pt_regs *regset = buf;
+  int i;
+
+  for (i = 0; i < AARCH64_X_REGS_NUM; i++)
+    collect_register (regcache, AARCH64_X0_REGNO + i, &regset->regs[i]);
+  collect_register (regcache, AARCH64_SP_REGNO, &regset->sp);
+  collect_register (regcache, AARCH64_PC_REGNO, &regset->pc);
+  collect_register (regcache, AARCH64_CPSR_REGNO, &regset->pstate);
+}
+
+static void
+aarch64_store_gregset (struct regcache *regcache, const void *buf)
+{
+  const struct user_pt_regs *regset = buf;
+  int i;
+
+  for (i = 0; i < AARCH64_X_REGS_NUM; i++)
+    supply_register (regcache, AARCH64_X0_REGNO + i, &regset->regs[i]);
+  supply_register (regcache, AARCH64_SP_REGNO, &regset->sp);
+  supply_register (regcache, AARCH64_PC_REGNO, &regset->pc);
+  supply_register (regcache, AARCH64_CPSR_REGNO, &regset->pstate);
+}
+
+static void
+aarch64_fill_fpregset (struct regcache *regcache, void *buf)
+{
+  struct user_fpsimd_state *regset = buf;
+  int i;
+
+  for (i = 0; i < AARCH64_V_REGS_NUM; i++)
+    collect_register (regcache, AARCH64_V0_REGNO + i, &regset->vregs[i]);
+}
+
+static void
+aarch64_store_fpregset (struct regcache *regcache, const void *buf)
+{
+  const struct user_fpsimd_state *regset = buf;
+  int i;
+
+  for (i = 0; i < AARCH64_V_REGS_NUM; i++)
+    supply_register (regcache, AARCH64_V0_REGNO + i, &regset->vregs[i]);
+}
+
+/* Debugging of hardware breakpoint/watchpoint support.  */
+extern int debug_hw_points;
+
+/* Enable miscellaneous debugging output.  The name is historical - it
+   was originally used to debug LinuxThreads support.  */
+extern int debug_threads;
+
+static CORE_ADDR
+aarch64_get_pc (struct regcache *regcache)
+{
+  unsigned long pc;
+  collect_register_by_name (regcache, "pc", &pc);
+  if (debug_threads)
+    fprintf (stderr, "stop pc is %08lx\n", pc);
+  return pc;
+}
+
+static void
+aarch64_set_pc (struct regcache *regcache, CORE_ADDR pc)
+{
+  unsigned long newpc = pc;
+  supply_register_by_name (regcache, "pc", &newpc);
+}
+
+/* Correct in either endianness.  */
+
+#define aarch64_breakpoint_len 4
+
+static const unsigned long aarch64_breakpoint = 0x00800011;
+
+static int
+aarch64_breakpoint_at (CORE_ADDR where)
+{
+  unsigned long insn;
+
+  (*the_target->read_memory) (where, (unsigned char *) &insn, 4);
+  if (insn == aarch64_breakpoint)
+    return 1;
+
+  return 0;
+}
+
+/* Print the values of the cached breakpoint/watchpoint registers.
+   This is enabled via the "set debug-hw-points" monitor command.  */
+
+static void
+aarch64_show_debug_reg_state (struct aarch64_debug_reg_state *state,
+			      const char *func, CORE_ADDR addr,
+			      int len, enum target_point_type type)
+{
+  int i;
+
+  fprintf (stderr, "%s", func);
+  if (addr || len)
+    fprintf (stderr, " (addr=0x%08lx, len=%d, type=%s)",
+	     (unsigned long) addr, len,
+	     type == hw_write ? "hw-write-watchpoint"
+	     : (type == hw_read ? "hw-read-watchpoint"
+		: (type == hw_access ? "hw-access-watchpoint"
+		   : (type == hw_execute ? "hw-breakpoint"
+		      : "??unknown??"))));
+  fprintf (stderr, ":\n");
+
+  fprintf (stderr, "\tBREAKPOINTs:\n");
+  for (i = 0; i < aarch64_num_bp_regs; i++)
+    fprintf (stderr, "\tBP%d: addr=0x%s, ctrl=0x%08x, ref.count=%d\n",
+	     i, paddress (state->dr_addr_bp[i]),
+	     state->dr_ctrl_bp[i], state->dr_ref_count_bp[i]);
+
+  fprintf (stderr, "\tWATCHPOINTs:\n");
+  for (i = 0; i < aarch64_num_wp_regs; i++)
+    fprintf (stderr, "\tWP%d: addr=0x%s, ctrl=0x%08x, ref.count=%d\n",
+	     i, paddress (state->dr_addr_wp[i]),
+	     state->dr_ctrl_wp[i], state->dr_ref_count_wp[i]);
+}
+
+static void
+aarch64_init_debug_reg_state (struct aarch64_debug_reg_state *state)
+{
+  int i;
+
+  for (i = 0; i < AARCH64_HBP_MAX_NUM; ++i)
+    {
+      state->dr_addr_bp[i] = 0;
+      state->dr_ctrl_bp[i] = 0;
+      state->dr_ref_count_bp[i] = 0;
+    }
+
+  for (i = 0; i < AARCH64_HWP_MAX_NUM; ++i)
+    {
+      state->dr_addr_wp[i] = 0;
+      state->dr_ctrl_wp[i] = 0;
+      state->dr_ref_count_wp[i] = 0;
+    }
+}
+
+/* The following two utility routines map the index of a breakpoint/
+   watchpoint address/control register in aarch64_debug_reg_state to
+   the index that can be used in the ptrace call to access the
+   corresponding real hardware register.
+
+   In Linux, breakpoints are identified using positive numbers whilst
+   watchpoints are negative.  The registers are laid out as pairs of
+   (address, control).  Index 0 is reserved for describing resource
+   information.  */
+
+static unsigned long
+dr_idx_to_ptrace_addr_reg_idx (int is_watchpoint, int idx)
+{
+  return is_watchpoint ? -((idx << 1) + 1) : (idx << 1) + 1;
+}
+
+static unsigned long
+dr_idx_to_ptrace_ctrl_reg_idx (int is_watchpoint, int idx)
+{
+  return is_watchpoint ? -((idx << 1) + 2) : (idx << 1) + 2;
+}
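+
+/* For example, breakpoint pair 0 maps to ptrace indices 1 (address) and
+   2 (control) and breakpoint pair 3 to indices 7 and 8, while watchpoint
+   pair 0 maps to -1 and -2 and watchpoint pair 3 to -7 and -8.  Index 0
+   is never produced here; it is reserved for the resource info register.  */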
+
+/* ptrace expects control registers to be formatted as follows:
+
+   31                             13          5      3      1     0
+   +--------------------------------+----------+------+------+----+
+   |         RESERVED (SBZ)         |  LENGTH  | TYPE | PRIV | EN |
+   +--------------------------------+----------+------+------+----+
+
+   The TYPE field is ignored for breakpoints.  */
+
+#define DR_CONTROL_ENABLED(ctrl)	(((ctrl) & 0x1) == 1)
+#define DR_CONTROL_LENGTH(ctrl)		(((ctrl) >> 5) & 0xff)
+
+/* Given the hardware breakpoint or watchpoint type TYPE and its length LEN,
+   return the expected encoding for a hardware breakpoint/watchpoint control
+   register.  */
+
+static unsigned int
+aarch64_point_encode_ctrl_reg (enum target_point_type type, int len)
+{
+  unsigned int ctrl;
+
+  /* type */
+  ctrl = type << 3;
+  /* length bitmask */
+  ctrl |= ((1 << len) - 1) << 5;
+  /* enabled at el0 */
+  ctrl |= (2 << 1) | 1;
+
+  return ctrl;
+}
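+
+/* For example, a 4-byte write watchpoint (type == hw_write, len == 4)
+   yields (2 << 3) | (0xf << 5) | (2 << 1) | 1 == 0x1f5, i.e. a byte
+   address select mask of 0xf with the privilege and enable bits set
+   for EL0.  */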
+
+/* Addresses to be written to the hardware breakpoint and watchpoint value
+   registers need to be aligned; the alignment is 4 bytes and 8 bytes,
+   respectively.  The Linux kernel rejects any non-aligned address it
+   receives from the related ptrace call.  Furthermore, the kernel
+   currently only supports the following Byte Address Select (BAS)
+   values: 0x1, 0x3, 0xf and
+   0xff, which means that for a hardware watchpoint to be accepted by the
+   kernel (via ptrace call), its valid length can only be 1 byte, 2 bytes, 4
+   bytes or 8 bytes.  Despite these limitations, unaligned watchpoints are
+   supported in this gdbserver port.
+
+   Return 0 for any non-compliant ADDR and/or LEN; return 1 otherwise.  */
+
+static int
+aarch64_point_is_aligned (int is_watchpoint, CORE_ADDR addr, int len)
+{
+  unsigned int alignment = is_watchpoint ? AARCH64_HWP_ALIGNMENT
+    : AARCH64_HBP_ALIGNMENT;
+
+  if (addr & (alignment - 1))
+    {
+      return 0;
+    }
+
+  if (len != 8 && len != 4 && len != 2 && len != 1)
+    {
+      return 0;
+    }
+
+  return 1;
+}
+
+/* Given the (potentially unaligned) watchpoint address in ADDR and length
+   in LEN, return the aligned address and aligned length in *ALIGNED_ADDR_P
+   and *ALIGNED_LEN_P, respectively.  The returned aligned address and length
+   will be valid to be written to the hardware watchpoint value and control
+   registers.  See the comment above aarch64_point_is_aligned for the
+   information about the alignment requirement.  The given watchpoint may get
+   truncated if more than one hardware register is needed to cover the
+   watched region.  *NEXT_ADDR_P and *NEXT_LEN_P, if non-NULL, will return
+   the address and length of the remaining part of the watchpoint (which can
+   be processed by calling this routine again to generate another pair of
+   aligned address and length).
+
+   Essentially, an unaligned watchpoint is handled by minimally enlarging the
+   watched area to meet the alignment requirement, and if necessary,
+   splitting the watchpoint over several hardware watchpoint registers.  The
+   trade-off is that there will be false-positive hits for the read-type or
+   the access-type hardware watchpoints; for the write type, which is more
+   commonly used, there will be no such issues, as the higher-level
+   breakpoint management in gdb always examines the exact watched region for
+   any content change, and transparently resumes a thread from a watchpoint
+   trap if there is no change to the watched region.
+
+   Another limitation is that because the watched region is enlarged, the
+   watchpoint fault address returned by aarch64_stopped_data_address may be
+   outside of the original watched region, especially when the triggering
+   instruction is accessing a larger region.  When the fault address is not
+   within any known range, watchpoints_triggered in gdb will get confused,
+   as the higher-level watchpoint management is only aware of original
+   watched regions, and will think that some unknown watchpoint has been
+   triggered.  In such a case, gdb may stop without displaying any detailed
+   information.
+
+   Once the kernel provides the full support for Byte Address Select (BAS)
+   in the hardware watchpoint control register, these limitations can be
+   largely relaxed with some further work.  */
+
+static void
+aarch64_align_watchpoint (CORE_ADDR addr, int len, CORE_ADDR * aligned_addr_p,
+			  int *aligned_len_p, CORE_ADDR * next_addr_p,
+			  int *next_len_p)
+{
+  int aligned_len;
+  unsigned int offset;
+  CORE_ADDR aligned_addr;
+  const unsigned int alignment = AARCH64_HWP_ALIGNMENT;
+  const unsigned int max_wp_len = AARCH64_HWP_MAX_LEN_PER_REG;
+
+  /* As assumed by the algorithm.  */
+  gdb_assert (alignment == max_wp_len);
+
+  if (len <= 0)
+    return;
+
+  /* Address to be put into the hardware watchpoint value register must be
+     aligned.  */
+  offset = addr & (alignment - 1);
+  aligned_addr = addr - offset;
+
+  gdb_assert (offset >= 0 && offset < alignment);
+  gdb_assert (aligned_addr >= 0 && aligned_addr <= addr);
+  gdb_assert ((offset + len) > 0);
+
+  if ((offset + len) >= max_wp_len)
+    {
+      /* More than one watchpoint register is needed; truncate this one
+         at the alignment boundary.  */
+      aligned_len = max_wp_len;
+      len -= (max_wp_len - offset);
+      addr += (max_wp_len - offset);
+      gdb_assert ((addr & (alignment - 1)) == 0);
+    }
+  else
+    {
+      /* Find the smallest valid length that is large enough to accommodate
+         this watchpoint.  */
+      static const unsigned char
+	aligned_len_array[AARCH64_HWP_MAX_LEN_PER_REG] =
+	{ 1, 2, 4, 4, 8, 8, 8, 8 };
+
+      aligned_len = aligned_len_array[offset + len - 1];
+      addr += len;
+      len = 0;
+    }
+
+  if (aligned_addr_p)
+    *aligned_addr_p = aligned_addr;
+  if (aligned_len_p)
+    *aligned_len_p = aligned_len;
+  if (next_addr_p)
+    *next_addr_p = addr;
+  if (next_len_p)
+    *next_len_p = len;
+
+  return;
+}
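+
+/* For example, a 4-byte watchpoint at the unaligned address 0x1006 is
+   handled in two passes: the first call returns the aligned pair
+   (addr 0x1000, len 8) and leaves (addr 0x1008, len 2) as the remainder;
+   the second call then returns (addr 0x1008, len 2) with no remainder.  */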
+
+/* Set thread TID's IDXth hardware breakpoint/watchpoint address register
+   to ADDR.  */
+
+static void
+aarch64_linux_set_one_addr_dr (int tid, int is_watchpoint, int idx,
+			       CORE_ADDR addr)
+{
+  unsigned long ptrace_idx;
+
+  ptrace_idx = dr_idx_to_ptrace_addr_reg_idx (is_watchpoint, idx);
+  if (ptrace (PTRACE_SETHBPREGS, tid, ptrace_idx, &addr) != 0)
+    error ("ptrace_sethbpregs addr fails: %llx", addr);
+
+  return;
+}
+
+/* Set thread TID's IDXth hardware breakpoint/watchpoint control register
+   to CTRL.  */
+
+static void
+aarch64_linux_set_one_ctrl_dr (int tid, int is_watchpoint, int idx,
+			       unsigned int ctrl)
+{
+  unsigned long ptrace_idx;
+
+  ptrace_idx = dr_idx_to_ptrace_ctrl_reg_idx (is_watchpoint, idx);
+  if (ptrace (PTRACE_SETHBPREGS, tid, ptrace_idx, &ctrl) != 0)
+    error ("ptrace_sethbpregs ctrl fails: %x", ctrl);
+
+  return;
+}
+
+/* Update the thread PTID's hardware breakpoint/watchpoint register pairs
+   with data from STATE, provided their settings have changed since
+   the last update, which is indicated by INFO->DR_CHANGED_*.
+
+   To unset a breakpoint/watchpoint, only its control register needs to
+   be updated.  */
+
+static void
+aarch64_linux_update_debug_regs (ptid_t ptid, struct arch_lwp_info *info,
+				 struct aarch64_debug_reg_state *state)
+{
+  int i, tid;
+  dr_changed_t dr_changed;
+
+  tid = ptid_get_lwp (ptid);
+
+  /* watchpoints */
+  dr_changed = info->dr_changed_wp;
+  if (DR_HAS_CHANGED (dr_changed))
+    for (i = 0; i < aarch64_num_wp_regs; ++i)
+      if (DR_N_HAS_CHANGED (dr_changed, i))
+	{
+	  if (DR_CONTROL_ENABLED (state->dr_ctrl_wp[i]))
+	    aarch64_linux_set_one_addr_dr (tid, 1 /* is_watchpoint */ , i,
+					   state->dr_addr_wp[i]);
+	  if (DR_CONTROL_LENGTH (state->dr_ctrl_wp[i]))
+	    /* A non-zero length gives a minimum guarantee of valid
+	       content for the ctrl reg.  */
+	    aarch64_linux_set_one_ctrl_dr (tid, 1 /* is_watchpoint */ , i,
+					   state->dr_ctrl_wp[i]);
+	}
+  DR_CLEAR_CHANGED (info->dr_changed_wp);
+
+  /* breakpoints */
+  dr_changed = info->dr_changed_bp;
+  if (DR_HAS_CHANGED (dr_changed))
+    for (i = 0; i < aarch64_num_bp_regs; ++i)
+      if (DR_N_HAS_CHANGED (dr_changed, i))
+	{
+	  if (DR_CONTROL_ENABLED (state->dr_ctrl_bp[i]))
+	    aarch64_linux_set_one_addr_dr (tid, 0 /* is_watchpoint */ , i,
+					   state->dr_addr_bp[i]);
+
+	  if (DR_CONTROL_LENGTH (state->dr_ctrl_bp[i]))
+	    /* A non-zero length gives a minimum guarantee of valid
+	       content for the ctrl reg.  */
+	    aarch64_linux_set_one_ctrl_dr (tid, 0 /* is_watchpoint */ , i,
+					   state->dr_ctrl_bp[i]);
+	}
+  DR_CLEAR_CHANGED (info->dr_changed_bp);
+
+  return;
+}
+
+struct aarch64_dr_update_callback_param
+{
+  int pid;
+  int is_watchpoint;
+  unsigned int idx;
+};
+
+/* Callback function which records the information about the change of
+   one hardware breakpoint/watchpoint setting for the thread ENTRY.
+   The information is passed in via PTR.
+   N.B.  The actual updating of hardware debug registers is not carried
+   out until the moment the thread is resumed.  */
+
+static int
+debug_reg_change_callback (struct inferior_list_entry *entry, void *ptr)
+{
+  struct lwp_info *lwp = (struct lwp_info *) entry;
+  struct aarch64_dr_update_callback_param *param_p
+    = (struct aarch64_dr_update_callback_param *) ptr;
+  int pid = param_p->pid;
+  int idx = param_p->idx;
+  int is_watchpoint = param_p->is_watchpoint;
+  struct arch_lwp_info *info = lwp->arch_private;
+  dr_changed_t *dr_changed_ptr;
+  dr_changed_t dr_changed;
+
+  if (debug_hw_points)
+    {
+      fprintf (stderr, "debug_reg_change_callback: \n\tOn entry:\n");
+      fprintf (stderr, "\tpid%d, tid: %ld, dr_changed_bp=0x%llx, "
+	       "dr_changed_wp=0x%llx\n",
+	       pid, lwpid_of (lwp), info->dr_changed_bp,
+	       info->dr_changed_wp);
+    }
+
+  dr_changed_ptr = is_watchpoint ? &info->dr_changed_wp
+    : &info->dr_changed_bp;
+  dr_changed = *dr_changed_ptr;
+
+  /* Only update the threads of this process.  */
+  if (pid_of (lwp) == pid)
+    {
+      gdb_assert (idx >= 0
+		  && (idx <= (is_watchpoint ? aarch64_num_wp_regs
+			      : aarch64_num_bp_regs)));
+
+      /* The following assertion is not right, as there can be changes that
+         have not been made to the hardware debug registers before new
+         changes overwrite the old ones.  This can happen, for instance,
+         when a breakpoint/watchpoint hits one of the threads and the
+         user enters continue; then what happens is:
+         1) all breakpoints/watchpoints are removed for all threads;
+         2) a single step is carried out for the thread that was hit;
+         3) all of the points are inserted again for all threads;
+         4) all threads are resumed.
+         The 2nd step will only affect the one thread in which the bp/wp
+         was hit, which means only that one thread is resumed; remember
+         that the actual updating only happens in
+         aarch64_linux_prepare_to_resume, so other threads remain stopped
+         during the removal and insertion of bp/wp.  Therefore, for those
+         threads, the recorded change from inserting the bp/wp overwrites
+         that from the earlier removal.  (The situation may be different when
+         bp/wp is steppable, or in the non-stop mode.)  */
+      /* gdb_assert (DR_N_HAS_CHANGED (dr_changed, idx) == 0);  */
+
+      /* The actual update is done later just before resuming the lwp,
+         we just mark that one register pair needs updating.  */
+      DR_MARK_N_CHANGED (dr_changed, idx);
+      *dr_changed_ptr = dr_changed;
+
+      /* If the lwp isn't stopped, force it to momentarily pause, so
+         we can update its debug registers.  */
+      if (!lwp->stopped)
+	linux_stop_lwp (lwp);
+    }
+
+  if (debug_hw_points)
+    {
+      fprintf (stderr, "\tOn exit:\n\tpid%d, tid: %ld, dr_changed_bp=0x%llx, "
+	       "dr_changed_wp=0x%llx\n",
+	       pid, lwpid_of (lwp), info->dr_changed_bp, info->dr_changed_wp);
+    }
+
+  return 0;
+}
+
+/* Notify each thread that its IDXth breakpoint/watchpoint register
+   pair needs to be updated.  The message will be recorded in each
+   thread's arch-specific data area, the actual updating will be done
+   when the thread is resumed.  */
+
+void
+aarch64_notify_debug_reg_change (const struct aarch64_debug_reg_state *state,
+				 int is_watchpoint, unsigned int idx)
+{
+  struct aarch64_dr_update_callback_param param;
+
+  /* Only update the threads of this process.  */
+  param.pid = pid_of (get_thread_lwp (current_inferior));
+
+  param.is_watchpoint = is_watchpoint;
+  param.idx = idx;
+
+  find_inferior (&all_lwps, debug_reg_change_callback, (void *) &param);
+}
+
+
+/* Return the pointer to the debug register state structure in the current
+   process' arch-specific data area.  */
+
+static struct aarch64_debug_reg_state *
+aarch64_get_debug_reg_state ()
+{
+  int pid;
+  struct lwp_info *thread;
+  struct process_info *proc;
+
+  thread = get_thread_lwp (current_inferior);
+  pid = pid_of (thread);
+  proc = find_process_pid (pid);
+
+  return &proc->private->arch_private->debug_reg_state;
+}
+
+/* Record the insertion of one breakpoint/watchpoint, as represented by
+   TYPE, ADDR and LEN, in the process' arch-specific data area *STATE.  */
+
+static int
+aarch64_dr_state_insert_one_point (struct aarch64_debug_reg_state *state,
+				   enum target_point_type type,
+				   CORE_ADDR addr, int len)
+{
+  int i, idx, num_regs, is_watchpoint;
+  unsigned int ctrl, *dr_ctrl_p, *dr_ref_count;
+  CORE_ADDR *dr_addr_p;
+
+  /* Set up state pointers.  */
+  is_watchpoint = (type != hw_execute);
+  gdb_assert (aarch64_point_is_aligned (is_watchpoint, addr, len));
+  if (is_watchpoint)
+    {
+      num_regs = aarch64_num_wp_regs;
+      dr_addr_p = state->dr_addr_wp;
+      dr_ctrl_p = state->dr_ctrl_wp;
+      dr_ref_count = state->dr_ref_count_wp;
+    }
+  else
+    {
+      num_regs = aarch64_num_bp_regs;
+      dr_addr_p = state->dr_addr_bp;
+      dr_ctrl_p = state->dr_ctrl_bp;
+      dr_ref_count = state->dr_ref_count_bp;
+    }
+
+  ctrl = aarch64_point_encode_ctrl_reg (type, len);
+
+  /* Find an existing or free register in our cache.  */
+  idx = -1;
+  for (i = 0; i < num_regs; ++i)
+    {
+      if ((dr_ctrl_p[i] & 1) == 0)
+	{
+	  gdb_assert (dr_ref_count[i] == 0);
+	  idx = i;
+	  /* No break; continue hunting for an existing one.  */
+	}
+      else if (dr_addr_p[i] == addr && dr_ctrl_p[i] == ctrl)
+	{
+	  gdb_assert (dr_ref_count[i] != 0);
+	  idx = i;
+	  break;
+	}
+    }
+
+  /* No space.  */
+  if (idx == -1)
+    return -1;
+
+  /* Update our cache.  */
+  if ((dr_ctrl_p[idx] & 1) == 0)
+    {
+      /* new entry */
+      dr_addr_p[idx] = addr;
+      dr_ctrl_p[idx] = ctrl;
+      dr_ref_count[idx] = 1;
+      /* Notify the change.  */
+      aarch64_notify_debug_reg_change (state, is_watchpoint, idx);
+    }
+  else
+    {
+      /* existing entry */
+      dr_ref_count[idx]++;
+    }
+
+  return 0;
+}
+
+/* Record the removal of one breakpoint/watchpoint, as represented by
+   TYPE, ADDR and LEN, in the process' arch-specific data area *STATE.  */
+
+static int
+aarch64_dr_state_remove_one_point (struct aarch64_debug_reg_state *state,
+				   enum target_point_type type,
+				   CORE_ADDR addr, int len)
+{
+  int i, num_regs, is_watchpoint;
+  unsigned int ctrl, *dr_ctrl_p, *dr_ref_count;
+  CORE_ADDR *dr_addr_p;
+
+  /* Set up state pointers.  */
+  is_watchpoint = (type != hw_execute);
+  gdb_assert (aarch64_point_is_aligned (is_watchpoint, addr, len));
+  if (is_watchpoint)
+    {
+      num_regs = aarch64_num_wp_regs;
+      dr_addr_p = state->dr_addr_wp;
+      dr_ctrl_p = state->dr_ctrl_wp;
+      dr_ref_count = state->dr_ref_count_wp;
+    }
+  else
+    {
+      num_regs = aarch64_num_bp_regs;
+      dr_addr_p = state->dr_addr_bp;
+      dr_ctrl_p = state->dr_ctrl_bp;
+      dr_ref_count = state->dr_ref_count_bp;
+    }
+
+  ctrl = aarch64_point_encode_ctrl_reg (type, len);
+
+  /* Find the entry that matches the ADDR and CTRL.  */
+  for (i = 0; i < num_regs; ++i)
+    if (dr_addr_p[i] == addr && dr_ctrl_p[i] == ctrl)
+      {
+	gdb_assert (dr_ref_count[i] != 0);
+	break;
+      }
+
+  /* Not found.  */
+  if (i == num_regs)
+    return -1;
+
+  /* Clear our cache.  */
+  if (--dr_ref_count[i] == 0)
+    {
+      /* Clear the enable bit.  */
+      ctrl &= ~1;
+      dr_addr_p[i] = 0;
+      dr_ctrl_p[i] = ctrl;
+      /* Notify the change.  */
+      aarch64_notify_debug_reg_change (state, is_watchpoint, i);
+    }
+
+  return 0;
+}
+
+static int
+aarch64_handle_breakpoint (enum target_point_type type, CORE_ADDR addr,
+			   int len, int is_insert)
+{
+  struct aarch64_debug_reg_state *state;
+
+  /* The hardware breakpoint on AArch64 should always be 4-byte aligned.  */
+  if (!aarch64_point_is_aligned (0 /* is_watchpoint */ , addr, len))
+    {
+      gdb_assert (0);
+      return -1;
+    }
+
+  state = aarch64_get_debug_reg_state ();
+
+  if (is_insert)
+    return aarch64_dr_state_insert_one_point (state, type, addr, len);
+  else
+    return aarch64_dr_state_remove_one_point (state, type, addr, len);
+}
+
+/* This is essentially the same as aarch64_handle_breakpoint, except
+   that it handles an aligned watchpoint.  */
+
+static int
+aarch64_handle_aligned_watchpoint (enum target_point_type type,
+				   CORE_ADDR addr, int len, int is_insert)
+{
+  struct aarch64_debug_reg_state *state;
+
+  state = aarch64_get_debug_reg_state ();
+
+  if (is_insert)
+    return aarch64_dr_state_insert_one_point (state, type, addr, len);
+  else
+    return aarch64_dr_state_remove_one_point (state, type, addr, len);
+}
+
+/* Insert/remove an unaligned watchpoint by calling aarch64_align_watchpoint
+   repeatedly until the whole watched region, as represented by ADDR and LEN,
+   has been properly aligned and ready to be written to one or more hardware
+   watchpoint registers.  IS_INSERT indicates whether this is an insertion or
+   a deletion.
+   Return 0 on success.  */
+
+static int
+aarch64_handle_unaligned_watchpoint (enum target_point_type type,
+				     CORE_ADDR addr, int len, int is_insert)
+{
+  struct aarch64_debug_reg_state *state;
+
+  state = aarch64_get_debug_reg_state ();
+
+  while (len > 0)
+    {
+      CORE_ADDR aligned_addr;
+      int aligned_len, ret;
+
+      aarch64_align_watchpoint (addr, len, &aligned_addr, &aligned_len,
+				&addr, &len);
+
+      if (is_insert)
+	ret = aarch64_dr_state_insert_one_point (state, type, aligned_addr,
+						 aligned_len);
+      else
+	ret = aarch64_dr_state_remove_one_point (state, type, aligned_addr,
+						 aligned_len);
+
+      if (debug_hw_points)
+	fprintf (stderr,
+ "handle_unaligned_watchpoint: is_insert: %d\n"
+ "                             aligned_addr: 0x%llx, aligned_len: %d\n"
+ "                                next_addr: 0x%llx,    next_len: %d\n",
+		 is_insert, aligned_addr, aligned_len, addr, len);
+
+      if (ret != 0)
+	return ret;
+    }
+
+  return 0;
+}
+
+static int
+aarch64_handle_watchpoint (enum target_point_type type, CORE_ADDR addr,
+			   int len, int is_insert)
+{
+  if (aarch64_point_is_aligned (1 /* is_watchpoint */ , addr, len))
+    return aarch64_handle_aligned_watchpoint (type, addr, len, is_insert);
+  else
+    return aarch64_handle_unaligned_watchpoint (type, addr, len, is_insert);
+}
+
+/* Insert a hardware breakpoint/watchpoint.
+   It actually only records the info of the to-be-inserted bp/wp;
+   the actual insertion will happen when threads are resumed.
+
+   Return 0 on success;
+   Return 1 if TYPE is an unsupported type;
+   Return -1 if an error occurs.  */
+
+static int
+aarch64_insert_point (char type, CORE_ADDR addr, int len)
+{
+  int ret;
+  enum target_point_type targ_type;
+
+  if (debug_hw_points)
+    fprintf (stderr, "insert_point on entry (addr=0x%08lx, len=%d)\n",
+	     (unsigned long) addr, len);
+
+  /* Determine the type from the packet.  */
+  targ_type = Z_packet_to_point_type (type);
+  if (targ_type == point_type_unsupported)
+    return 1;
+
+  if (targ_type != hw_execute)
+    ret =
+      aarch64_handle_watchpoint (targ_type, addr, len, 1 /* is_insert */);
+  else
+    ret =
+      aarch64_handle_breakpoint (targ_type, addr, len, 1 /* is_insert */);
+
+  if (debug_hw_points > 1)
+    aarch64_show_debug_reg_state (aarch64_get_debug_reg_state (),
+				  "insert_point", addr, len, targ_type);
+
+  return ret;
+}
+
+/* Remove a hardware breakpoint/watchpoint.
+   It actually only records the info of the to-be-removed bp/wp,
+   the actual removal will be done when threads are resumed.
+
+   Return 0 on success;
+   Return 1 if TYPE is an unsupported type;
+   Return -1 if an error occurs.  */
+
+static int
+aarch64_remove_point (char type, CORE_ADDR addr, int len)
+{
+  int ret;
+  enum target_point_type targ_type;
+
+  if (debug_hw_points)
+    fprintf (stderr, "remove_point on entry (addr=0x%08lx, len=%d)\n",
+	     (unsigned long) addr, len);
+
+  /* Determine the type from the packet.  */
+  targ_type = Z_packet_to_point_type (type);
+  if (targ_type == point_type_unsupported)
+    return 1;
+
+  /* Set up state pointers.  */
+  if (targ_type != hw_execute)
+    ret =
+      aarch64_handle_watchpoint (targ_type, addr, len, 0 /* is_insert */);
+  else
+    ret =
+      aarch64_handle_breakpoint (targ_type, addr, len, 0 /* is_insert */);
+
+  if (debug_hw_points > 1)
+    aarch64_show_debug_reg_state (aarch64_get_debug_reg_state (),
+				  "remove_point", addr, len, targ_type);
+
+  return ret;
+}
+
+/* Returns the address associated with the watchpoint that hit, if any;
+   returns 0 otherwise.  */
+
+static CORE_ADDR
+aarch64_stopped_data_address (void)
+{
+  siginfo_t siginfo;
+  int pid;
+
+  pid = lwpid_of (get_thread_lwp (current_inferior));
+
+  /* Get the siginfo.  */
+  if (ptrace (PTRACE_GETSIGINFO, pid, NULL, &siginfo) != 0)
+    return (CORE_ADDR) 0;
+
+  /* Need to be a hardware breakpoint/watchpoint trap.  */
+  if ((siginfo.si_signo != SIGTRAP) ||
+      ((siginfo.si_code & 0xffff) != TRAP_HWBKPT))
+    return (CORE_ADDR) 0;
+
+  /* Breakpoints are identified using positive numbers whilst watchpoints
+     are negative.  This is the same as the indexes used in the ptrace
+     PTRACE_SETHBPREGS call.  */
+  if (siginfo.si_errno >= 0)
+    return (CORE_ADDR) 0;
+
+  return (CORE_ADDR) siginfo.si_addr;
+}
+
+/* Returns 1 if target was stopped due to a watchpoint hit, 0 otherwise.  */
+
+static int
+aarch64_stopped_by_watchpoint (void)
+{
+  if (aarch64_stopped_data_address () != 0)
+    return 1;
+  else
+    return 0;
+}
+
+/* We only place breakpoints in empty marker functions, and thread locking
+   is outside of the function.  So rather than importing software single-step,
+   we can just run until exit.  */
+static CORE_ADDR
+aarch64_reinsert_addr (void)
+{
+  struct regcache *regcache = get_thread_regcache (current_inferior, 1);
+  unsigned long pc;
+  collect_register_by_name (regcache, "x30", &pc);
+  return pc;
+}
+
+/* Fetch the thread-local storage pointer for libthread_db.  */
+
+ps_err_e
+ps_get_thread_area (const struct ps_prochandle * ph,
+		    lwpid_t lwpid, int idx, void **base)
+{
+  if (ptrace (PTRACE_GET_THREAD_AREA, lwpid, NULL, base) != 0)
+    return PS_ERR;
+
+  /* IDX is the bias from the thread pointer to the beginning of the
+     thread descriptor.  It has to be subtracted due to implementation
+     quirks in libthread_db.  */
+  *base = (void *) ((char *) *base - idx);
+
+  return PS_OK;
+}
+
+/* Called when a new process is created.  */
+
+static struct arch_process_info *
+aarch64_linux_new_process (void)
+{
+  struct arch_process_info *info = xcalloc (1, sizeof (*info));
+
+  aarch64_init_debug_reg_state (&info->debug_reg_state);
+
+  return info;
+}
+
+/* Called when a new thread is detected.  */
+
+static struct arch_lwp_info *
+aarch64_linux_new_thread (void)
+{
+  struct arch_lwp_info *info = xcalloc (1, sizeof (*info));
+
+  /* Mark that all the hardware breakpoint/watchpoint register pairs
+     for this thread need to be initialized (with data from
+     arch_process_info.debug_reg_state).  */
+  DR_MARK_ALL_CHANGED (info->dr_changed_bp, aarch64_num_bp_regs);
+  DR_MARK_ALL_CHANGED (info->dr_changed_wp, aarch64_num_wp_regs);
+
+  return info;
+}
+
+/* Called when resuming a thread.
+   If the debug regs have changed, update the thread's copies.  */
+
+static void
+aarch64_linux_prepare_to_resume (struct lwp_info *lwp)
+{
+  ptid_t ptid = ptid_of (lwp);
+  struct arch_lwp_info *info = lwp->arch_private;
+
+  if (DR_HAS_CHANGED (info->dr_changed_bp)
+      || DR_HAS_CHANGED (info->dr_changed_wp))
+    {
+      int pid = ptid_get_pid (ptid);
+      struct process_info *proc = find_process_pid (pid);
+      struct aarch64_debug_reg_state *state
+	= &proc->private->arch_private->debug_reg_state;
+
+      if (debug_hw_points)
+	fprintf (stderr, "prepare_to_resume thread %ld\n", lwpid_of (lwp));
+
+      aarch64_linux_update_debug_regs (ptid, info, state);
+
+      DR_CLEAR_CHANGED (info->dr_changed_bp);
+      DR_CLEAR_CHANGED (info->dr_changed_wp);
+    }
+}
+
+/* ptrace hardware breakpoint resource info is formatted as follows:
+
+   31             24             16               8              0
+   +---------------+--------------+---------------+---------------+
+   |  DEBUG_ARCH   |   RESERVED   |    NUM_WPS    |    NUM_BPS    |
+   +---------------+--------------+---------------+---------------+
+
+   Hardware breakpoints/watchpoints are exposed by the kernel as a
+   collection of virtual registers.  Breakpoints are identified using
+   positive numbers whilst watchpoints are negative.  The registers are
+   laid out as pairs of (address, control).  Register 0 is reserved for
+   describing resource information.  */
+
+#define AARCH64_DEBUG_NUM_BPS(x) (((x) >> 0) & 0xff)
+#define AARCH64_DEBUG_NUM_WPS(x) (((x) >> 8) & 0xff)
+#define AARCH64_DEBUG_ARCH(x) (((x) >> 24) & 0xff)
+#define AARCH64_DEBUG_ARCH_V8 0x6
+
+static void
+aarch64_arch_setup (void)
+{
+  int pid;
+  unsigned int dr_info;
+
+  init_registers_aarch64 ();
+
+  pid = lwpid_of (get_thread_lwp (current_inferior));
+  if (ptrace (PTRACE_GETHBPREGS, pid, NULL, &dr_info) == 0
+      && AARCH64_DEBUG_ARCH (dr_info) == AARCH64_DEBUG_ARCH_V8)
+    {
+      aarch64_num_bp_regs = AARCH64_DEBUG_NUM_BPS (dr_info);
+      aarch64_num_wp_regs = AARCH64_DEBUG_NUM_WPS (dr_info);
+
+      if (aarch64_num_bp_regs > AARCH64_HBP_MAX_NUM)
+	error ("arch_setup fails: num_bp_regs (%d) exceeds the maximum (%d)",
+	       aarch64_num_bp_regs, AARCH64_HBP_MAX_NUM);
+      if (aarch64_num_wp_regs > AARCH64_HWP_MAX_NUM)
+	error ("arch_setup fails: num_wp_regs (%d) exceeds the maximum (%d)",
+	       aarch64_num_wp_regs, AARCH64_HWP_MAX_NUM);
+    }
+  else
+    {
+      error ("arch_setup fails: unable to get debug register resource info");
+    }
+}
+
+struct regset_info target_regsets[] = {
+  { PTRACE_GETREGSET, PTRACE_SETREGSET, NT_PRSTATUS,
+    sizeof (struct user_pt_regs), GENERAL_REGS,
+    aarch64_fill_gregset, aarch64_store_gregset },
+  { PTRACE_GETREGSET, PTRACE_SETREGSET, NT_FPREGSET,
+    sizeof (struct user_fpsimd_state), FP_REGS,
+    aarch64_fill_fpregset, aarch64_store_fpregset
+  },
+  { 0, 0, 0, -1, -1, NULL, NULL }
+};
+
+struct linux_target_ops the_low_target = {
+  aarch64_arch_setup,
+  AARCH64_NUM_REGS,
+  aarch64_regmap,
+  NULL,
+  aarch64_cannot_fetch_register,
+  aarch64_cannot_store_register,
+  NULL,
+  aarch64_get_pc,
+  aarch64_set_pc,
+  (const unsigned char *) &aarch64_breakpoint,
+  aarch64_breakpoint_len,
+  aarch64_reinsert_addr,
+  0,
+  aarch64_breakpoint_at,
+  aarch64_insert_point,
+  aarch64_remove_point,
+  aarch64_stopped_by_watchpoint,
+  aarch64_stopped_data_address,
+  NULL,
+  NULL,
+  NULL,
+  aarch64_linux_new_process,
+  aarch64_linux_new_thread,
+  aarch64_linux_prepare_to_resume,
+};
diff --git a/gdb/gdbserver/linux-low.c b/gdb/gdbserver/linux-low.c
index a476031..b6d9688 100644
--- a/gdb/gdbserver/linux-low.c
+++ b/gdb/gdbserver/linux-low.c
@@ -445,7 +445,8 @@ handle_extended_wait (struct lwp_info *event_child, int wstat)
       unsigned long new_pid;
       int ret, status;
 
-      ptrace (PTRACE_GETEVENTMSG, lwpid_of (event_child), 0, &new_pid);
+      ptrace (PTRACE_GETEVENTMSG, lwpid_of (event_child), (PTRACE_ARG3_TYPE) 0,
+	      &new_pid);
 
       /* If we haven't already seen the new PID stop, wait for it now.  */
       if (!pull_pid_from_list (&stopped_pids, new_pid, &status))
@@ -641,7 +642,7 @@ linux_create_inferior (char *program, char **allargs)
 
   if (pid == 0)
     {
-      ptrace (PTRACE_TRACEME, 0, 0, 0);
+      ptrace (PTRACE_TRACEME, 0, (PTRACE_ARG3_TYPE) 0, (PTRACE_ARG4_TYPE) 0);
 
 #ifndef __ANDROID__ /* Bionic doesn't use SIGRTMIN the way glibc does.  */
       signal (__SIGRTMIN + 1, SIG_DFL);
@@ -701,7 +702,8 @@ linux_attach_lwp_1 (unsigned long lwpid, int initial)
   ptid_t ptid;
   struct lwp_info *new_lwp;
 
-  if (ptrace (PTRACE_ATTACH, lwpid, 0, 0) != 0)
+  if (ptrace (PTRACE_ATTACH, lwpid, (PTRACE_ARG3_TYPE) 0, (PTRACE_ARG4_TYPE) 0)
+      != 0)
     {
       struct buffer buffer;
 
@@ -767,7 +769,7 @@ linux_attach_lwp_1 (unsigned long lwpid, int initial)
       /* Finally, resume the stopped process.  This will deliver the
 	 SIGSTOP (or a higher priority signal, just like normal
 	 PTRACE_ATTACH), which we'll catch later on.  */
-      ptrace (PTRACE_CONT, lwpid, 0, 0);
+      ptrace (PTRACE_CONT, lwpid, (PTRACE_ARG3_TYPE) 0, (PTRACE_ARG4_TYPE) 0);
     }
 
   /* The next time we wait for this LWP we'll see a SIGSTOP as PTRACE_ATTACH
@@ -958,7 +960,7 @@ linux_kill_one_lwp (struct lwp_info *lwp)
 	     errno ? strerror (errno) : "OK");
 
   errno = 0;
-  ptrace (PTRACE_KILL, pid, 0, 0);
+  ptrace (PTRACE_KILL, pid, (PTRACE_ARG3_TYPE) 0, (PTRACE_ARG4_TYPE) 0);
   if (debug_threads)
     fprintf (stderr,
 	     "LKL:  PTRACE_KILL %s, 0, 0 (%s)\n",
@@ -1172,7 +1174,7 @@ linux_detach_one_lwp (struct inferior_list_entry *entry, void *args)
   /* Finally, let it resume.  */
   if (the_low_target.prepare_to_resume != NULL)
     the_low_target.prepare_to_resume (lwp);
-  if (ptrace (PTRACE_DETACH, lwpid_of (lwp), 0,
+  if (ptrace (PTRACE_DETACH, lwpid_of (lwp), (PTRACE_ARG3_TYPE) 0,
 	      (PTRACE_ARG4_TYPE) (long) sig) < 0)
     error (_("Can't detach %s: %s"),
 	   target_pid_to_str (ptid_of (lwp)),
@@ -1603,13 +1605,15 @@ Checking whether LWP %ld needs to move out of the jump pad...it does\n",
 		   || WSTOPSIG (*wstat) == SIGFPE
 		   || WSTOPSIG (*wstat) == SIGBUS
 		   || WSTOPSIG (*wstat) == SIGSEGV)
-		  && ptrace (PTRACE_GETSIGINFO, lwpid_of (lwp), 0, &info) == 0
+		  && ptrace (PTRACE_GETSIGINFO, lwpid_of (lwp),
+			     (PTRACE_ARG3_TYPE) 0, &info) == 0
 		  /* Final check just to make sure we don't clobber
 		     the siginfo of non-kernel-sent signals.  */
 		  && (uintptr_t) info.si_addr == lwp->stop_pc)
 		{
 		  info.si_addr = (void *) (uintptr_t) status.tpoint_addr;
-		  ptrace (PTRACE_SETSIGINFO, lwpid_of (lwp), 0, &info);
+		  ptrace (PTRACE_SETSIGINFO, lwpid_of (lwp),
+			  (PTRACE_ARG3_TYPE) 0, &info);
 		}
 
 	      regcache = get_thread_regcache (get_lwp_thread (lwp), 1);
@@ -1704,7 +1708,8 @@ Deferring signal %d for LWP %ld.\n", WSTOPSIG (*wstat), lwpid_of (lwp));
   p_sig->prev = lwp->pending_signals_to_report;
   p_sig->signal = WSTOPSIG (*wstat);
   memset (&p_sig->info, 0, sizeof (siginfo_t));
-  ptrace (PTRACE_GETSIGINFO, lwpid_of (lwp), 0, &p_sig->info);
+  ptrace (PTRACE_GETSIGINFO, lwpid_of (lwp), (PTRACE_ARG3_TYPE) 0,
+	  &p_sig->info);
 
   lwp->pending_signals_to_report = p_sig;
 }
@@ -1725,7 +1730,8 @@ dequeue_one_deferred_signal (struct lwp_info *lwp, int *wstat)
 
       *wstat = W_STOPCODE ((*p_sig)->signal);
       if ((*p_sig)->info.si_signo != 0)
-	ptrace (PTRACE_SETSIGINFO, lwpid_of (lwp), 0, &(*p_sig)->info);
+	ptrace (PTRACE_SETSIGINFO, lwpid_of (lwp), (PTRACE_ARG3_TYPE) 0,
+		&(*p_sig)->info);
       free (*p_sig);
       *p_sig = NULL;
 
@@ -2595,7 +2601,8 @@ Check if we're already there.\n",
 	fprintf (stderr, "Ignored signal %d for LWP %ld.\n",
 		 WSTOPSIG (w), lwpid_of (event_child));
 
-      if (ptrace (PTRACE_GETSIGINFO, lwpid_of (event_child), 0, &info) == 0)
+      if (ptrace (PTRACE_GETSIGINFO, lwpid_of (event_child),
+		  (PTRACE_ARG3_TYPE) 0, &info) == 0)
 	info_p = &info;
       else
 	info_p = NULL;
@@ -3275,7 +3282,8 @@ lwp %ld wants to get out of fast tracepoint jump pad single-stepping\n",
 
       signal = (*p_sig)->signal;
       if ((*p_sig)->info.si_signo != 0)
-	ptrace (PTRACE_SETSIGINFO, lwpid_of (lwp), 0, &(*p_sig)->info);
+	ptrace (PTRACE_SETSIGINFO, lwpid_of (lwp), (PTRACE_ARG3_TYPE) 0,
+		&(*p_sig)->info);
 
       free (*p_sig);
       *p_sig = NULL;
@@ -3290,7 +3298,8 @@ lwp %ld wants to get out of fast tracepoint jump pad single-stepping\n",
   lwp->stopped = 0;
   lwp->stopped_by_watchpoint = 0;
   lwp->stepping = step;
-  ptrace (step ? PTRACE_SINGLESTEP : PTRACE_CONT, lwpid_of (lwp), 0,
+  ptrace (step ? PTRACE_SINGLESTEP : PTRACE_CONT, lwpid_of (lwp),
+	  (PTRACE_ARG3_TYPE) 0,
 	  /* Coerce to a uintptr_t first to avoid potential gcc warning
 	     of coercing an 8 byte integer to a 4 byte pointer.  */
 	  (PTRACE_ARG4_TYPE) (uintptr_t) signal);
@@ -3758,7 +3767,8 @@ linux_resume_one_thread (struct inferior_list_entry *entry, void *arg)
 	     PTRACE_SETSIGINFO.  */
 	  if (WIFSTOPPED (lwp->last_status)
 	      && WSTOPSIG (lwp->last_status) == lwp->resume->sig)
-	    ptrace (PTRACE_GETSIGINFO, lwpid_of (lwp), 0, &p_sig->info);
+	    ptrace (PTRACE_GETSIGINFO, lwpid_of (lwp), (PTRACE_ARG3_TYPE) 0,
+		    &p_sig->info);
 
 	  lwp->pending_signals = p_sig;
 	}
@@ -3983,7 +3993,6 @@ unstop_all_lwps (int unsuspend, struct lwp_info *except)
     find_inferior (&all_lwps, proceed_one_lwp, except);
 }
 
-
 #ifdef HAVE_LINUX_REGSETS
 
 #define use_linux_regsets 1
@@ -4219,7 +4228,7 @@ fetch_register (struct regcache *regcache, int regno)
 	ptrace (PTRACE_PEEKUSER, pid,
 		/* Coerce to a uintptr_t first to avoid potential gcc warning
 		   of coercing an 8 byte integer to a 4 byte pointer.  */
-		(PTRACE_ARG3_TYPE) (uintptr_t) regaddr, 0);
+		(PTRACE_ARG3_TYPE) (uintptr_t) regaddr, (PTRACE_ARG4_TYPE) 0);
       regaddr += sizeof (PTRACE_XFER_TYPE);
       if (errno != 0)
 	error ("reading register %d: %s", regno, strerror (errno));
@@ -4447,7 +4456,8 @@ linux_read_memory (CORE_ADDR memaddr, unsigned char *myaddr, int len)
       /* Coerce the 3rd arg to a uintptr_t first to avoid potential gcc warning
 	 about coercing an 8 byte integer to a 4 byte pointer.  */
       buffer[i] = ptrace (PTRACE_PEEKTEXT, pid,
-			  (PTRACE_ARG3_TYPE) (uintptr_t) addr, 0);
+			  (PTRACE_ARG3_TYPE) (uintptr_t) addr,
+			  (PTRACE_ARG4_TYPE) 0);
       if (errno)
 	break;
     }
@@ -4490,15 +4500,15 @@ linux_write_memory (CORE_ADDR memaddr, const unsigned char *myaddr, int len)
   if (debug_threads)
     {
       /* Dump up to four bytes.  */
-      unsigned int val = * (unsigned int *) myaddr;
+      unsigned val = * (unsigned *) myaddr;
       if (len == 1)
 	val = val & 0xff;
       else if (len == 2)
 	val = val & 0xffff;
       else if (len == 3)
 	val = val & 0xffffff;
-      fprintf (stderr, "Writing %0*x to 0x%08lx\n", 2 * ((len < 4) ? len : 4),
-	       val, (long)memaddr);
+      fprintf (stderr, "Writing len=%d 0x%0*x to 0x%08lx\n", len,
+	       2 * ((len < 4) ? len : 4), val, (long) memaddr);
     }
 
   /* Fill start and end extra bytes of buffer with existing memory data.  */
@@ -4507,7 +4517,8 @@ linux_write_memory (CORE_ADDR memaddr, const unsigned char *myaddr, int len)
   /* Coerce the 3rd arg to a uintptr_t first to avoid potential gcc warning
      about coercing an 8 byte integer to a 4 byte pointer.  */
   buffer[0] = ptrace (PTRACE_PEEKTEXT, pid,
-		      (PTRACE_ARG3_TYPE) (uintptr_t) addr, 0);
+		      (PTRACE_ARG3_TYPE) (uintptr_t) addr,
+		      (PTRACE_ARG4_TYPE) 0);
   if (errno)
     return errno;
 
@@ -4520,7 +4531,7 @@ linux_write_memory (CORE_ADDR memaddr, const unsigned char *myaddr, int len)
 		     about coercing an 8 byte integer to a 4 byte pointer.  */
 		  (PTRACE_ARG3_TYPE) (uintptr_t) (addr + (count - 1)
 						  * sizeof (PTRACE_XFER_TYPE)),
-		  0);
+		  (PTRACE_ARG4_TYPE) 0);
       if (errno)
 	return errno;
     }
@@ -4556,7 +4567,8 @@ linux_enable_event_reporting (int pid)
   if (!linux_supports_tracefork_flag)
     return;
 
-  ptrace (PTRACE_SETOPTIONS, pid, 0, (PTRACE_ARG4_TYPE) PTRACE_O_TRACECLONE);
+  ptrace (PTRACE_SETOPTIONS, pid, (PTRACE_ARG3_TYPE) 0,
+	  (PTRACE_ARG4_TYPE) PTRACE_O_TRACECLONE);
 }
 
 /* Helper functions for linux_test_for_tracefork, called via clone ().  */
@@ -4572,7 +4584,7 @@ linux_tracefork_grandchild (void *arg)
 static int
 linux_tracefork_child (void *arg)
 {
-  ptrace (PTRACE_TRACEME, 0, 0, 0);
+  ptrace (PTRACE_TRACEME, 0, (PTRACE_ARG3_TYPE) 0, (PTRACE_ARG4_TYPE) 0);
   kill (getpid (), SIGSTOP);
 
 #if !(defined(__UCLIBC__) && defined(HAS_NOMMU))
@@ -4640,28 +4652,33 @@ linux_test_for_tracefork (void)
   if (! WIFSTOPPED (status))
     error ("linux_test_for_tracefork: waitpid: unexpected status %d.", status);
 
-  ret = ptrace (PTRACE_SETOPTIONS, child_pid, 0,
+  ret = ptrace (PTRACE_SETOPTIONS, child_pid, (PTRACE_ARG3_TYPE) 0,
 		(PTRACE_ARG4_TYPE) PTRACE_O_TRACEFORK);
   if (ret != 0)
     {
-      ret = ptrace (PTRACE_KILL, child_pid, 0, 0);
+      ret = ptrace (PTRACE_KILL, child_pid, (PTRACE_ARG3_TYPE) 0,
+		    (PTRACE_ARG4_TYPE) 0);
       if (ret != 0)
 	{
 	  warning ("linux_test_for_tracefork: failed to kill child");
 	  return;
 	}
 
+      status = 0;
       ret = my_waitpid (child_pid, &status, 0);
       if (ret != child_pid)
 	warning ("linux_test_for_tracefork: failed to wait for killed child");
       else if (!WIFSIGNALED (status))
-	warning ("linux_test_for_tracefork: unexpected wait status 0x%x from "
-		 "killed child", status);
+	warning ("linux_test_for_tracefork: unexpected wait status "
+		 "0x%x (WIFSIGNALED=%d) from killed child",
+		 status,
+		 WIFSIGNALED (status));
 
       return;
     }
 
-  ret = ptrace (PTRACE_CONT, child_pid, 0, 0);
+  ret = ptrace (PTRACE_CONT, child_pid, (PTRACE_ARG3_TYPE) 0,
+		(PTRACE_ARG4_TYPE) 0);
   if (ret != 0)
     warning ("linux_test_for_tracefork: failed to resume child");
 
@@ -4671,14 +4688,16 @@ linux_test_for_tracefork (void)
       && status >> 16 == PTRACE_EVENT_FORK)
     {
       second_pid = 0;
-      ret = ptrace (PTRACE_GETEVENTMSG, child_pid, 0, &second_pid);
+      ret = ptrace (PTRACE_GETEVENTMSG, child_pid, (PTRACE_ARG3_TYPE) 0,
+		    &second_pid);
       if (ret == 0 && second_pid != 0)
 	{
 	  int second_status;
 
 	  linux_supports_tracefork_flag = 1;
 	  my_waitpid (second_pid, &second_status, 0);
-	  ret = ptrace (PTRACE_KILL, second_pid, 0, 0);
+	  ret = ptrace (PTRACE_KILL, second_pid, (PTRACE_ARG3_TYPE) 0,
+			(PTRACE_ARG4_TYPE) 0);
 	  if (ret != 0)
 	    warning ("linux_test_for_tracefork: failed to kill second child");
 	  my_waitpid (second_pid, &status, 0);
@@ -4690,7 +4709,8 @@ linux_test_for_tracefork (void)
 
   do
     {
-      ret = ptrace (PTRACE_KILL, child_pid, 0, 0);
+      ret = ptrace (PTRACE_KILL, child_pid, (PTRACE_ARG3_TYPE) 0,
+		    (PTRACE_ARG4_TYPE) 0);
       if (ret != 0)
 	warning ("linux_test_for_tracefork: failed to kill child");
       my_waitpid (child_pid, &status, 0);
@@ -4837,9 +4857,12 @@ linux_read_offsets (CORE_ADDR *text_p, CORE_ADDR *data_p)
 
   errno = 0;
 
-  text = ptrace (PTRACE_PEEKUSER, pid, (long)PT_TEXT_ADDR, 0);
-  text_end = ptrace (PTRACE_PEEKUSER, pid, (long)PT_TEXT_END_ADDR, 0);
-  data = ptrace (PTRACE_PEEKUSER, pid, (long)PT_DATA_ADDR, 0);
+  text = ptrace (PTRACE_PEEKUSER, pid, (PTRACE_ARG3_TYPE) PT_TEXT_ADDR,
+		 (PTRACE_ARG4_TYPE) 0);
+  text_end = ptrace (PTRACE_PEEKUSER, pid, (PTRACE_ARG3_TYPE) PT_TEXT_END_ADDR,
+		     (PTRACE_ARG4_TYPE) 0);
+  data = ptrace (PTRACE_PEEKUSER, pid, (PTRACE_ARG3_TYPE) PT_DATA_ADDR,
+		 (PTRACE_ARG4_TYPE) 0);
 
   if (errno == 0)
     {
@@ -4913,7 +4936,7 @@ linux_xfer_siginfo (const char *annex, unsigned char *readbuf,
   if (offset >= sizeof (siginfo))
     return -1;
 
-  if (ptrace (PTRACE_GETSIGINFO, pid, 0, &siginfo) != 0)
+  if (ptrace (PTRACE_GETSIGINFO, pid, (PTRACE_ARG3_TYPE) 0, &siginfo) != 0)
     return -1;
 
   /* When GDBSERVER is built as a 64-bit application, ptrace writes into
@@ -4934,7 +4957,7 @@ linux_xfer_siginfo (const char *annex, unsigned char *readbuf,
       /* Convert back to ptrace layout before flushing it out.  */
       siginfo_fixup (&siginfo, inf_siginfo, 1);
 
-      if (ptrace (PTRACE_SETSIGINFO, pid, 0, &siginfo) != 0)
+      if (ptrace (PTRACE_SETSIGINFO, pid, (PTRACE_ARG3_TYPE) 0, &siginfo) != 0)
 	return -1;
     }
 
diff --git a/gdb/osabi.c b/gdb/osabi.c
index faffe30..6eb4a2c 100644
--- a/gdb/osabi.c
+++ b/gdb/osabi.c
@@ -73,6 +73,7 @@ static const char * const gdb_osabi_names[] =
   "Darwin",
   "Symbian",
   "OpenVMS",
+  "Newlib",
 
   "<invalid>"
 };
diff --git a/gdb/features/aarch64.c b/gdb/features/aarch64.c
new file mode 100644
index 0000000..1e9a99d
--- /dev/null
+++ b/gdb/features/aarch64.c
@@ -0,0 +1,174 @@
+/* THIS FILE IS GENERATED.  -*- buffer-read-only: t -*- vi:set ro:
+  Original: aarch64.xml */
+
+#include "defs.h"
+#include "osabi.h"
+#include "target-descriptions.h"
+
+struct target_desc *tdesc_aarch64;
+static void
+initialize_tdesc_aarch64 (void)
+{
+  struct target_desc *result = allocate_target_description ();
+  struct tdesc_feature *feature;
+  struct tdesc_type *field_type;
+  struct tdesc_type *type;
+
+  set_tdesc_architecture (result, bfd_scan_arch ("aarch64"));
+
+  feature = tdesc_create_feature (result, "org.gnu.gdb.aarch64.core");
+  tdesc_create_reg (feature, "x0", 0, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x1", 1, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x2", 2, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x3", 3, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x4", 4, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x5", 5, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x6", 6, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x7", 7, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x8", 8, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x9", 9, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x10", 10, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x11", 11, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x12", 12, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x13", 13, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x14", 14, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x15", 15, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x16", 16, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x17", 17, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x18", 18, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x19", 19, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x20", 20, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x21", 21, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x22", 22, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x23", 23, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x24", 24, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x25", 25, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x26", 26, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x27", 27, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x28", 28, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x29", 29, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x30", 30, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "sp", 31, 1, NULL, 64, "data_ptr");
+  tdesc_create_reg (feature, "pc", 32, 1, NULL, 64, "code_ptr");
+  tdesc_create_reg (feature, "cpsr", 33, 1, NULL, 32, "int");
+
+  feature = tdesc_create_feature (result, "org.gnu.gdb.aarch64.fpu");
+  field_type = tdesc_named_type (feature, "ieee_double");
+  tdesc_create_vector (feature, "v2d", field_type, 2);
+
+  field_type = tdesc_named_type (feature, "uint64");
+  tdesc_create_vector (feature, "v2u", field_type, 2);
+
+  field_type = tdesc_named_type (feature, "int64");
+  tdesc_create_vector (feature, "v2i", field_type, 2);
+
+  field_type = tdesc_named_type (feature, "ieee_single");
+  tdesc_create_vector (feature, "v4f", field_type, 4);
+
+  field_type = tdesc_named_type (feature, "uint32");
+  tdesc_create_vector (feature, "v4u", field_type, 4);
+
+  field_type = tdesc_named_type (feature, "int32");
+  tdesc_create_vector (feature, "v4i", field_type, 4);
+
+  field_type = tdesc_named_type (feature, "uint16");
+  tdesc_create_vector (feature, "v8u", field_type, 8);
+
+  field_type = tdesc_named_type (feature, "int16");
+  tdesc_create_vector (feature, "v8i", field_type, 8);
+
+  field_type = tdesc_named_type (feature, "uint8");
+  tdesc_create_vector (feature, "v16u", field_type, 16);
+
+  field_type = tdesc_named_type (feature, "int8");
+  tdesc_create_vector (feature, "v16i", field_type, 16);
+
+  field_type = tdesc_named_type (feature, "uint128");
+  tdesc_create_vector (feature, "v1u", field_type, 1);
+
+  field_type = tdesc_named_type (feature, "int128");
+  tdesc_create_vector (feature, "v1i", field_type, 1);
+
+  type = tdesc_create_union (feature, "vnd");
+  field_type = tdesc_named_type (feature, "v2d");
+  tdesc_add_field (type, "f", field_type);
+  field_type = tdesc_named_type (feature, "v2u");
+  tdesc_add_field (type, "u", field_type);
+  field_type = tdesc_named_type (feature, "v2i");
+  tdesc_add_field (type, "s", field_type);
+
+  type = tdesc_create_union (feature, "vns");
+  field_type = tdesc_named_type (feature, "v4f");
+  tdesc_add_field (type, "f", field_type);
+  field_type = tdesc_named_type (feature, "v4u");
+  tdesc_add_field (type, "u", field_type);
+  field_type = tdesc_named_type (feature, "v4i");
+  tdesc_add_field (type, "s", field_type);
+
+  type = tdesc_create_union (feature, "vnh");
+  field_type = tdesc_named_type (feature, "v8u");
+  tdesc_add_field (type, "u", field_type);
+  field_type = tdesc_named_type (feature, "v8i");
+  tdesc_add_field (type, "s", field_type);
+
+  type = tdesc_create_union (feature, "vnb");
+  field_type = tdesc_named_type (feature, "v16u");
+  tdesc_add_field (type, "u", field_type);
+  field_type = tdesc_named_type (feature, "v16i");
+  tdesc_add_field (type, "s", field_type);
+
+  type = tdesc_create_union (feature, "vnq");
+  field_type = tdesc_named_type (feature, "v1u");
+  tdesc_add_field (type, "u", field_type);
+  field_type = tdesc_named_type (feature, "v1i");
+  tdesc_add_field (type, "s", field_type);
+
+  type = tdesc_create_union (feature, "aarch64v");
+  field_type = tdesc_named_type (feature, "vnd");
+  tdesc_add_field (type, "d", field_type);
+  field_type = tdesc_named_type (feature, "vns");
+  tdesc_add_field (type, "s", field_type);
+  field_type = tdesc_named_type (feature, "vnh");
+  tdesc_add_field (type, "h", field_type);
+  field_type = tdesc_named_type (feature, "vnb");
+  tdesc_add_field (type, "b", field_type);
+  field_type = tdesc_named_type (feature, "vnq");
+  tdesc_add_field (type, "q", field_type);
+
+  tdesc_create_reg (feature, "v0", 34, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v1", 35, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v2", 36, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v3", 37, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v4", 38, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v5", 39, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v6", 40, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v7", 41, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v8", 42, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v9", 43, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v10", 44, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v11", 45, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v12", 46, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v13", 47, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v14", 48, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v15", 49, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v16", 50, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v17", 51, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v18", 52, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v19", 53, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v20", 54, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v21", 55, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v22", 56, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v23", 57, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v24", 58, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v25", 59, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v26", 60, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v27", 61, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v28", 62, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v29", 63, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v30", 64, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "v31", 65, 1, NULL, 128, "aarch64v");
+  tdesc_create_reg (feature, "fpsr", 66, 1, NULL, 32, "int");
+  tdesc_create_reg (feature, "fpcr", 67, 1, NULL, 32, "int");
+
+  tdesc_aarch64 = result;
+}
diff --git a/gdb/regformats/aarch64-without-fpu.dat b/gdb/regformats/aarch64-without-fpu.dat
new file mode 100644
index 0000000..a38ed58
--- /dev/null
+++ b/gdb/regformats/aarch64-without-fpu.dat
@@ -0,0 +1,38 @@
+# DO NOT EDIT: generated from aarch64-without-fpu.xml
+name:aarch64_without_fpu
+xmltarget:aarch64-without-fpu.xml
+expedite:x29,sp,pc
+64:x0
+64:x1
+64:x2
+64:x3
+64:x4
+64:x5
+64:x6
+64:x7
+64:x8
+64:x9
+64:x10
+64:x11
+64:x12
+64:x13
+64:x14
+64:x15
+64:x16
+64:x17
+64:x18
+64:x19
+64:x20
+64:x21
+64:x22
+64:x23
+64:x24
+64:x25
+64:x26
+64:x27
+64:x28
+64:x29
+64:x30
+64:sp
+64:pc
+32:cpsr
diff --git a/gdb/regformats/aarch64.dat b/gdb/regformats/aarch64.dat
new file mode 100644
index 0000000..afe1028
--- /dev/null
+++ b/gdb/regformats/aarch64.dat
@@ -0,0 +1,72 @@
+# DO NOT EDIT: generated from aarch64.xml
+name:aarch64
+xmltarget:aarch64.xml
+expedite:x29,sp,pc
+64:x0
+64:x1
+64:x2
+64:x3
+64:x4
+64:x5
+64:x6
+64:x7
+64:x8
+64:x9
+64:x10
+64:x11
+64:x12
+64:x13
+64:x14
+64:x15
+64:x16
+64:x17
+64:x18
+64:x19
+64:x20
+64:x21
+64:x22
+64:x23
+64:x24
+64:x25
+64:x26
+64:x27
+64:x28
+64:x29
+64:x30
+64:sp
+64:pc
+32:cpsr
+128:v0
+128:v1
+128:v2
+128:v3
+128:v4
+128:v5
+128:v6
+128:v7
+128:v8
+128:v9
+128:v10
+128:v11
+128:v12
+128:v13
+128:v14
+128:v15
+128:v16
+128:v17
+128:v18
+128:v19
+128:v20
+128:v21
+128:v22
+128:v23
+128:v24
+128:v25
+128:v26
+128:v27
+128:v28
+128:v29
+128:v30
+128:v31
+32:fpsr
+32:fpcr
diff --git a/gdb/features/aarch64-without-fpu.c b/gdb/features/aarch64-without-fpu.c
new file mode 100644
index 0000000..dd1b029
--- /dev/null
+++ b/gdb/features/aarch64-without-fpu.c
@@ -0,0 +1,54 @@
+/* THIS FILE IS GENERATED.  -*- buffer-read-only: t -*- vi:set ro:
+  Original: aarch64-without-fpu.xml */
+
+#include "defs.h"
+#include "osabi.h"
+#include "target-descriptions.h"
+
+struct target_desc *tdesc_aarch64_without_fpu;
+static void
+initialize_tdesc_aarch64_without_fpu (void)
+{
+  struct target_desc *result = allocate_target_description ();
+  struct tdesc_feature *feature;
+
+  set_tdesc_architecture (result, bfd_scan_arch ("aarch64"));
+
+  feature = tdesc_create_feature (result, "org.gnu.gdb.aarch64.core");
+  tdesc_create_reg (feature, "x0", 0, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x1", 1, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x2", 2, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x3", 3, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x4", 4, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x5", 5, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x6", 6, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x7", 7, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x8", 8, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x9", 9, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x10", 10, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x11", 11, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x12", 12, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x13", 13, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x14", 14, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x15", 15, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x16", 16, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x17", 17, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x18", 18, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x19", 19, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x20", 20, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x21", 21, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x22", 22, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x23", 23, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x24", 24, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x25", 25, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x26", 26, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x27", 27, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x28", 28, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x29", 29, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "x30", 30, 1, NULL, 64, "int");
+  tdesc_create_reg (feature, "sp", 31, 1, NULL, 64, "data_ptr");
+  tdesc_create_reg (feature, "pc", 32, 1, NULL, 64, "code_ptr");
+  tdesc_create_reg (feature, "cpsr", 33, 1, NULL, 32, "int");
+
+  tdesc_aarch64_without_fpu = result;
+}
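
For reference, the debug-register resource-information word documented in the
new linux-aarch64-low.c above can be decoded as in the following minimal,
self-contained sketch.  It is illustrative only and not part of the patch; the
value 0x06000204 is a made-up example describing a v8 debug architecture with
4 breakpoint and 2 watchpoint registers.

#include <stdio.h>

/* Same bit layout as documented in the patch: bits 0-7 hold NUM_BPS,
   bits 8-15 hold NUM_WPS and bits 24-31 hold DEBUG_ARCH.  */
#define AARCH64_DEBUG_NUM_BPS(x) (((x) >> 0) & 0xff)
#define AARCH64_DEBUG_NUM_WPS(x) (((x) >> 8) & 0xff)
#define AARCH64_DEBUG_ARCH(x) (((x) >> 24) & 0xff)
#define AARCH64_DEBUG_ARCH_V8 0x6

int
main (void)
{
  unsigned int dr_info = 0x06000204;	/* Example value only.  */

  if (AARCH64_DEBUG_ARCH (dr_info) == AARCH64_DEBUG_ARCH_V8)
    printf ("v8 debug arch: %u breakpoint regs, %u watchpoint regs\n",
	    AARCH64_DEBUG_NUM_BPS (dr_info),
	    AARCH64_DEBUG_NUM_WPS (dr_info));
  return 0;
}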
