This is the mail archive of the binutils@sourceware.org mailing list for the binutils project.
loadbase alignment - ld.so/prelink/kernel or bfd_elf_bfd_from_remote_memory() bug?
Hi,
bfd_elf_bfd_from_remote_memory() mostly works for general ELF files, but it now
fails (I have no simple testcase) due to the ELF `Base address' alignment.
BFD uses `& -i_phdrs[i].p_align' for the VMA addresses, but the real mapping
is only PAGE_SIZE aligned (illustrated in the `BEFORE' dumps below).
For x86_64:
* bfd_elf_bfd_from_remote_memory() expects `loadbase' to be P_ALIGN aligned.
* GCC on x86_64 produces P_ALIGN 0x200000 == 2MB.
* ld.so loads the ELF with `l_map_start' being only PAGE_SIZE aligned.
(The same problem applies to prelink and the Linux kernel's ELF loading.)
* The gELF standard, in its `Base Address' computation, expects the `base
address' to be `maximum page size' aligned, according to the last sentence of:
http://x86.ddj.com/ftp/manuals/tools/elf.pdf
This address is truncated to the nearest multiple of the maximum page
size. The corresponding p_vaddr value itself is also truncated to the
nearest multiple of the maximum page size. The base address is the
difference between the truncated memory address and the truncated
p_vaddr value.
* The x86_64 ELF standard, in its section `3.3.3 Page Size', talks about a
maximum page size of 64KB (I would expect it to scale up to 2MB for the 2MB
PSE pages).
http://www.x86-64.org/documentation/abi.pdf
Systems are permitted to use any power-of-two page size between 4KB and
64KB, inclusive.
According to Roland's mail below (p_align is supposed to match "maxpage"),
IMO ld.so + prelink + the kernel violate the gELF standard.
A P_ALIGN-compliant mapping in ld.so would also make it possible for the
kernel to use 2MB-PSE-pages-optimized mappings for very large .so libraries.
Removing the P_ALIGN masking from bfd_elf_bfd_from_remote_memory() works
around the problem, but I expect the fix should rather go to glibc + prelink +
the kernel, right?
I am proposing an ld.so patch below which fixes the
bfd_elf_bfd_from_remote_memory() problem.
Program Headers:
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000
0x00000000000004bc 0x00000000000004bc R E 200000
LOAD 0x00000000000004c0 0x00000000002004c0 0x00000000002004c0
0x00000000000001e8 0x00000000000001f8 RW 200000
BEFORE:
00400000-00401000 r-xp 00000000 08:01 4717502 /tmp/alignmain
00600000-00601000 rw-p 00000000 08:01 4717502 /tmp/alignmain
...
2aaaaaaad000-2aaaaaaae000 r-xp 00000000 08:01 4717503 /tmp/alignlib.so
2aaaaaaae000-2aaaaacad000 ---p 00001000 08:01 4717503 /tmp/alignlib.so
2aaaaacad000-2aaaaacae000 rw-p 00000000 08:01 4717503 /tmp/alignlib.so
open("./alignlib.so", O_RDONLY) = 3
...
mmap(NULL, 2098872, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x2aaaaaaad000
mprotect(0x2aaaaaaae000, 2093056, PROT_NONE) = 0
mmap(0x2aaaaacad000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0) = 0x2aaaaacad000
close(3) = 0
AFTER:
00400000-00401000 r-xp 00000000 08:01 4717502 /tmp/alignmain
00600000-00601000 rw-p 00000000 08:01 4717502 /tmp/alignmain
...
2aaaaac00000-2aaaaac01000 r-xp 00000000 08:01 4717503 /tmp/alignlib.so
2aaaaac01000-2aaaaae00000 ---p 2aaaaac01000 00:00 0
2aaaaae00000-2aaaaae01000 rw-p 00000000 08:01 4717503 /tmp/alignlib.so
open("./alignlib.so", O_RDONLY) = 3
...
mmap(NULL, 4194304, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_ANONYMOUS|MAP_DENYWRITE, -1, 0) = 0x2aaaaaaad000
munmap(0x2aaaaaaad000, 1388544) = 0
munmap(0x2aaaaae01000, 704512) = 0
mprotect(0x2aaaaac01000, 2093056, PROT_NONE) = 0
mmap(0x2aaaaac00000, 4096, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0) = 0x2aaaaac00000
mmap(0x2aaaaae00000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0) = 0x2aaaaae00000
close(3) = 0
Regards,
Jan
http://sourceware.org/ml/binutils/2007-08/msg00184.html
On Mon, 13 Aug 2007 03:15:44 +0200, Roland McGrath wrote:
...
> The file itself doesn't tell you what "maxpage" is, but p_align is
> supposed to match it.
> You can't presume the actual memory addresses
> used will be so aligned (it's the maximum page size, not the minimum).
This sentence contradicts my deduction above.
> However, in a correct ELF object's phdrs, each p_vaddr and p_offset
> must be congruent to that alignment.
...
> you need to know the actual page size. bfd_elf_bfd_from_remote_memory
> doesn't know this. (The debugger might know it by some external means.
2007-08-15 Jan Kratochvil <jan.kratochvil@redhat.com>
* elf/dl-load.c (_dl_map_object_from_fd): New variable ALIGNMAX.
New sanity check if P_ALIGN is a power of two.
New variables ALIGNEDSTART, ALIGNEDMAPPEDEND and ALIGNEDMAPLENGTH.
Map ET_DYN ELFs with the P_ALIGN compliant ELF Base address.
--- glibc-20070810T2152-orig/elf/dl-load.c 2007-08-03 17:50:24.000000000 +0200
+++ glibc-20070810T2152/elf/dl-load.c 2007-08-15 00:41:03.000000000 +0200
@@ -1012,6 +1012,7 @@ _dl_map_object_from_fd (const char *name
int prot;
} loadcmds[l->l_phnum], *c;
size_t nloadcmds = 0;
+ ElfW(Addr) alignmax = GLRO(dl_pagesize);
bool has_holes = false;
/* The struct is initialized to zero so this is not necessary:
@@ -1036,6 +1037,12 @@ _dl_map_object_from_fd (const char *name
case PT_LOAD:
/* A load command tells us to map in part of the file.
We record the load commands and process them all later. */
+ if (__builtin_expect ((ph->p_align & (ph->p_align - 1)) != 0,
+ 0))
+ {
+ errstring = N_("ELF load command alignment not a power of two");
+ goto call_lose;
+ }
if (__builtin_expect ((ph->p_align & (GLRO(dl_pagesize) - 1)) != 0,
0))
{
@@ -1049,6 +1056,8 @@ _dl_map_object_from_fd (const char *name
= N_("ELF load command address/offset not properly aligned");
goto call_lose;
}
+ if (ph->p_align > alignmax)
+ alignmax = ph->p_align;
c = &loadcmds[nloadcmds++];
c->mapstart = ph->p_vaddr & ~(GLRO(dl_pagesize) - 1);
@@ -1195,22 +1204,42 @@ cannot allocate TLS data structures for
prefer to map such objects at; but this is only a preference,
the OS can do whatever it likes. */
ElfW(Addr) mappref;
+ ElfW(Addr) alignedstart, alignedmappedend;
+ ElfW(Addr) alignedmaplength = (maplength + GLRO(dl_pagesize) - 1)
+ & -GLRO(dl_pagesize);
mappref = (ELF_PREFERRED_ADDRESS (loader, maplength,
c->mapstart & GLRO(dl_use_load_bias))
- MAP_BASE_ADDR (l));
/* Remember which part of the address space this object uses. */
- l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
+ l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref,
+ alignedmaplength + alignmax
+ - GLRO(dl_pagesize),
c->prot,
- MAP_COPY|MAP_FILE,
- fd, c->mapoff);
+ MAP_COPY|MAP_ANON,
+ ANONFD, 0);
if (__builtin_expect ((void *) l->l_map_start == MAP_FAILED, 0))
{
map_error:
errstring = N_("failed to map segment from shared object");
goto call_lose_errno;
}
-
+ /* Set the ELF `Base address' complying with all the P_ALIGNs. */
+ alignedstart = (l->l_map_start + alignmax - 1) & -alignmax;
+ alignedmappedend = l->l_map_start + alignedmaplength + alignmax
+ - GLRO(dl_pagesize);
+ if ((alignedstart != l->l_map_start
+ && __munmap ((void *) l->l_map_start,
+ alignedstart - l->l_map_start) != 0)
+ || (alignedstart + alignedmaplength != alignedmappedend
+ && __munmap ((void *) (alignedstart + alignedmaplength),
+ alignedmappedend
+ - (alignedstart + alignedmaplength)) != 0))
+ {
+ errstring = N_("failed to unmap excessive alignment memory");
+ goto call_lose_errno;
+ }
+ l->l_map_start = alignedstart;
l->l_map_end = l->l_map_start + maplength;
l->l_addr = l->l_map_start - c->mapstart;
@@ -1225,27 +1254,27 @@ cannot allocate TLS data structures for
PROT_NONE);
l->l_contiguous = 1;
-
- goto postmap;
}
-
- /* This object is loaded at a fixed address. This must never
- happen for objects loaded with dlopen(). */
- if (__builtin_expect ((mode & __RTLD_OPENEXEC) == 0, 0))
+ else
{
- errstring = N_("cannot dynamically load executable");
- goto call_lose;
- }
+ /* This object is loaded at a fixed address. This must never
+ happen for objects loaded with dlopen(). */
+ if (__builtin_expect ((mode & __RTLD_OPENEXEC) == 0, 0))
+ {
+ errstring = N_("cannot dynamically load executable");
+ goto call_lose;
+ }
- /* Notify ELF_PREFERRED_ADDRESS that we have to load this one
- fixed. */
- ELF_FIXED_ADDRESS (loader, c->mapstart);
+ /* Notify ELF_PREFERRED_ADDRESS that we have to load this one
+ fixed. */
+ ELF_FIXED_ADDRESS (loader, c->mapstart);
- /* Remember which part of the address space this object uses. */
- l->l_map_start = c->mapstart + l->l_addr;
- l->l_map_end = l->l_map_start + maplength;
- l->l_contiguous = !has_holes;
+ /* Remember which part of the address space this object uses. */
+ l->l_map_start = c->mapstart + l->l_addr;
+ l->l_map_end = l->l_map_start + maplength;
+ l->l_contiguous = !has_holes;
+ }
while (c < &loadcmds[nloadcmds])
{
@@ -1258,7 +1287,6 @@ cannot allocate TLS data structures for
== MAP_FAILED))
goto map_error;
- postmap:
if (c->prot & PROT_EXEC)
l->l_text_end = l->l_addr + c->mapend;