[PING][PATCH v2][BZ #11087] Use atomic operations to track memory
- From: Ondřej Bílka <neleai@seznam.cz>
- To: libc-alpha@sourceware.org
- Date: Mon, 21 Oct 2013 09:00:59 +0200
- Subject: [PING][PATCH v2][BZ #11087] Use atomic operations to track memory
- References: <20131017114140.GA24230@domone.podge> <20131018081535.GA5679@domone.podge>
ping
On Fri, Oct 18, 2013 at 10:15:35AM +0200, Ondřej Bílka wrote:
> On Thu, Oct 17, 2013 at 01:41:40PM +0200, Ondřej Bílka wrote:
> > Hi,
> >
> > I fixed this mostly because Ulrich was wrong here in several ways.
> >
> > Calling the locking added to update the statistics too expensive is
> > nonsense: the update is needed only after an mmap, and the mmap plus
> > its associated minor faults are much more costly.
> >
> > Also, no locking is needed at all; an atomic add does the job well.
> >
> > This bug also affects malloc_stats.
> >
> > Comments?
> >
>
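(For illustration only: a minimal standalone sketch of the lock-free
maximum update the patch relies on, written with C11 atomics rather
than glibc's internal atomic_* macros; atomic_max_size is a
hypothetical stand-in for glibc's atomic_max.)

#include <stdatomic.h>
#include <stddef.h>

/* Raise *MAX to VAL unless another thread has already stored something
   larger.  A plain CAS loop, so no lock is ever taken.  */
static void
atomic_max_size (_Atomic size_t *max, size_t val)
{
  size_t old = atomic_load (max);
  while (old < val
         && !atomic_compare_exchange_weak (max, &old, val))
    /* The failed CAS reloaded OLD; retry until VAL is installed or the
       observed maximum is already >= VAL.  */
    ;
}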
> Here is a version that uses the atomic_* functions and accurately
> tracks the maximum. OK to commit?
>
> [BZ #11087]
> * malloc/malloc.c: Accurately track mmapped memory.
>
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 1a18c3f..b814062 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -2325,12 +2325,11 @@ static void* sysmalloc(INTERNAL_SIZE_T nb, mstate av)
>
> /* update statistics */
>
> - if (++mp_.n_mmaps > mp_.max_n_mmaps)
> - mp_.max_n_mmaps = mp_.n_mmaps;
> + int old = atomic_exchange_and_add (&mp_.n_mmaps, 1);
> + atomic_max (&mp_.max_n_mmaps, old + 1);
>
> - sum = mp_.mmapped_mem += size;
> - if (sum > (unsigned long)(mp_.max_mmapped_mem))
> - mp_.max_mmapped_mem = sum;
> + sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
> + atomic_max (&mp_.max_mmapped_mem, sum);
>
> check_chunk(av, p);
>
> @@ -2780,8 +2779,8 @@ munmap_chunk(mchunkptr p)
> return;
> }
>
> - mp_.n_mmaps--;
> - mp_.mmapped_mem -= total_size;
> + atomic_decrement (&mp_.n_mmaps);
> + atomic_add (&mp_.mmapped_mem, -total_size);
>
> /* If munmap failed the process virtual memory address space is in a
> bad shape. Just leave the block hanging around, the process will
> @@ -2798,6 +2797,7 @@ mremap_chunk(mchunkptr p, size_t new_size)
> size_t page_mask = GLRO(dl_pagesize) - 1;
> INTERNAL_SIZE_T offset = p->prev_size;
> INTERNAL_SIZE_T size = chunksize(p);
> + INTERNAL_SIZE_T old;
> char *cp;
>
> assert (chunk_is_mmapped(p));
> @@ -2822,10 +2822,8 @@ mremap_chunk(mchunkptr p, size_t new_size)
> assert((p->prev_size == offset));
> set_head(p, (new_size - offset)|IS_MMAPPED);
>
> - mp_.mmapped_mem -= size + offset;
> - mp_.mmapped_mem += new_size;
> - if ((unsigned long)mp_.mmapped_mem > (unsigned long)mp_.max_mmapped_mem)
> - mp_.max_mmapped_mem = mp_.mmapped_mem;
> + old = atomic_exchange_and_add (&mp_.mmapped_mem, new_size - size - offset);
> + atomic_max (&mp_.max_mmapped_mem, old + new_size - size - offset);
> return p;
> }
>
--
static from plastic slide rules