Re: [PATCH v2] benchtests: Add malloc microbenchmark


On 30 May 2014 10:45, Siddhesh Poyarekar <siddhesh@redhat.com> wrote:

Hi Siddhesh,

Thanks for the review!

> On Thu, Apr 17, 2014 at 01:30:35PM +0100, Will Newton wrote:
>> Add a microbenchmark for measuring malloc and free performance. The
>> benchmark allocates and frees buffers of random sizes in a random
>> order and measures the overall execution time and RSS. Variants of the
>> benchmark are run with 8, 32 and 64 threads to measure the effect of
>> concurrency on allocator performance.
>>
>> The random block sizes used follow an inverse square distribution
>> which is intended to mimic the behaviour of real applications which
>> tend to allocate many more small blocks than large ones.
>>
>> ChangeLog:
>>
>> 2014-04-15  Will Newton  <will.newton@linaro.org>
>>
>>       * benchtests/Makefile: (benchset): Add malloc benchmarks.
>>       Link threaded malloc benchmarks with libpthread.
>>       * benchtests/bench-malloc-threads-32.c: New file.
>>       * benchtests/bench-malloc-threads-64.c: Likewise.
>>       * benchtests/bench-malloc-threads-8.c: Likewise.
>>       * benchtests/bench-malloc.c: Likewise.
>> ---
>>  benchtests/Makefile                  |   7 +-
>>  benchtests/bench-malloc-threads-32.c |  20 +++
>>  benchtests/bench-malloc-threads-64.c |  20 +++
>>  benchtests/bench-malloc-threads-8.c  |  20 +++
>>  benchtests/bench-malloc.c            | 236 +++++++++++++++++++++++++++++++++++
>>  5 files changed, 302 insertions(+), 1 deletion(-)
>>  create mode 100644 benchtests/bench-malloc-threads-32.c
>>  create mode 100644 benchtests/bench-malloc-threads-64.c
>>  create mode 100644 benchtests/bench-malloc-threads-8.c
>>  create mode 100644 benchtests/bench-malloc.c
>>
>> Changes in v2:
>>  - Move random number generation out of the loop and use arrays of random
>>    values. This reduces the overhead of the benchmark loop to 10% or less.
>>
>> diff --git a/benchtests/Makefile b/benchtests/Makefile
>> index a0954cd..f38380d 100644
>> --- a/benchtests/Makefile
>> +++ b/benchtests/Makefile
>> @@ -37,9 +37,11 @@ string-bench := bcopy bzero memccpy memchr memcmp memcpy memmem memmove \
>>               strspn strstr strcpy_chk stpcpy_chk memrchr strsep strtok
>>  string-bench-all := $(string-bench)
>>
>> +malloc-bench := malloc malloc-threads-8 malloc-threads-32 malloc-threads-64
>> +
>>  stdlib-bench := strtod
>>
>> -benchset := $(string-bench-all) $(stdlib-bench)
>> +benchset := $(string-bench-all) $(stdlib-bench) $(malloc-bench)
>
> The ideal output here would be to have a single bench-malloc.out that
> has the number of threads as variants.

Yes, I think you're right. It would also be useful to have all the
data in one file when displaying the results graphically. I'll
refactor the code in that direction.
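
Something like this is what I have in mind (just a sketch, assuming the
thread count becomes a runtime parameter of a single binary; the array
contents and the use of the thread count as the JSON object name are
illustrative only):

  /* Run the benchmark loop once per thread count and emit one JSON
     object per variant, so a single .out file holds all the results.  */
  static const size_t thread_counts[] = { 1, 8, 16, 32 };

  for (size_t t = 0; t < sizeof (thread_counts) / sizeof (thread_counts[0]); t++)
    {
      char name[32];
      snprintf (name, sizeof (name), "%zu", thread_counts[t]);

      json_attr_object_begin (&json_ctx, name);
      /* ... time do_benchmark () with thread_counts[t] threads and
         emit duration/iterations/mean as in the existing main ...  */
      json_attr_object_end (&json_ctx);
    }

That would also mean do_benchmark () taking the thread count as an
argument rather than compiling it in via NUM_THREADS.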

>>
>>  CFLAGS-bench-ffs.c += -fno-builtin
>>  CFLAGS-bench-ffsll.c += -fno-builtin
>> @@ -47,6 +49,9 @@ CFLAGS-bench-ffsll.c += -fno-builtin
>>  $(addprefix $(objpfx)bench-,$(bench-math)): $(common-objpfx)math/libm.so
>>  $(addprefix $(objpfx)bench-,$(bench-pthread)): \
>>       $(common-objpfx)nptl/libpthread.so
>> +$(objpfx)bench-malloc-threads-8: $(common-objpfx)nptl/libpthread.so
>> +$(objpfx)bench-malloc-threads-32: $(common-objpfx)nptl/libpthread.so
>> +$(objpfx)bench-malloc-threads-64: $(common-objpfx)nptl/libpthread.so
>>
>
> $(addprefix $(objpfx)bench-,$(malloc-bench)): $(common-objpfx)nptl/libpthread.so

Fixed.

>>
>>
>> diff --git a/benchtests/bench-malloc-threads-32.c b/benchtests/bench-malloc-threads-32.c
>> new file mode 100644
>> index 0000000..463ceb7
>> --- /dev/null
>> +++ b/benchtests/bench-malloc-threads-32.c
>> @@ -0,0 +1,20 @@
>> +/* Measure malloc and free functions with threads.
>> +   Copyright (C) 2014 Free Software Foundation, Inc.
>> +   This file is part of the GNU C Library.
>> +
>> +   The GNU C Library is free software; you can redistribute it and/or
>> +   modify it under the terms of the GNU Lesser General Public
>> +   License as published by the Free Software Foundation; either
>> +   version 2.1 of the License, or (at your option) any later version.
>> +
>> +   The GNU C Library is distributed in the hope that it will be useful,
>> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> +   Lesser General Public License for more details.
>> +
>> +   You should have received a copy of the GNU Lesser General Public
>> +   License along with the GNU C Library; if not, see
>> +   <http://www.gnu.org/licenses/>.  */
>> +
>> +#define NUM_THREADS 32
>> +#include "bench-malloc.c"
>> diff --git a/benchtests/bench-malloc-threads-64.c b/benchtests/bench-malloc-threads-64.c
>> new file mode 100644
>> index 0000000..61d8c10
>> --- /dev/null
>> +++ b/benchtests/bench-malloc-threads-64.c
>> @@ -0,0 +1,20 @@
>> +/* Measure malloc and free functions with threads.
>> +   Copyright (C) 2014 Free Software Foundation, Inc.
>> +   This file is part of the GNU C Library.
>> +
>> +   The GNU C Library is free software; you can redistribute it and/or
>> +   modify it under the terms of the GNU Lesser General Public
>> +   License as published by the Free Software Foundation; either
>> +   version 2.1 of the License, or (at your option) any later version.
>> +
>> +   The GNU C Library is distributed in the hope that it will be useful,
>> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> +   Lesser General Public License for more details.
>> +
>> +   You should have received a copy of the GNU Lesser General Public
>> +   License along with the GNU C Library; if not, see
>> +   <http://www.gnu.org/licenses/>.  */
>> +
>> +#define NUM_THREADS 64
>> +#include "bench-malloc.c"
>> diff --git a/benchtests/bench-malloc-threads-8.c b/benchtests/bench-malloc-threads-8.c
>> new file mode 100644
>> index 0000000..ac4ff79
>> --- /dev/null
>> +++ b/benchtests/bench-malloc-threads-8.c
>> @@ -0,0 +1,20 @@
>> +/* Measure malloc and free functions with threads.
>> +   Copyright (C) 2014 Free Software Foundation, Inc.
>> +   This file is part of the GNU C Library.
>> +
>> +   The GNU C Library is free software; you can redistribute it and/or
>> +   modify it under the terms of the GNU Lesser General Public
>> +   License as published by the Free Software Foundation; either
>> +   version 2.1 of the License, or (at your option) any later version.
>> +
>> +   The GNU C Library is distributed in the hope that it will be useful,
>> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> +   Lesser General Public License for more details.
>> +
>> +   You should have received a copy of the GNU Lesser General Public
>> +   License along with the GNU C Library; if not, see
>> +   <http://www.gnu.org/licenses/>.  */
>> +
>> +#define NUM_THREADS 8
>> +#include "bench-malloc.c"
>> diff --git a/benchtests/bench-malloc.c b/benchtests/bench-malloc.c
>> new file mode 100644
>> index 0000000..dc4fe17
>> --- /dev/null
>> +++ b/benchtests/bench-malloc.c
>> @@ -0,0 +1,236 @@
>> +/* Benchmark malloc and free functions.
>> +   Copyright (C) 2013-2014 Free Software Foundation, Inc.
>> +   This file is part of the GNU C Library.
>> +
>> +   The GNU C Library is free software; you can redistribute it and/or
>> +   modify it under the terms of the GNU Lesser General Public
>> +   License as published by the Free Software Foundation; either
>> +   version 2.1 of the License, or (at your option) any later version.
>> +
>> +   The GNU C Library is distributed in the hope that it will be useful,
>> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> +   Lesser General Public License for more details.
>> +
>> +   You should have received a copy of the GNU Lesser General Public
>> +   License along with the GNU C Library; if not, see
>> +   <http://www.gnu.org/licenses/>.  */
>> +
>> +#include <math.h>
>> +#include <pthread.h>
>> +#include <stdio.h>
>> +#include <stdlib.h>
>> +#include <sys/time.h>
>> +#include <sys/resource.h>
>> +
>> +#include "bench-timing.h"
>> +#include "json-lib.h"
>> +
>> +#define BENCHMARK_ITERATIONS 40000000
>> +#define RAND_SEED            88
>> +
>> +#ifndef NUM_THREADS
>> +#define NUM_THREADS 1
>
> # define...

Fixed.

>> +#endif
>> +
>> +/* Maximum memory that can be allocated at any one time is:
>> +
>> +   NUM_THREADS * WORKING_SET_SIZE * MAX_ALLOCATION_SIZE
>> +
>> +   However due to the distribution of the random block sizes
>> +   the typical amount allocated will be much smaller.  */
>> +#define WORKING_SET_SIZE     1024
>> +
>> +#define MIN_ALLOCATION_SIZE  4
>> +#define MAX_ALLOCATION_SIZE  32768
>
> A maximum of 32K only tests arena allocation performance.  This is
> fine for now since malloc+mmap performance is not as interesting.  What is
> interesting though is the dynamic threshold management, which brings
> allocations into the arena for larger sizes, and what kind of
> performance improvement it provides, but that is a different
> benchmark.

There are at least two axes we are interested in - how performance
scales with the number of threads and how performance scales with the
allocation size. For thread performance (which is what this benchmark
is about) the larger allocations are not so interesting - typically
their locking overhead is in the kernel rather than userland, and in
terms of real-world application performance they are just not as
likely to be a bottleneck as small allocations. We have to be
pragmatic about which choices we make, as the full matrix of threads
versus allocation sizes would be pretty huge.

So I guess I should probably also write a benchmark for allocation
size for glibc as well...
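
To make that concrete, here is roughly the shape I'd expect such a
size benchmark to take - a throwaway single-threaded sketch, not using
bench-timing.h or json-lib, with the iteration count and size range
picked arbitrarily:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  int
  main (void)
  {
    enum { ITERS = 1000000 };

    /* Sweep power-of-two block sizes and report the mean time per
       malloc/free pair at each size.  */
    for (size_t size = 4; size <= (1 << 20); size *= 2)
      {
        struct timespec start, stop;

        clock_gettime (CLOCK_MONOTONIC, &start);
        for (int i = 0; i < ITERS; i++)
          {
            char *p = malloc (size);
            if (p != NULL)
              /* Touch the block so the pair cannot be optimised away.  */
              *(volatile char *) p = 0;
            free (p);
          }
        clock_gettime (CLOCK_MONOTONIC, &stop);

        double ns = (stop.tv_sec - start.tv_sec) * 1e9
                    + (stop.tv_nsec - start.tv_nsec);
        printf ("%zu %g\n", size, ns / ITERS);
      }
    return 0;
  }

The real thing would obviously reuse the benchtests infrastructure and
emit JSON like the rest.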

>> +
>> +/* Get a random block size with an inverse square distribution.  */
>> +static unsigned int
>> +get_block_size (unsigned int rand_data)
>> +{
>> +  /* Inverse square.  */
>> +  float exponent = -2;
>
> Mark as const.

Ok, although I don't believe it affects code generation.

>> +  /* Minimum value of distribution.  */
>> +  float dist_min = MIN_ALLOCATION_SIZE;
>> +  /* Maximum value of distribution.  */
>> +  float dist_max = MAX_ALLOCATION_SIZE;
>> +
>> +  float min_pow = powf (dist_min, exponent + 1);
>> +  float max_pow = powf (dist_max, exponent + 1);
>> +
>> +  float r = (float) rand_data / RAND_MAX;
>> +
>> +  return (unsigned int) powf ((max_pow - min_pow) * r + min_pow, 1 / (exponent + 1));
>> +}
>> +
>> +#define NUM_BLOCK_SIZES      8000
>> +#define NUM_OFFSETS  ((WORKING_SET_SIZE) * 4)
>> +
>> +static unsigned int random_block_sizes[NUM_BLOCK_SIZES];
>> +static unsigned int random_offsets[NUM_OFFSETS];
>> +
>> +static void
>> +init_random_values (void)
>> +{
>> +  size_t i;
>> +
>> +  for (i = 0; i < NUM_BLOCK_SIZES; i++)
>
> You can collapse this to:
>
>   for (size_t i = 0; i < NUM_BLOCK_SIZES; i++)
>
>> +    random_block_sizes[i] = get_block_size (rand ());
>> +
>> +  for (i = 0; i < NUM_OFFSETS; i++)
>
> Likewise.
>
>> +    random_offsets[i] = rand () % WORKING_SET_SIZE;
>> +}
>> +
>> +static unsigned int
>> +get_random_block_size (unsigned int *state)
>> +{
>> +  unsigned int idx = *state;
>> +
>> +  if (idx >= NUM_BLOCK_SIZES - 1)
>> +    idx = 0;
>> +  else
>> +    idx++;
>> +
>> +  *state = idx;
>> +
>> +  return random_block_sizes[idx];
>> +}
>> +
>> +static unsigned int
>> +get_random_offset (unsigned int *state)
>> +{
>> +  unsigned int idx = *state;
>> +
>> +  if (idx >= NUM_OFFSETS - 1)
>> +    idx = 0;
>> +  else
>> +    idx++;
>> +
>> +  *state = idx;
>> +
>> +  return random_offsets[idx];
>> +}
>> +
>> +/* Allocate and free blocks in a random order.  */
>> +static void
>> +malloc_benchmark_loop (size_t iters, void **ptr_arr)
>> +{
>> +  size_t i;
>> +  unsigned int offset_state = 0, block_state = 0;
>> +
>> +  for (i = 0; i < iters; i++)
>
> You can collapse this to:
>
>   for (size_t i = 0; i < iters; i++)

Done.

>> +    {
>> +      unsigned int next_idx = get_random_offset (&offset_state);
>> +      unsigned int next_block = get_random_block_size (&block_state);
>> +
>> +      free (ptr_arr[next_idx]);
>> +
>> +      ptr_arr[next_idx] = malloc (next_block);
>> +    }
>> +}
>> +
>> +static void *working_set[NUM_THREADS][WORKING_SET_SIZE];
>> +
>> +#if NUM_THREADS > 1
>> +static pthread_t threads[NUM_THREADS];
>> +
>> +struct thread_args
>> +{
>> +  size_t iters;
>> +  void **working_set;
>> +};
>> +
>> +static void *
>> +benchmark_thread (void *arg)
>> +{
>> +  struct thread_args *args = (struct thread_args *) arg;
>> +  size_t iters = args->iters;
>> +  void *thread_set = args->working_set;
>> +
>> +  malloc_benchmark_loop (iters, thread_set);
>> +
>> +  return NULL;
>> +}
>> +#endif
>> +
>> +static void
>> +do_benchmark (size_t iters)
>> +{
>> +#if NUM_THREADS == 1
>> +  malloc_benchmark_loop (iters, working_set[0]);
>> +#else
>> +  struct thread_args args[NUM_THREADS];
>> +
>> +  size_t i;
>> +
>> +  for (i = 0; i < NUM_THREADS; i++)
>> +    {
>> +      args[i].iters = iters;
>> +      args[i].working_set = working_set[i];
>> +      pthread_create(&threads[i], NULL, benchmark_thread, &args[i]);
>> +    }
>> +
>> +  for (i = 0; i < NUM_THREADS; i++)
>> +    pthread_join(threads[i], NULL);
>> +#endif
>> +}
>> +
>> +int
>> +main (int argc, char **argv)
>> +{
>> +  timing_t start, stop, cur;
>> +  size_t iters = BENCHMARK_ITERATIONS;
>> +  unsigned long res;
>> +  json_ctx_t json_ctx;
>> +  double d_total_s, d_total_i;
>> +
>> +  init_random_values ();
>> +
>> +  json_init (&json_ctx, 0, stdout);
>> +
>> +  json_document_begin (&json_ctx);
>> +
>> +  json_attr_string (&json_ctx, "timing_type", TIMING_TYPE);
>> +
>> +  json_attr_object_begin (&json_ctx, "functions");
>> +
>> +  json_attr_object_begin (&json_ctx, "malloc");
>> +
>> +  json_attr_object_begin (&json_ctx, "");
>> +
>> +  TIMING_INIT (res);
>> +
>> +  (void) res;
>> +
>> +  TIMING_NOW (start);
>> +  do_benchmark (iters);
>> +  TIMING_NOW (stop);
>> +
>> +  struct rusage usage;
>> +  getrusage(RUSAGE_SELF, &usage);
>> +
>> +  TIMING_DIFF (cur, start, stop);
>> +
>> +  d_total_s = cur;
>> +  d_total_i = iters * NUM_THREADS;
>> +
>> +  json_attr_double (&json_ctx, "duration", d_total_s);
>> +  json_attr_double (&json_ctx, "iterations", d_total_i);
>> +  json_attr_double (&json_ctx, "mean", d_total_s / d_total_i);
>> +  json_attr_double (&json_ctx, "max_rss", usage.ru_maxrss);
>
> I don't know how useful max_rss would be since we're only doing a
> malloc and never really writing anything to the allocated memory.
> Smaller sizes will probably result in actual page allocation since we
> write to the chunk headers, but probably not so for larger sizes.

Yes, it is slightly problematic. What you probably want to do is zero
all the memory and measure RSS at that point, but that would slow down
the benchmark and spend lots of time in memset instead. At the moment
max_rss tells you how many pages are taken up by book-keeping, but not
how many of those pages your application would touch anyway.
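
A cheaper middle ground might be to touch one byte per page of each
block right after it is allocated, which forces the pages in without
turning the benchmark into a memset test. Just as a sketch (the helper
and the hard-coded page size are made up; sysconf (_SC_PAGESIZE) would
be the right thing in practice):

  /* Fault in every page of a freshly allocated block without paying
     for a full memset.  */
  static void
  touch_pages (void *ptr, size_t size)
  {
    const size_t page_size = 4096;
    volatile char *p = ptr;

    for (size_t i = 0; i < size; i += page_size)
      p[i] = 0;
  }

Calling that on each block in malloc_benchmark_loop would still cost
something, but far less than zeroing everything, and it would make
max_rss reflect pages an application would actually touch.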

> Overcommit status of the system on which the benchmark was run would
> also be a useful thing to know here because the memory reclamation for
> non-main arenas is different when overcommit_memory is set to 2 and
> that could have performance implications.  That would be
> Linux-specific though, so I'm not sure how to accommodate it here.  It
> could be done as a separate change I guess.

I'll have a think about that...
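
One simple Linux-only option would be to just record the setting
alongside the other parameters, e.g. (sketch only - the helper name
and the JSON key are made up, and -1 stands in for "unknown"):

  /* Read /proc/sys/vm/overcommit_memory; return -1 if it cannot be
     read (e.g. on non-Linux systems).  */
  static int
  read_overcommit_setting (void)
  {
    int value = -1;
    FILE *f = fopen ("/proc/sys/vm/overcommit_memory", "r");

    if (f != NULL)
      {
        if (fscanf (f, "%d", &value) != 1)
          value = -1;
        fclose (f);
      }
    return value;
  }

and then, next to the other json_attr_double calls:

  json_attr_double (&json_ctx, "overcommit_memory", read_overcommit_setting ());

That keeps the benchmark itself portable while still capturing the
information when it is available.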

>> +
>> +  json_attr_double (&json_ctx, "threads", NUM_THREADS);
>> +  json_attr_double (&json_ctx, "min_size", MIN_ALLOCATION_SIZE);
>> +  json_attr_double (&json_ctx, "max_size", MAX_ALLOCATION_SIZE);
>> +  json_attr_double (&json_ctx, "random_seed", RAND_SEED);
>> +
>> +  json_attr_object_end (&json_ctx);
>> +
>> +  json_attr_object_end (&json_ctx);
>> +
>> +  json_attr_object_end (&json_ctx);
>> +
>> +  json_document_end (&json_ctx);
>> +
>> +  return 0;
>> +}
>> --
>> 1.8.1.4
>>
>
> This looks good to me barring a few nits I mentioned above.  Have you
> seen if the non-main arena needs to extend/reduce itself with the
> number of iterations and working set you have defined?  That is
> another overhead since there are a few mprotect/mmap calls happening
> there that could be expensive.

No, I haven't looked into that; so far I have been treating malloc as
a black box, and I'm hoping not to tailor the benchmark too far to one
implementation or another.

I'll rework the patches and hopefully get a graphing script to go with them...

-- 
Will Newton
Toolchain Working Group, Linaro

