2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain. Use, modify, and
4 redistribute this code without permission or acknowledgement in any
5 way you wish. Send questions, comments, complaints, performance
6 data, etc to dl@cs.oswego.edu
8 * VERSION 2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
10 Note: There may be an updated version of this malloc obtainable at
11 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
12 Check before installing!
16 This library is all in one file to simplify the most common usage:
17 ftp it, compile it (-O), and link it into another program. All
18 of the compile-time options default to reasonable values for use on
19 most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
You might later want to step through various compile-time and dynamic
tuning options.
23 For convenience, an include file for code using this malloc is at:
24 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.0.h
25 You don't really need this .h file unless you call functions not
26 defined in your system include files. The .h file contains only the
27 excerpts from this file needed for using this malloc on ANSI C/C++
28 systems, so long as you haven't changed compile-time options about
29 naming and tuning parameters. If you do, then you can create your
own malloc.h that does include all settings by cutting at the point
indicated below.
33 * Why use this malloc?
35 This is not the fastest, most space-conserving, most portable, or
36 most tunable malloc ever written. However it is among the fastest
37 while also being among the most space-conserving, portable and tunable.
38 Consistent balance across these factors results in a good general-purpose
39 allocator for malloc-intensive programs.
41 The main properties of the algorithms are:
42 * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
43 with ties normally decided via FIFO (i.e. least recently used).
44 * For small (<= 64 bytes by default) requests, it is a caching
45 allocator, that maintains pools of quickly recycled chunks.
46 * In between, and for combinations of large and small requests, it does
47 the best it can trying to meet both goals at once.
48 * For very large requests (>= 128KB by default), it relies on system
49 memory mapping facilities, if supported.
51 For a longer but slightly out of date high-level description, see
52 http://gee.cs.oswego.edu/dl/html/malloc.html
You may already by default be using a C library containing a malloc
that is based on some version of this malloc (for example in
linux). You might still want to use the one in this file in order to
customize settings or to avoid overheads associated with library
versions.
60 * Contents, described in more detail in "description of public routines" below.
62 Standard (ANSI/SVID/...) functions:
64 calloc(size_t n_elements, size_t element_size);
66 realloc(Void_t* p, size_t n);
67 memalign(size_t alignment, size_t n);
70 mallopt(int parameter_number, int parameter_value)
73 independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
74 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
77 malloc_trim(size_t pad);
78 malloc_usable_size(Void_t* p);
83 Supported pointer representation: 4 or 8 bytes
84 Supported size_t representation: 4 or 8 bytes
85 Note that size_t is allowed to be 4 bytes even if pointers are 8.
86 You can adjust this by defining INTERNAL_SIZE_T
88 Alignment: 2 * sizeof(size_t) (default)
89 (i.e., 8 byte alignment with 4byte size_t). This suffices for
90 nearly all current machines and C compilers. However, you can
91 define MALLOC_ALIGNMENT to be wider than this if necessary.
93 Minimum overhead per allocated chunk: 4 or 8 bytes
94 Each malloced chunk has a hidden word of overhead holding size
95 and status information.
97 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
8-byte ptrs:  24/32 bytes (including 4/8 overhead)
100 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
101 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
102 needed; 4 (8) for a trailing size field and 8 (16) bytes for
free list pointers. Thus, the minimum allocatable size is
16/24/32 bytes.
106 Even a request for zero bytes (i.e., malloc(0)) returns a
107 pointer to something of the minimum allocatable size.
The maximum overhead wastage (i.e., number of extra bytes
allocated beyond those requested in malloc) is less than or equal
111 to the minimum size, except for requests >= mmap_threshold that
112 are serviced via mmap(), where the worst case wastage is 2 *
113 sizeof(size_t) bytes plus the remainder from a system page (the
114 minimal mmap unit); typically 4096 or 8192 bytes.
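For example (a worked case, assuming 4-byte size_t and the default
8-byte alignment): a request for 13 bytes needs 13 + 4 bytes for the
hidden size/status word, which rounds up to a 24-byte chunk; of
these, 20 bytes are usable by the caller, so the wastage is 7 bytes,
well under the 16-byte minimum size noted above.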
116 Maximum allocated size: 4-byte size_t: 2^32 minus about two pages
117 8-byte size_t: 2^64 minus about two pages
119 It is assumed that (possibly signed) size_t values suffice to
120 represent chunk sizes. `Possibly signed' is due to the fact
121 that `size_t' may be defined on a system as either a signed or
122 an unsigned type. The ISO C standard says that it must be
123 unsigned, but a few systems are known not to adhere to this.
124 Additionally, even when size_t is unsigned, sbrk (which is by
125 default used to obtain memory from system) accepts signed
126 arguments, and may not be able to handle size_t-wide arguments
127 with negative sign bit. Generally, values that would
128 appear as negative after accounting for overhead and alignment
are supported only via mmap(), which does not have this
limitation.
132 Requests for sizes outside the allowed range will perform an optional
failure action and then return null. (Requests may also
fail because a system is out of memory.)
136 Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
138 When USE_MALLOC_LOCK is defined, wrappers are created to
139 surround every public call with either a pthread mutex or
140 a win32 spinlock (depending on WIN32). This is not
141 especially fast, and can be a major bottleneck.
142 It is designed only to provide minimal protection
143 in concurrent environments, and to provide a basis for
144 extensions. If you are using malloc in a concurrent program,
145 you would be far better off obtaining ptmalloc, which is
146 derived from a version of this malloc, and is well-tuned for
147 concurrent programs. (See http://www.malloc.de)
149 Compliance: I believe it is compliant with the 1997 Single Unix Specification
(See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably
others as well.
153 * Synopsis of compile-time options:
155 People have reported using previous versions of this malloc on all
156 versions of Unix, sometimes by tweaking some of the defines
157 below. It has been tested most extensively on Solaris and
158 Linux. It is also reported to work on WIN32 platforms.
159 People also report using it in stand-alone embedded systems.
161 The implementation is in straight, hand-tuned ANSI C. It is not
162 at all modular. (Sorry!) It uses a lot of macros. To be at all
163 usable, this code should be compiled using an optimizing compiler
164 (for example gcc -O3) that can simplify expressions and control
165 paths. (FAQ: some macros import variables as arguments rather than
166 declare locals because people reported that some debuggers
167 otherwise get confused.)
171 Compilation Environment options:
173 __STD_C derived from C compiler defines
176 USE_MEMCPY 1 if HAVE_MEMCPY is defined
177 HAVE_MMAP defined as 1
179 HAVE_MREMAP 0 unless linux defined
180 malloc_getpagesize derived from system #includes, or 4096 if not
181 HAVE_USR_INCLUDE_MALLOC_H NOT defined
182 LACKS_UNISTD_H NOT defined unless WIN32
183 LACKS_SYS_PARAM_H NOT defined unless WIN32
184 LACKS_SYS_MMAN_H NOT defined unless WIN32
186 Changing default word sizes:
188 INTERNAL_SIZE_T size_t
189 MALLOC_ALIGNMENT 2 * sizeof(INTERNAL_SIZE_T)
191 Configuration and functionality options:
193 USE_DL_PREFIX NOT defined
194 USE_PUBLIC_MALLOC_WRAPPERS NOT defined
195 USE_MALLOC_LOCK NOT defined
197 REALLOC_ZERO_BYTES_FREES NOT defined
198 MALLOC_FAILURE_ACTION errno = ENOMEM, if __STD_C defined, else no-op
201 Options for customizing MORECORE:
204 MORECORE_CONTIGUOUS 1
205 MORECORE_CANNOT_TRIM NOT defined
206 MMAP_AS_MORECORE_SIZE (1024 * 1024)
208 Tuning options that are also dynamically changeable via mallopt:
211 DEFAULT_TRIM_THRESHOLD 128 * 1024
213 DEFAULT_MMAP_THRESHOLD 128 * 1024
214 DEFAULT_MMAP_MAX 65536
216 There are several other #defined constants and macros that you
217 probably don't want to touch unless you are extending or adapting malloc.
221 WIN32 sets up defaults for MS environment and compilers.
222 Otherwise defaults are for unix.
229 #define WIN32_LEAN_AND_MEAN
232 /* Win32 doesn't supply or need the following headers */
233 #define LACKS_UNISTD_H
234 #define LACKS_SYS_PARAM_H
235 #define LACKS_SYS_MMAN_H
237 /* Use the supplied emulation of sbrk */
238 #define MORECORE sbrk
239 #define MORECORE_CONTIGUOUS 0
240 #define MORECORE_FAILURE ((void*)(-1))
242 /* Use the supplied emulation of mmap and munmap */
244 #define MUNMAP_FAILURE (-1)
245 #define MMAP_CLEARS 1
247 /* These values don't really matter in windows mmap emulation */
248 #define MAP_PRIVATE 1
249 #define MAP_ANONYMOUS 2
253 /* Emulation functions defined at the end of this file */
255 /* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
256 #ifdef USE_MALLOC_LOCK
static int slwait(int *sl);
static int slrelease(int *sl);
static long getpagesize(void);
static long getregionsize(void);
static void *sbrk(long size);
static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
static long munmap(void *ptr, long size);
static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed);
static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user);
__STD_C should be nonzero if using ANSI-standard C compiler, a C++
compiler, or a C compiler sufficiently close to ANSI to get away
with it.

#if defined(__STDC__) || defined(__cplusplus)
290 Void_t* is the pointer type that malloc should say it returns
294 #if (__STD_C || defined(WIN32))
302 #include <stddef.h> /* for size_t */
304 #include <sys/types.h>
311 /* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
313 /* #define LACKS_UNISTD_H */
315 #ifndef LACKS_UNISTD_H
319 /* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
321 /* #define LACKS_SYS_PARAM_H */
324 #include <stdio.h> /* needed for malloc_stats */
325 #include <errno.h> /* needed for optional MALLOC_FAILURE_ACTION */
331 Because freed chunks may be overwritten with bookkeeping fields, this
332 malloc will often die when freed memory is overwritten by user
333 programs. This can be very effective (albeit in an annoying way)
334 in helping track down dangling pointers.
336 If you compile with -DDEBUG, a number of assertion checks are
337 enabled that will catch more memory errors. You probably won't be
338 able to make much sense of the actual assertion errors, but they
339 should help you locate incorrectly overwritten memory. The
340 checking is fairly extensive, and will slow down execution
341 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
342 attempt to check every non-mmapped allocated and free chunk in the
course of computing the summaries. (By nature, mmapped regions
344 cannot be checked very much automatically.)
346 Setting DEBUG may also be helpful if you are trying to modify
347 this code. The assertions in the check routines spell out in more
348 detail the assumptions and invariants underlying the algorithms.
350 Setting DEBUG does NOT provide an automated mechanism for checking
351 that all accesses to malloced memory stay within their
352 bounds. However, there are several add-ons and adaptations of this
353 or other mallocs available that do this.
359 #define assert(x) ((void)0)
364 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
367 The default version is the same as size_t.
369 While not strictly necessary, it is best to define this as an
370 unsigned type, even if size_t is a signed type. This may avoid some
371 artificial size limitations on some systems.
373 On a 64-bit machine, you may be able to reduce malloc overhead by
374 defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
expense of not being able to handle more than 2^32 bytes of malloced
space. If this limitation is acceptable, you are encouraged to set
this unless you are on a platform requiring 16-byte alignments. In
378 this case the alignment requirements turn out to negate any
379 potential advantages of decreasing size_t word size.
381 Implementors: Beware of the possible combinations of:
382 - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
383 and might be the same width as int or as long
- size_t might have different width and signedness than INTERNAL_SIZE_T
385 - int and long might be 32 or 64 bits, and might be the same width
386 To deal with this, most comparisons and difference computations
387 among INTERNAL_SIZE_Ts should cast them to unsigned long, being
388 aware of the fact that casting an unsigned int to a wider long does
389 not sign-extend. (This also makes checking for negative numbers
390 awkward.) Some of these casts result in harmless compiler warnings
394 #ifndef INTERNAL_SIZE_T
395 #define INTERNAL_SIZE_T size_t
398 /* The corresponding word size */
399 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
403 MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
404 It must be a power of two at least 2 * SIZE_SZ, even on machines
405 for which smaller alignments would suffice. It may be defined as
406 larger than this though. Note however that code and data structures
407 are optimized for the case of 8-byte alignment.
411 #ifndef MALLOC_ALIGNMENT
412 #define MALLOC_ALIGNMENT (2 * SIZE_SZ)
415 /* The corresponding bit mask value */
416 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
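/*
  For illustration only (a sketch, not a macro this file relies on at
  this point): with the mask above, a size n is rounded up to an
  alignment boundary as ((n + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK);
  e.g. with 8-byte alignment, a size of 13 rounds up to 16.
*/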
421 REALLOC_ZERO_BYTES_FREES should be set if a call to
422 realloc with zero bytes should be the same as a call to free.
423 Some people think it should. Otherwise, since this malloc
424 returns a unique pointer for malloc(0), so does realloc(p, 0).
427 /* #define REALLOC_ZERO_BYTES_FREES */
430 TRIM_FASTBINS controls whether free() of a very small chunk can
431 immediately lead to trimming. Setting to true (1) can reduce memory
footprint, but will almost always slow down programs that use a lot
of small chunks.
435 Define this only if you are willing to give up some speed to more
436 aggressively reduce system-level memory footprint when releasing
437 memory in programs that use many small chunks. You can get
438 essentially the same effect by setting MXFAST to 0, but this can
439 lead to even greater slowdowns in programs using many small chunks.
TRIM_FASTBINS is an in-between compile-time option that disables
only those chunks bordering topmost memory from being placed in
fastbins.
445 #ifndef TRIM_FASTBINS
446 #define TRIM_FASTBINS 0
451 USE_DL_PREFIX will prefix all public routines with the string 'dl'.
452 This is necessary when you only want to use this malloc in one part
453 of a program, using your regular system malloc elsewhere.
456 /* #define USE_DL_PREFIX */
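/*
  For example (an illustrative sketch; copy_with_dl is a hypothetical
  caller, not part of this file), a program compiled with -DUSE_DL_PREFIX
  can route one subsystem through this allocator while the rest of the
  program keeps the system malloc:

    #include <string.h>

    char* copy_with_dl(const char* s) {
      char* p = (char*) dlmalloc(strlen(s) + 1);  // prefixed name
      if (p != 0) strcpy(p, s);
      return p;            // later released with dlfree(p), not free()
    }
*/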
460 USE_MALLOC_LOCK causes wrapper functions to surround each
461 callable routine with pthread mutex lock/unlock.
463 USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
467 /* #define USE_MALLOC_LOCK */
471 If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
472 actually a wrapper function that first calls MALLOC_PREACTION, then
473 calls the internal routine, and follows it with
474 MALLOC_POSTACTION. This is needed for locking, but you can also use
475 this, without USE_MALLOC_LOCK, for purposes of interception,
476 instrumentation, etc. It is a sad fact that using wrappers often
477 noticeably degrades performance of malloc-intensive programs.
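  For instance (an illustrative sketch, not code in this file), simple
  call counting without locking could be arranged by compiling with
  -DUSE_PUBLIC_MALLOC_WRAPPERS and supplying definitions such as:

    static long malloc_entry_count;                     // hypothetical counter
    #define MALLOC_PREACTION  (malloc_entry_count++, 0) // must yield 0 on success
    #define MALLOC_POSTACTION (0)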
480 #ifdef USE_MALLOC_LOCK
481 #define USE_PUBLIC_MALLOC_WRAPPERS
483 /* #define USE_PUBLIC_MALLOC_WRAPPERS */
488 Two-phase name translation.
489 All of the actual routines are given mangled names.
490 When wrappers are used, they become the public callable versions.
When USE_DL_PREFIX is defined, the callable names are prefixed.
494 #ifndef USE_PUBLIC_MALLOC_WRAPPERS
495 #define cALLOc public_cALLOc
496 #define fREe public_fREe
497 #define cFREe public_cFREe
498 #define mALLOc public_mALLOc
499 #define mEMALIGn public_mEMALIGn
500 #define rEALLOc public_rEALLOc
501 #define vALLOc public_vALLOc
502 #define pVALLOc public_pVALLOc
503 #define mALLINFo public_mALLINFo
504 #define mALLOPt public_mALLOPt
505 #define mTRIm public_mTRIm
506 #define mSTATs public_mSTATs
507 #define mUSABLe public_mUSABLe
508 #define iCALLOc public_iCALLOc
509 #define iCOMALLOc public_iCOMALLOc
#endif /* USE_PUBLIC_MALLOC_WRAPPERS */

#ifdef USE_DL_PREFIX
#define public_cALLOc dlcalloc
514 #define public_fREe dlfree
515 #define public_cFREe dlcfree
516 #define public_mALLOc dlmalloc
517 #define public_mEMALIGn dlmemalign
518 #define public_rEALLOc dlrealloc
519 #define public_vALLOc dlvalloc
520 #define public_pVALLOc dlpvalloc
521 #define public_mALLINFo dlmallinfo
522 #define public_mALLOPt dlmallopt
523 #define public_mTRIm dlmalloc_trim
524 #define public_mSTATs dlmalloc_stats
525 #define public_mUSABLe dlmalloc_usable_size
526 #define public_iCALLOc dlindependent_calloc
527 #define public_iCOMALLOc dlindependent_comalloc
528 #else /* USE_DL_PREFIX */
529 #define public_cALLOc calloc
530 #define public_fREe free
531 #define public_cFREe cfree
532 #define public_mALLOc malloc
533 #define public_mEMALIGn memalign
534 #define public_rEALLOc realloc
535 #define public_vALLOc valloc
536 #define public_pVALLOc pvalloc
537 #define public_mALLINFo mallinfo
538 #define public_mALLOPt mallopt
539 #define public_mTRIm malloc_trim
540 #define public_mSTATs malloc_stats
541 #define public_mUSABLe malloc_usable_size
542 #define public_iCALLOc independent_calloc
543 #define public_iCOMALLOc independent_comalloc
544 #endif /* USE_DL_PREFIX */
548 HAVE_MEMCPY should be defined if you are not otherwise using
549 ANSI STD C, but still have memcpy and memset in your C library
550 and want to use them in calloc and realloc. Otherwise simple
551 macro versions are defined below.
553 USE_MEMCPY should be defined as 1 if you actually want to
554 have memset and memcpy called. People report that the macro
555 versions are faster than libc versions on some systems.
557 Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
558 (of <= 36 bytes) are manually unrolled in realloc and calloc.
572 #if (__STD_C || defined(HAVE_MEMCPY))
575 /* On Win32 memset and memcpy are already declared in windows.h */
578 void* memset(void*, int, size_t);
579 void* memcpy(void*, const void*, size_t);
588 MALLOC_FAILURE_ACTION is the action to take before "return 0" when
589 malloc fails to be able to return memory, either because memory is
590 exhausted or because of illegal arguments.
592 By default, sets errno if running on STD_C platform, else does nothing.
595 #ifndef MALLOC_FAILURE_ACTION
#if __STD_C
#define MALLOC_FAILURE_ACTION \
   errno = ENOMEM;
#else
#define MALLOC_FAILURE_ACTION
#endif
606 MORECORE-related declarations. By default, rely on sbrk
610 #ifdef LACKS_UNISTD_H
611 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__) && !defined(WIN32)
#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif
621 MORECORE is the name of the routine to call to obtain more memory
622 from the system. See below for general guidance on writing
623 alternative MORECORE functions, as well as a version for WIN32 and a
624 sample version for pre-OSX macos.
628 #define MORECORE sbrk
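/*
  A minimal sketch of an alternative MORECORE over a fixed static arena
  (illustrative only; my_morecore and my_arena are hypothetical names).
  It mimics sbrk by returning the previous break, and would be selected
  by compiling with -DMORECORE=my_morecore:

    static char   my_arena[1024*1024];
    static size_t my_brk = 0;

    void* my_morecore(ptrdiff_t increment) {
      size_t old = my_brk;
      if (increment > 0 && my_brk + increment > sizeof(my_arena))
        return (void*) MORECORE_FAILURE;
      if (increment < 0 && (size_t)(-increment) > my_brk)
        return (void*) MORECORE_FAILURE;
      my_brk += increment;
      return my_arena + old;     // like sbrk: the previous break
    }
*/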
632 MORECORE_FAILURE is the value returned upon failure of MORECORE
633 as well as mmap. Since it cannot be an otherwise valid memory address,
and must reflect values of standard sys calls, you probably ought not
try to redefine it.
638 #ifndef MORECORE_FAILURE
639 #define MORECORE_FAILURE (-1)
643 If MORECORE_CONTIGUOUS is true, take advantage of fact that
644 consecutive calls to MORECORE with positive arguments always return
645 contiguous increasing addresses. This is true of unix sbrk. Even
646 if not defined, when regions happen to be contiguous, malloc will
647 permit allocations spanning regions obtained from different
648 calls. But defining this when applicable enables some stronger
649 consistency checks and space efficiencies.
652 #ifndef MORECORE_CONTIGUOUS
653 #define MORECORE_CONTIGUOUS 1
657 Define MORECORE_CANNOT_TRIM if your version of MORECORE
658 cannot release space back to the system when given negative
659 arguments. This is generally necessary only if you are using
660 a hand-crafted MORECORE function that cannot handle negative arguments.
663 /* #define MORECORE_CANNOT_TRIM */
667 Define HAVE_MMAP as true to optionally make malloc() use mmap() to
668 allocate very large blocks. These will be returned to the
669 operating system immediately after a free(). Also, if mmap
670 is available, it is used as a backup strategy in cases where
671 MORECORE fails to provide space from system.
673 This malloc is best tuned to work with mmap for large requests.
674 If you do not have mmap, operations involving very large chunks (1MB
675 or so) may be slower than you'd like.
Standard unix mmap using /dev/zero clears memory so calloc doesn't
need to.
687 #define MMAP_CLEARS 1
692 #define MMAP_CLEARS 0
698 MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
699 sbrk fails, and mmap is used as a backup (which is done only if
700 HAVE_MMAP). The value must be a multiple of page size. This
701 backup strategy generally applies only when systems have "holes" in
702 address space, so sbrk cannot perform contiguous expansion, but
703 there is still space available on system. On systems for which
704 this is known to be useful (i.e. most linux kernels), this occurs
705 only when programs allocate huge amounts of memory. Between this,
706 and the fact that mmap regions tend to be limited, the size should
be large, to avoid too many mmap calls and thus avoid running out
of kernel resources.
711 #ifndef MMAP_AS_MORECORE_SIZE
712 #define MMAP_AS_MORECORE_SIZE (1024 * 1024)
716 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
717 large blocks. This is currently only possible on Linux with
718 kernel versions newer than 1.3.77.
723 #define HAVE_MREMAP 1
725 #define HAVE_MREMAP 0
728 #endif /* HAVE_MMAP */
732 The system page size. To the extent possible, this malloc manages
733 memory from the system in page-size units. Note that this value is
734 cached during initialization into a field of malloc_state. So even
735 if malloc_getpagesize is a function, it is only called once.
737 The following mechanics for getpagesize were adapted from bsd/gnu
738 getpagesize.h. If none of the system-probes here apply, a value of
739 4096 is used, which should be OK: If they don't apply, then using
740 the actual value probably doesn't impact performance.
744 #ifndef malloc_getpagesize
746 #ifndef LACKS_UNISTD_H
750 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
751 # ifndef _SC_PAGE_SIZE
752 # define _SC_PAGE_SIZE _SC_PAGESIZE
756 # ifdef _SC_PAGE_SIZE
757 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
759 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
760 extern size_t getpagesize();
761 # define malloc_getpagesize getpagesize()
763 # ifdef WIN32 /* use supplied emulation of getpagesize */
764 # define malloc_getpagesize getpagesize()
766 # ifndef LACKS_SYS_PARAM_H
767 # include <sys/param.h>
769 # ifdef EXEC_PAGESIZE
770 # define malloc_getpagesize EXEC_PAGESIZE
774 # define malloc_getpagesize NBPG
776 # define malloc_getpagesize (NBPG * CLSIZE)
780 # define malloc_getpagesize NBPC
783 # define malloc_getpagesize PAGESIZE
784 # else /* just guess */
785 # define malloc_getpagesize (4096)
796 This version of malloc supports the standard SVID/XPG mallinfo
797 routine that returns a struct containing usage properties and
798 statistics. It should work on any SVID/XPG compliant system that has
799 a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
800 install such a thing yourself, cut out the preliminary declarations
801 as described above and below and save them in a malloc.h file. But
802 there's no compelling reason to bother to do this.)
804 The main declaration needed is the mallinfo struct that is returned
(by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
bunch of fields that are not even meaningful in this version of
malloc. These fields are instead filled by mallinfo() with
808 other numbers that might be of interest.
810 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
811 /usr/include/malloc.h file that includes a declaration of struct
812 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
813 version is declared below. These must be precisely the same for
814 mallinfo() to work. The original SVID version of this struct,
815 defined on most systems with mallinfo, declares all fields as
816 ints. But some others define as unsigned long. If your system
817 defines the fields using a type of different width than listed here,
818 you must #include your system version and #define
819 HAVE_USR_INCLUDE_MALLOC_H.
822 /* #define HAVE_USR_INCLUDE_MALLOC_H */
824 #ifdef HAVE_USR_INCLUDE_MALLOC_H
825 #include "/usr/include/malloc.h"
828 /* SVID2/XPG mallinfo structure */
struct mallinfo {
  int arena;    /* non-mmapped space allocated from system */
  int ordblks;  /* number of free chunks */
  int smblks;   /* number of fastbin blocks */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* space in mmapped regions */
  int usmblks;  /* maximum total allocated space */
  int fsmblks;  /* space available in freed fastbin blocks */
  int uordblks; /* total allocated space */
  int fordblks; /* total free space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};
844 SVID/XPG defines four standard parameter numbers for mallopt,
845 normally defined in malloc.h. Only one of these (M_MXFAST) is used
846 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
847 so setting them has no effect. But this malloc also supports other
848 options in mallopt described below.
853 /* ---------- description of public routines ------------ */
857 Returns a pointer to a newly allocated chunk of at least n bytes, or null
858 if no space is available. Additionally, on failure, errno is
859 set to ENOMEM on ANSI C systems.
If n is zero, malloc returns a minimum-sized chunk. (The minimum
862 size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
863 systems.) On most systems, size_t is an unsigned type, so calls
864 with negative arguments are interpreted as requests for huge amounts
865 of space, which will often fail. The maximum supported value of n
866 differs across systems, but is in all cases less than the maximum
867 representable value of a size_t.
Void_t*  public_mALLOc(size_t);
Void_t*  public_mALLOc();
877 Releases the chunk of memory pointed to by p, that had been previously
878 allocated using malloc or a related routine such as realloc.
879 It has no effect if p is null. It can have arbitrary (i.e., bad!)
880 effects if p has already been freed.
Unless disabled (using mallopt), freeing very large spaces will,
when possible, automatically trigger operations that give
884 back unused memory to the system, thus reducing program footprint.
void     public_fREe(Void_t*);
893 calloc(size_t n_elements, size_t element_size);
Returns a pointer to n_elements * element_size bytes, with all locations
set to zero.
Void_t*  public_cALLOc(size_t, size_t);
Void_t*  public_cALLOc();
904 realloc(Void_t* p, size_t n)
905 Returns a pointer to a chunk of size n that contains the same data
906 as does chunk p up to the minimum of (n, p's size) bytes, or null
907 if no space is available.
909 The returned pointer may or may not be the same as p. The algorithm
910 prefers extending p when possible, otherwise it employs the
911 equivalent of a malloc-copy-free sequence.
913 If p is null, realloc is equivalent to malloc.
915 If space is not available, realloc returns null, errno is set (if on
916 ANSI) and p is NOT freed.
If n is for fewer bytes than already held by p, the newly unused
919 space is lopped off and freed if possible. Unless the #define
920 REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
921 zero (re)allocates a minimum-sized chunk.
923 Large chunks that were internally obtained via mmap will always
924 be reallocated using malloc-copy-free sequences unless
925 the system supports MREMAP (currently only linux).
927 The old unix realloc convention of allowing the last-free'd chunk
928 to be used as an argument to realloc is not supported.
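  For example (an illustrative sketch; grow_buffer is a hypothetical
  helper, not part of this malloc), the usual safe idiom keeps the old
  pointer until the new one is known to be valid, precisely because a
  failed realloc does not free p:

    int grow_buffer(void** pbuf, size_t newsize) {
      void* q = realloc(*pbuf, newsize);
      if (q == 0) return 0;      // *pbuf is still valid and unchanged
      *pbuf = q;
      return 1;
    }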
Void_t*  public_rEALLOc(Void_t*, size_t);
Void_t*  public_rEALLOc();
937 memalign(size_t alignment, size_t n);
938 Returns a pointer to a newly allocated chunk of n bytes, aligned
939 in accord with the alignment argument.
941 The alignment argument should be a power of two. If the argument is
942 not a power of two, the nearest greater power is used.
943 8-byte alignment is guaranteed by normal malloc calls, so don't
944 bother calling memalign with an argument of 8 or less.
946 Overreliance on memalign is a sure way to fragment space.
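  For example (illustrative only), a buffer intended to start on a
  64-byte cache line could be obtained with:

    void* buf = memalign(64, n);   // 64 is a power of two

  rather than by over-allocating with malloc and rounding by hand.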
Void_t*  public_mEMALIGn(size_t, size_t);
Void_t*  public_mEMALIGn();
956 Equivalent to memalign(pagesize, n), where pagesize is the page
957 size of the system. If the pagesize is unknown, 4096 is used.
Void_t*  public_vALLOc(size_t);
Void_t*  public_vALLOc();
968 mallopt(int parameter_number, int parameter_value)
Sets tunable parameters. The format is to provide a
970 (parameter-number, parameter-value) pair. mallopt then sets the
971 corresponding parameter to the argument value if it can (i.e., so
972 long as the value is meaningful), and returns 1 if successful else
973 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
974 normally defined in malloc.h. Only one of these (M_MXFAST) is used
975 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
976 so setting them has no effect. But this malloc also supports four
977 other options in mallopt. See below for details. Briefly, supported
parameters are as follows (listed defaults are for "typical"
configurations).

Symbol            param #   default    allowed param values
M_MXFAST          1         64         0-80  (0 disables fastbins)
M_TRIM_THRESHOLD  -1        128*1024   any   (-1U disables trimming)
M_TOP_PAD         -2        0          any
M_MMAP_THRESHOLD  -3        128*1024   any   (or 0 if no MMAP support)
M_MMAP_MAX        -4        65536      any   (0 disables use of mmap)
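  For example (an illustrative sketch; tune_for_server is a hypothetical
  function), a long-lived program might cap fastbins and lower the trim
  threshold so that freed memory is returned to the system sooner:

    void tune_for_server(void) {
      mallopt(M_MXFAST, 32);                 // cache only chunks <= 32 bytes
      mallopt(M_TRIM_THRESHOLD, 64 * 1024);  // trim when 64K of slack builds up
    }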
989 int public_mALLOPt(int, int);
991 int public_mALLOPt();
997 Returns (by copy) a struct containing various summary statistics:
999 arena: current total non-mmapped bytes allocated from system
1000 ordblks: the number of free chunks
1001 smblks: the number of fastbin blocks (i.e., small chunks that
have been freed but not yet reused or consolidated)
1003 hblks: current number of mmapped regions
1004 hblkhd: total bytes held in mmapped regions
1005 usmblks: the maximum total allocated space. This will be greater
1006 than current total if trimming has occurred.
1007 fsmblks: total bytes held in fastbin blocks
1008 uordblks: current total allocated space (normal or mmapped)
1009 fordblks: total free space
1010 keepcost: the maximum number of bytes that could ideally be released
1011 back to system via malloc_trim. ("ideally" means that
1012 it ignores page restrictions etc.)
1014 Because these fields are ints, but internal bookkeeping may
be kept as longs, the reported values may wrap around zero and
thus be inaccurate.
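  For example (illustrative only), a program could log a summary with:

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "allocated: %d  free: %d  mmapped: %d\n",
            mi.uordblks, mi.fordblks, mi.hblkhd);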
struct mallinfo public_mALLINFo(void);
struct mallinfo public_mALLINFo();
1025 independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);
1027 independent_calloc is similar to calloc, but instead of returning a
1028 single cleared space, it returns an array of pointers to n_elements
independent elements that can hold contents of size element_size, each
1030 of which starts out cleared, and can be independently freed,
1031 realloc'ed etc. The elements are guaranteed to be adjacently
1032 allocated (this is not guaranteed to occur with multiple callocs or
mallocs), which may also improve cache locality in some
applications.
1036 The "chunks" argument is optional (i.e., may be null, which is
1037 probably the most typical usage). If it is null, the returned array
1038 is itself dynamically allocated and should also be freed when it is
1039 no longer needed. Otherwise, the chunks array must be of at least
n_elements in length. It is filled in with the pointers to the
chunks.
1043 In either case, independent_calloc returns this pointer array, or
1044 null if the allocation failed. If n_elements is zero and "chunks"
1045 is null, it returns a chunk representing an array with zero elements
1046 (which should be freed if not wanted).
1048 Each element must be individually freed when it is no longer
1049 needed. If you'd like to instead be able to free all at once, you
1050 should instead use regular calloc and assign pointers into this
1051 space to represent elements. (In this case though, you cannot
1052 independently free elements.)
1054 independent_calloc simplifies and speeds up implementations of many
1055 kinds of pools. It may also be useful when constructing large data
1056 structures that initially have a fixed number of fixed-sized nodes,
1057 but the number is not known at compile time, and some of the nodes
1058 may later need to be freed. For example:
1060 struct Node { int item; struct Node* next; };
struct Node* build_list() {
  struct Node** pool;
  int n = read_number_of_nodes_needed();
  int i;
  if (n <= 0) return 0;
  pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
  if (pool == 0) die();
  // organize into a linked list...
  struct Node* first = pool[0];
  for (i = 0; i < n-1; ++i)
    pool[i]->next = pool[i+1];
  free(pool); // Can now free the array (or not, if it is needed later)
  return first;
}
Void_t** public_iCALLOc(size_t, size_t, Void_t**);
Void_t** public_iCALLOc();
1083 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
1085 independent_comalloc allocates, all at once, a set of n_elements
1086 chunks with sizes indicated in the "sizes" array. It returns
1087 an array of pointers to these elements, each of which can be
1088 independently freed, realloc'ed etc. The elements are guaranteed to
1089 be adjacently allocated (this is not guaranteed to occur with
1090 multiple callocs or mallocs), which may also improve cache locality
1091 in some applications.
1093 The "chunks" argument is optional (i.e., may be null). If it is null
1094 the returned array is itself dynamically allocated and should also
1095 be freed when it is no longer needed. Otherwise, the chunks array
1096 must be of at least n_elements in length. It is filled in with the
1097 pointers to the chunks.
1099 In either case, independent_comalloc returns this pointer array, or
1100 null if the allocation failed. If n_elements is zero and chunks is
1101 null, it returns a chunk representing an array with zero elements
1102 (which should be freed if not wanted).
1104 Each element must be individually freed when it is no longer
1105 needed. If you'd like to instead be able to free all at once, you
1106 should instead use a single regular malloc, and assign pointers at
1107 particular offsets in the aggregate space. (In this case though, you
1108 cannot independently free elements.)
independent_comalloc differs from independent_calloc in that each
1111 element may have a different size, and also that it does not
1112 automatically clear elements.
1114 independent_comalloc can be used to speed up allocation in cases
1115 where several structs or objects must always be allocated at the
1116 same time. For example:
struct Head { ... }
struct Foot { ... }

void send_message(char* msg) {
  int msglen = strlen(msg);
  size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
  void* chunks[3];
  if (independent_comalloc(3, sizes, chunks) == 0)
    die();
  struct Head* head = (struct Head*)(chunks[0]);
  char*        body = (char*)(chunks[1]);
  struct Foot* foot = (struct Foot*)(chunks[2]);
  // ...
}
1133 In general though, independent_comalloc is worth using only for
1134 larger values of n_elements. For small values, you probably won't
1135 detect enough difference from series of malloc calls to bother.
1137 Overuse of independent_comalloc can increase overall memory usage,
1138 since it cannot reuse existing noncontiguous small chunks that
1139 might be available for some of the elements.
Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
Void_t** public_iCOMALLOc();
1150 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1151 round up n to nearest pagesize.
Void_t*  public_pVALLOc(size_t);
Void_t*  public_pVALLOc();
1161 Equivalent to free(p).
1163 cfree is needed/defined on some systems that pair it with calloc,
1164 for odd historical reasons (such as: cfree is used in example
1165 code in the first edition of K&R).
void     public_cFREe(Void_t*);
void     public_cFREe();
1174 malloc_trim(size_t pad);
1176 If possible, gives memory back to the system (via negative
1177 arguments to sbrk) if there is unused memory at the `high' end of
1178 the malloc pool. You can call this after freeing large blocks of
1179 memory to potentially reduce the system-level memory requirements
1180 of a program. However, it cannot guarantee to reduce memory. Under
1181 some allocation patterns, some large free blocks of memory will be
locked between two used chunks, so they cannot be given back to
the system.
1185 The `pad' argument to malloc_trim represents the amount of free
1186 trailing space to leave untrimmed. If this argument is zero,
1187 only the minimum amount of memory to maintain internal data
1188 structures will be left (one page or less). Non-zero arguments
1189 can be supplied to maintain enough trailing space to service
future expected allocations without having to re-obtain memory
from the system.
1193 Malloc_trim returns 1 if it actually released any memory, else 0.
On systems that do not support "negative sbrks", it will always
return 0.
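  For example (an illustrative sketch; release_phase_memory is a
  hypothetical helper), a program that has just torn down a large data
  structure can ask for unused top-of-heap memory to be returned while
  keeping 64K of slack for upcoming allocations:

    void release_phase_memory(void** blocks, int n) {
      int i;
      for (i = 0; i < n; ++i) free(blocks[i]);
      malloc_trim(64 * 1024);
    }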
1198 int public_mTRIm(size_t);
1204 malloc_usable_size(Void_t* p);
1206 Returns the number of bytes you can actually use in
1207 an allocated chunk, which may be more than you requested (although
1208 often not) due to alignment and minimum size constraints.
1209 You can use this many bytes without worrying about
1210 overwriting other allocated objects. This is not a particularly great
1211 programming practice. malloc_usable_size can be more useful in
1212 debugging and assertions, for example:
p = malloc(n);
assert(malloc_usable_size(p) >= 256);
size_t   public_mUSABLe(Void_t*);
size_t   public_mUSABLe();
1226 Prints on stderr the amount of space obtained from the system (both
1227 via sbrk and mmap), the maximum amount (which may be more than
1228 current if malloc_trim and/or munmap got called), and the current
1229 number of bytes allocated via malloc (or realloc, etc) but not yet
1230 freed. Note that this is the number of bytes allocated, not the
1231 number requested. It will be larger than the number requested
1232 because of alignment and bookkeeping overhead. Because it includes
1233 alignment wastage as being in use, this figure may be greater than
1234 zero even when no user-level chunks are allocated.
1236 The reported current and maximum system memory can be inaccurate if
1237 a program makes other calls to system memory allocation functions
1238 (normally sbrk) outside of malloc.
1240 malloc_stats prints only the most commonly interesting statistics.
1241 More information can be obtained by calling mallinfo.
1245 void public_mSTATs(void);
1247 void public_mSTATs();
1250 /* mallopt tuning options */
1253 M_MXFAST is the maximum request size used for "fastbins", special bins
1254 that hold returned chunks without consolidating their spaces. This
1255 enables future requests for chunks of the same size to be handled
1256 very quickly, but can increase fragmentation, and thus increase the
1257 overall memory footprint of a program.
1259 This malloc manages fastbins very conservatively yet still
1260 efficiently, so fragmentation is rarely a problem for values less
1261 than or equal to the default. The maximum supported value of MXFAST
1262 is 80. You wouldn't want it any higher than this anyway. Fastbins
1263 are designed especially for use with many small structs, objects or
1264 strings -- the default handles structs/objects/arrays with sizes up
1265 to 8 4byte fields, or small strings representing words, tokens,
1266 etc. Using fastbins for larger objects normally worsens
1267 fragmentation without improving speed.
1269 M_MXFAST is set in REQUEST size units. It is internally used in
1270 chunksize units, which adds padding and alignment. You can reduce
1271 M_MXFAST to 0 to disable all use of fastbins. This causes the malloc
1272 algorithm to be a closer approximation of fifo-best-fit in all cases,
not just for larger requests, but will generally cause it to be
slower.
1278 /* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
1283 #ifndef DEFAULT_MXFAST
1284 #define DEFAULT_MXFAST 64
1289 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
1290 to keep before releasing via malloc_trim in free().
1292 Automatic trimming is mainly useful in long-lived programs.
1293 Because trimming via sbrk can be slow on some systems, and can
1294 sometimes be wasteful (in cases where programs immediately
1295 afterward allocate more large chunks) the value should be high
1296 enough so that your overall system performance would improve by
1297 releasing this much memory.
1299 The trim threshold and the mmap control parameters (see below)
1300 can be traded off with one another. Trimming and mmapping are
1301 two different ways of releasing unused memory back to the
1302 system. Between these two, it is often possible to keep
1303 system-level demands of a long-lived program down to a bare
1304 minimum. For example, in one test suite of sessions measuring
1305 the XF86 X server on Linux, using a trim threshold of 128K and a
mmap threshold of 192K led to near-minimal long term resource
consumption.
1309 If you are using this malloc in a long-lived program, it should
1310 pay to experiment with these values. As a rough guide, you
1311 might set to a value close to the average size of a process
1312 (program) running on your system. Releasing this much memory
1313 would allow such a process to run in memory. Generally, it's
worth it to tune for trimming rather than memory mapping when a
1315 program undergoes phases where several large chunks are
1316 allocated and released in ways that can reuse each other's
1317 storage, perhaps mixed with phases where there are no such
1318 chunks at all. And in well-behaved long-lived programs,
1319 controlling release of large blocks via trimming versus mapping
1322 However, in most programs, these parameters serve mainly as
1323 protection against the system-level effects of carrying around
1324 massive amounts of unneeded memory. Since frequent calls to
1325 sbrk, mmap, and munmap otherwise degrade performance, the default
parameters are set to relatively high values that serve only as
safeguards.
The trim value must be greater than page size to have any useful
effect.  To disable trimming completely, you can set it to
(unsigned long)(-1).
1333 Trim settings interact with fastbin (MXFAST) settings: Unless
1334 TRIM_FASTBINS is defined, automatic trimming never takes place upon
1335 freeing a chunk with size less than or equal to MXFAST. Trimming is
1336 instead delayed until subsequent freeing of larger chunks. However,
1337 you can still force an attempted trim by calling malloc_trim.
1339 Also, trimming is not generally possible in cases where
1340 the main arena is obtained via mmap.
1342 Note that the trick some people use of mallocing a huge space and
1343 then freeing it at program startup, in an attempt to reserve system
1344 memory, doesn't have the intended effect under automatic trimming,
1345 since that memory will immediately be returned to the system.
1348 #define M_TRIM_THRESHOLD -1
1350 #ifndef DEFAULT_TRIM_THRESHOLD
1351 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
1355 M_TOP_PAD is the amount of extra `padding' space to allocate or
1356 retain whenever sbrk is called. It is used in two ways internally:
1358 * When sbrk is called to extend the top of the arena to satisfy
1359 a new malloc request, this much padding is added to the sbrk
1362 * When malloc_trim is called automatically from free(),
1363 it is used as the `pad' argument.
1365 In both cases, the actual amount of padding is rounded
1366 so that the end of the arena is always a system page boundary.
1368 The main reason for using padding is to avoid calling sbrk so
1369 often. Having even a small pad greatly reduces the likelihood
1370 that nearly every malloc request during program start-up (or
after trimming) will invoke sbrk, which needlessly wastes
time.
1374 Automatic rounding-up to page-size units is normally sufficient
1375 to avoid measurable overhead, so the default is 0. However, in
1376 systems where sbrk is relatively slow, it can pay to increase
this value, at the expense of carrying around more memory than
is needed.
1381 #define M_TOP_PAD -2
1383 #ifndef DEFAULT_TOP_PAD
1384 #define DEFAULT_TOP_PAD (0)
1388 M_MMAP_THRESHOLD is the request size threshold for using mmap()
1389 to service a request. Requests of at least this size that cannot
1390 be allocated using already-existing space will be serviced via mmap.
1391 (If enough normal freed space already exists it is used instead.)
1393 Using mmap segregates relatively large chunks of memory so that
1394 they can be individually obtained and released from the host
1395 system. A request serviced through mmap is never reused by any
1396 other request (at least not directly; the system may just so
1397 happen to remap successive requests to the same locations).
1399 Segregating space in this way has the benefits that:
1401 1. Mmapped space can ALWAYS be individually released back
1402 to the system, which helps keep the system level memory
1403 demands of a long-lived program low.
1404 2. Mapped memory can never become `locked' between
1405 other chunks, as can happen with normally allocated chunks, which
1406 means that even trimming via malloc_trim would not release them.
1407 3. On some systems with "holes" in address spaces, mmap can obtain
1408 memory that sbrk cannot.
1410 However, it has the disadvantages that:
1412 1. The space cannot be reclaimed, consolidated, and then
1413 used to service later requests, as happens with normal chunks.
2. It can lead to more wastage because of mmap page alignment
   requirements.
1416 3. It causes malloc performance to be more dependent on host
1417 system memory management support routines which may vary in
1418 implementation quality and may impose arbitrary
1419 limitations. Generally, servicing a request via normal
1420 malloc steps is faster than going through a system's mmap.
1422 The advantages of mmap nearly always outweigh disadvantages for
1423 "large" chunks, but the value of "large" varies across systems. The
default is an empirically derived value that works well in most
systems.
1428 #define M_MMAP_THRESHOLD -3
1430 #ifndef DEFAULT_MMAP_THRESHOLD
1431 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
1435 M_MMAP_MAX is the maximum number of requests to simultaneously
1436 service using mmap. This parameter exists because
1437 . Some systems have a limited number of internal tables for
use by mmap, and using more than a few of them may degrade
performance.
1441 The default is set to a value that serves only as a safeguard.
1442 Setting to 0 disables use of mmap for servicing large requests. If
1443 HAVE_MMAP is not set, the default value is 0, and attempts to set it
1444 to non-zero values in mallopt will fail.
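  For example (illustrative only), a program whose large buffers should
  always be individually returned to the system might lower the mmap
  threshold and raise the mapping cap:

    mallopt(M_MMAP_THRESHOLD, 64 * 1024);  // mmap requests of 64K and up
    mallopt(M_MMAP_MAX, 1024);             // allow up to 1024 simultaneous mappings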
1447 #define M_MMAP_MAX -4
1449 #ifndef DEFAULT_MMAP_MAX
1451 #define DEFAULT_MMAP_MAX (65536)
1453 #define DEFAULT_MMAP_MAX (0)
1458 }; /* end of extern "C" */
1462 ========================================================================
1463 To make a fully customizable malloc.h header file, cut everything
1464 above this line, put into file malloc.h, edit to suit, and #include it
1465 on the next line, as well as in programs that use this malloc.
1466 ========================================================================
1469 /* #include "malloc.h" */
1471 /* --------------------- public wrappers ---------------------- */
1473 #ifdef USE_PUBLIC_MALLOC_WRAPPERS
1475 /* Declare all routines as internal */
static Void_t*  mALLOc(size_t);
static void     fREe(Void_t*);
static Void_t*  rEALLOc(Void_t*, size_t);
static Void_t*  mEMALIGn(size_t, size_t);
static Void_t*  vALLOc(size_t);
static Void_t*  pVALLOc(size_t);
static Void_t*  cALLOc(size_t, size_t);
static Void_t** iCALLOc(size_t, size_t, Void_t**);
static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
static void     cFREe(Void_t*);
static int      mTRIm(size_t);
static size_t   mUSABLe(Void_t*);
static void     mSTATs();
static int      mALLOPt(int, int);
static struct mallinfo mALLINFo(void);
static Void_t*  mALLOc();
static void     fREe();
static Void_t*  rEALLOc();
static Void_t*  mEMALIGn();
static Void_t*  vALLOc();
static Void_t*  pVALLOc();
static Void_t*  cALLOc();
static Void_t** iCALLOc();
static Void_t** iCOMALLOc();
static void     cFREe();
static int      mTRIm();
static size_t   mUSABLe();
static void     mSTATs();
static int      mALLOPt();
static struct mallinfo mALLINFo();
1511 MALLOC_PREACTION and MALLOC_POSTACTION should be
1512 defined to return 0 on success, and nonzero on failure.
1513 The return value of MALLOC_POSTACTION is currently ignored
1514 in wrapper functions since there is no reasonable default
1515 action to take on failure.
1519 #ifdef USE_MALLOC_LOCK
static int mALLOC_MUTEx;
1524 #define MALLOC_PREACTION slwait(&mALLOC_MUTEx)
1525 #define MALLOC_POSTACTION slrelease(&mALLOC_MUTEx)
1529 #include <pthread.h>
static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
1533 #define MALLOC_PREACTION pthread_mutex_lock(&mALLOC_MUTEx)
1534 #define MALLOC_POSTACTION pthread_mutex_unlock(&mALLOC_MUTEx)
1536 #endif /* USE_MALLOC_LOCK */
1540 /* Substitute anything you like for these */
1542 #define MALLOC_PREACTION (0)
1543 #define MALLOC_POSTACTION (0)
Void_t* public_mALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = mALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

void public_fREe(Void_t* m) {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  fREe(m);
  if (MALLOC_POSTACTION != 0) {
  }
}

Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = rEALLOc(m, bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = mEMALIGn(alignment, bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_vALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = vALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_pVALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = pVALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_cALLOc(size_t n, size_t elem_size) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = cALLOc(n, elem_size);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
  Void_t** m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = iCALLOc(n, elem_size, chunks);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
  Void_t** m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = iCOMALLOc(n, sizes, chunks);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

void public_cFREe(Void_t* m) {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  cFREe(m);
  if (MALLOC_POSTACTION != 0) {
  }
}

int public_mTRIm(size_t s) {
  int result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mTRIm(s);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

size_t public_mUSABLe(Void_t* m) {
  size_t result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mUSABLe(m);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

void public_mSTATs() {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  mSTATs();
  if (MALLOC_POSTACTION != 0) {
  }
}

struct mallinfo public_mALLINFo() {
  struct mallinfo m;
  if (MALLOC_PREACTION != 0) {
    struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    return nm;
  }
  m = mALLINFo();
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

int public_mALLOPt(int p, int v) {
  int result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mALLOPt(p, v);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}
1711 /* ------------- Optional versions of memcopy ---------------- */
1717 Note: memcpy is ONLY invoked with non-overlapping regions,
1718 so the (usually slower) memmove is not needed.
1721 #define MALLOC_COPY(dest, src, nbytes) memcpy(dest, src, nbytes)
1722 #define MALLOC_ZERO(dest, nbytes) memset(dest, 0, nbytes)
1724 #else /* !USE_MEMCPY */
1726 /* Use Duff's device for good zeroing/copying performance. */
#define MALLOC_ZERO(charp, nbytes)                                    \
do {                                                                  \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                   \
  unsigned long  mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);            \
  long mcn;                                                           \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }     \
  switch (mctmp) {                                                    \
    case 0: for(;;) { *mzp++ = 0;                                     \
    case 7:           *mzp++ = 0;                                     \
    case 6:           *mzp++ = 0;                                     \
    case 5:           *mzp++ = 0;                                     \
    case 4:           *mzp++ = 0;                                     \
    case 3:           *mzp++ = 0;                                     \
    case 2:           *mzp++ = 0;                                     \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }        \
  }                                                                   \
} while(0)
#define MALLOC_COPY(dest,src,nbytes)                                  \
do {                                                                  \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                    \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                   \
  unsigned long  mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);            \
  long mcn;                                                           \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }     \
  switch (mctmp) {                                                    \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                            \
    case 7:           *mcdst++ = *mcsrc++;                            \
    case 6:           *mcdst++ = *mcsrc++;                            \
    case 5:           *mcdst++ = *mcsrc++;                            \
    case 4:           *mcdst++ = *mcsrc++;                            \
    case 3:           *mcdst++ = *mcsrc++;                            \
    case 2:           *mcdst++ = *mcsrc++;                            \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
  }                                                                   \
} while(0)
1767 /* ------------------ MMAP support ------------------ */
1773 #ifndef LACKS_SYS_MMAN_H
1774 #include <sys/mman.h>
1777 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1778 #define MAP_ANONYMOUS MAP_ANON
1782 Nearly all versions of mmap support MAP_ANONYMOUS,
1783 so the following is unlikely to be needed, but is
1784 supplied just in case.
1787 #ifndef MAP_ANONYMOUS
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1791 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1792 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1793 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1794 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1798 #define MMAP(addr, size, prot, flags) \
1799 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1804 #endif /* HAVE_MMAP */
1808 ----------------------- Chunk representations -----------------------
1813 This struct declaration is misleading (but accurate and necessary).
1814 It declares a "view" into memory allowing access to necessary
1815 fields at known offsets from a given base. See explanation below.
struct malloc_chunk {

  INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free).  */
  INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;
1831 malloc_chunk details:
1833 (The following includes lightly edited explanations by Colin Plumb.)
1835 Chunks of memory are maintained using a `boundary tag' method as
1836 described in e.g., Knuth or Standish. (See the paper by Paul
1837 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1838 survey of such techniques.) Sizes of free chunks are stored both
1839 in the front of each chunk and at the end. This makes
1840 consolidating fragmented chunks into bigger chunks very fast. The
1841 size fields also hold bits representing whether chunks are free or
1844 An allocated chunk looks like this:
1847 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1848 | Size of previous chunk, if allocated | |
1849 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1850 | Size of chunk, in bytes |P|
1851 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1852 | User data starts here... .
1854 . (malloc_usable_space() bytes) .
1856 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1858 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1861 Where "chunk" is the front of the chunk for the purpose of most of
1862 the malloc code, but "mem" is the pointer that is returned to the
1863 user. "Nextchunk" is the beginning of the next contiguous chunk.
1865 Chunks always begin on even word boundaries, so the mem portion
1866 (which is returned to the user) is also on an even word boundary, and
1867 thus at least double-word aligned.
1869 Free chunks are stored in circular doubly-linked lists, and look like this:
1871 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1872 | Size of previous chunk |
1873 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1874 `head:' | Size of chunk, in bytes |P|
1875 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1876 | Forward pointer to next chunk in list |
1877 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1878 | Back pointer to previous chunk in list |
1879 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1880 | Unused space (may be 0 bytes long) .
1883 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1884 `foot:' | Size of chunk, in bytes |
1885 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1887 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1888 chunk size (which is always a multiple of two words), is an in-use
1889 bit for the *previous* chunk. If that bit is *clear*, then the
1890 word before the current chunk size contains the previous chunk
1891 size, and can be used to find the front of the previous chunk.
1892 The very first chunk allocated always has this bit set,
1893 preventing access to non-existent (or non-owned) memory. If
1894 prev_inuse is set for any given chunk, then you CANNOT determine
1895 the size of the previous chunk, and might even get a memory
1896 addressing fault when trying to do so.
1898 Note that the `foot' of the current chunk is actually represented
1899 as the prev_size of the NEXT chunk. This makes it easier to
1900 deal with alignments etc but can be very confusing when trying
1901 to extend or adapt this code.
1903 The two exceptions to all this are
1905 1. The special chunk `top' doesn't bother using the
1906 trailing size field since there is no next contiguous chunk
1907 that would have to index off it. After initialization, `top'
1908 is forced to always exist. If it would become less than
1909 MINSIZE bytes long, it is replenished.
1911 2. Chunks allocated via mmap, which have the second-lowest-order
1912 bit (IS_MMAPPED) set in their size fields. Because they are
1913 allocated one-by-one, each must contain its own trailing size field.
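/*
  Illustrative sketch, not part of the original malloc.c: reading the
  boundary-tag layout drawn above by hand.  The helper name is hypothetical;
  the offsets follow the diagrams (two size words of header, flag bits kept
  in the low bits of `size').
*/
#if 0
static void* user_mem_of(struct malloc_chunk* p, struct malloc_chunk** next_out)
{
  INTERNAL_SIZE_T sz = p->size & ~(INTERNAL_SIZE_T)0x3;  /* strip P/mmap bits */
  *next_out = (struct malloc_chunk*)((char*)p + sz);     /* "nextchunk" above */
  if (!(p->size & 0x1)) {
    /* PREV_INUSE clear: the word before this chunk holds the previous size */
    struct malloc_chunk* prev = (struct malloc_chunk*)((char*)p - p->prev_size);
    (void)prev;
  }
  return (char*)p + 2*sizeof(INTERNAL_SIZE_T);           /* "mem" given to user */
}
#endif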
1918 ---------- Size and alignment checks and conversions ----------
1921 /* conversion from malloc headers to user pointers, and back */
1923 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1924 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1926 /* The smallest possible chunk */
1927 #define MIN_CHUNK_SIZE (sizeof(struct malloc_chunk))
1929 /* The smallest size we can malloc is an aligned minimal chunk */
1931 #define MINSIZE \
1932 (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
1934 /* Check if m has acceptable alignment */
1936 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1940 Check if a request is so large that it would wrap around zero when
1941 padded and aligned. To simplify some other code, the bound is made
1942 low enough so that adding MINSIZE will also not wrap around zero.
1945 #define REQUEST_OUT_OF_RANGE(req) \
1946 ((unsigned long)(req) >= \
1947 (unsigned long)(INTERNAL_SIZE_T)(-2 * MINSIZE))
1949 /* pad request bytes into a usable size -- internal version */
1951 #define request2size(req) \
1952 (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \
1953 MINSIZE : \
1954 ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
1956 /* Same, except also perform argument check */
1958 #define checked_request2size(req, sz) \
1959 if (REQUEST_OUT_OF_RANGE(req)) { \
1960 MALLOC_FAILURE_ACTION; \
1963 (sz) = request2size(req);
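/*
  Illustrative worked example, not part of the original malloc.c: what
  request2size produces assuming 4-byte SIZE_SZ, 8-byte alignment
  (MALLOC_ALIGN_MASK == 7) and MINSIZE == 16.
*/
#if 0
#include <assert.h>
static void request2size_examples(void)
{
  assert(request2size(1)  == 16);   /* too small: rounded up to MINSIZE       */
  assert(request2size(12) == 16);   /* 12 + 4 bytes overhead, already aligned */
  assert(request2size(13) == 24);   /* 13 + 4 = 17, rounded up to next 8      */
  assert(request2size(20) == 24);   /* 20 + 4 = 24, already a multiple of 8   */
  assert(request2size(25) == 32);   /* 25 + 4 = 29, rounded up to 32          */
}
#endif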
1966 --------------- Physical chunk operations ---------------
1970 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1971 #define PREV_INUSE 0x1
1973 /* extract inuse bit of previous chunk */
1974 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1977 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1978 #define IS_MMAPPED 0x2
1980 /* check for mmap()'ed chunk */
1981 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1984 Bits to mask off when extracting size
1986 Note: IS_MMAPPED is intentionally not masked off from size field in
1987 macros for which mmapped chunks should never be seen. This should
1988 cause helpful core dumps to occur if it is tried by accident by
1989 people extending or adapting this malloc.
1991 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1993 /* Get size, ignoring use bits */
1994 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1997 /* Ptr to next physical malloc_chunk. */
1998 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
2000 /* Ptr to previous physical malloc_chunk */
2001 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
2003 /* Treat space at ptr + offset as a chunk */
2004 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2006 /* extract p's inuse bit */
2007 #define inuse(p)\
2008 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
2010 /* set/clear chunk as being inuse without otherwise disturbing */
2011 #define set_inuse(p)\
2012 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
2014 #define clear_inuse(p)\
2015 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
2018 /* check/set/clear inuse bits in known places */
2019 #define inuse_bit_at_offset(p, s)\
2020 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
2022 #define set_inuse_bit_at_offset(p, s)\
2023 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
2025 #define clear_inuse_bit_at_offset(p, s)\
2026 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
2029 /* Set size at head, without disturbing its use bit */
2030 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
2032 /* Set size/use field */
2033 #define set_head(p, s) ((p)->size = (s))
2035 /* Set size at footer (only when chunk is not in use) */
2036 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
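/*
  Illustrative sketch, not part of the original malloc.c: how the macros above
  combine to inspect a chunk's physical neighbours, in the style used later by
  free().  The helper name is hypothetical.
*/
#if 0
static void inspect_neighbours(mchunkptr p)
{
  mchunkptr next = next_chunk(p);                 /* physically following chunk */

  if (!prev_inuse(p)) {                           /* previous chunk is free, so */
    mchunkptr prev = prev_chunk(p);               /* p->prev_size is valid      */
    (void)prev;                                   /* free() would merge here    */
  }
  if (!inuse_bit_at_offset(next, chunksize(next))) {
    /* the next chunk is free as well; free() would also merge forward */
  }
}
#endif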
2040 -------------------- Internal data structures --------------------
2042 All internal state is held in an instance of malloc_state defined
2043 below. There are no other static variables, except in two optional cases:
2045 * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
2046 * If HAVE_MMAP is true, but mmap doesn't support
2047 MAP_ANONYMOUS, a dummy file descriptor for mmap.
2049 Beware of lots of tricks that minimize the total bookkeeping space
2050 requirements. The result is a little over 1K bytes (for 4byte
2051 pointers and size_t.)
2057 An array of bin headers for free chunks. Each bin is doubly
2058 linked. The bins are approximately proportionally (log) spaced.
2059 There are a lot of these bins (128). This may look excessive, but
2060 works very well in practice. Most bins hold sizes that are
2061 unusual as malloc request sizes, but are more usual for fragments
2062 and consolidated sets of chunks, which is what these bins hold, so
2063 they can be found quickly. All procedures maintain the invariant
2064 that no consolidated chunk physically borders another one, so each
2065 chunk in a list is known to be preceded and followed by either
2066 inuse chunks or the ends of memory.
2068 Chunks in bins are kept in size order, with ties going to the
2069 approximately least recently used chunk. Ordering isn't needed
2070 for the small bins, which all contain the same-sized chunks, but
2071 facilitates best-fit allocation for larger chunks. These lists
2072 are just sequential. Keeping them in order almost never requires
2073 enough traversal to warrant using fancier ordered data structures.
2076 Chunks of the same size are linked with the most
2077 recently freed at the front, and allocations are taken from the
2078 back. This results in LRU (FIFO) allocation order, which tends
2079 to give each chunk an equal opportunity to be consolidated with
2080 adjacent freed chunks, resulting in larger free chunks and less fragmentation.
2083 To simplify use in double-linked lists, each bin header acts
2084 as a malloc_chunk. This avoids special-casing for headers.
2085 But to conserve space and improve locality, we allocate
2086 only the fd/bk pointers of bins, and then use repositioning tricks
2087 to treat these as the fields of a malloc_chunk*.
2090 typedef struct malloc_chunk* mbinptr;
2092 /* addressing -- note that bin_at(0) does not exist */
2093 #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
2095 /* analog of ++bin */
2096 #define next_bin(b) ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))
2098 /* Reminders about list directionality within bins */
2099 #define first(b) ((b)->fd)
2100 #define last(b) ((b)->bk)
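/*
  Illustrative sketch, not part of the original malloc.c: what the bin_at()
  repositioning trick above buys.  The fake chunk returned by bin_at(m, i) is
  placed 2*SIZE_SZ bytes before bins[2*i], so its fd/bk fields alias bins[2*i]
  and bins[2*i+1]; its prev_size/size fields are never touched.  The check
  assumes fd sits exactly 2*SIZE_SZ bytes into struct malloc_chunk (no padding
  before the pointers).
*/
#if 0
#include <assert.h>
static void bin_at_layout_example(mstate m)
{
  mbinptr b = bin_at(m, 1);                       /* header of the unsorted bin */
  assert((char*)&(b->fd) == (char*)&(m->bins[2]));
  assert((char*)&(b->bk) == (char*)&(m->bins[3]));
}
#endif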
2102 /* Take a chunk off a bin list */
2103 #define unlink(P, BK, FD) { \
2104 FD = P->fd;                 \
2105 BK = P->bk;                 \
2106 FD->bk = BK;                \
2107 BK->fd = FD;                \
2108 }
2113 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
2114 8 bytes apart. Larger bins are approximately logarithmically spaced:
2120 4 bins of size 32768
2121 2 bins of size 262144
2122 1 bin of size what's left
2124 There is actually a little bit of slop in the numbers in bin_index
2125 for the sake of speed. This makes no difference elsewhere.
2127 The bins top out around 1MB because we expect to service large requests via mmap.
2132 #define NSMALLBINS 64
2133 #define SMALLBIN_WIDTH 8
2134 #define MIN_LARGE_SIZE 512
2136 #define in_smallbin_range(sz) \
2137 ((unsigned long)(sz) < (unsigned long)MIN_LARGE_SIZE)
2139 #define smallbin_index(sz) (((unsigned)(sz)) >> 3)
2141 #define largebin_index(sz) \
2142 (((((unsigned long)(sz)) >> 6) <= 32)? 56 + (((unsigned long)(sz)) >> 6): \
2143 ((((unsigned long)(sz)) >> 9) <= 20)? 91 + (((unsigned long)(sz)) >> 9): \
2144 ((((unsigned long)(sz)) >> 12) <= 10)? 110 + (((unsigned long)(sz)) >> 12): \
2145 ((((unsigned long)(sz)) >> 15) <= 4)? 119 + (((unsigned long)(sz)) >> 15): \
2146 ((((unsigned long)(sz)) >> 18) <= 2)? 124 + (((unsigned long)(sz)) >> 18): \
2147 126)
2149 #define bin_index(sz) \
2150 ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
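/*
  Illustrative worked example, not part of the original malloc.c: bin indices
  produced by the macros above for a few chunk sizes.
*/
#if 0
#include <assert.h>
static void bin_index_examples(void)
{
  assert(bin_index(16)     == 2);    /* small bin: 16 >> 3               */
  assert(bin_index(504)    == 63);   /* last small bin: 504 >> 3         */
  assert(bin_index(512)    == 64);   /* first large bin: 56 + (512 >> 6) */
  assert(bin_index(2048)   == 88);   /* 56 + (2048 >> 6)                 */
  assert(bin_index(16384)  == 114);  /* 110 + (16384 >> 12)              */
  assert(bin_index(131072) == 123);  /* 119 + (131072 >> 15)             */
}
#endif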
2156 All remainders from chunk splits, as well as all returned chunks,
2157 are first placed in the "unsorted" bin. They are then placed
2158 in regular bins after malloc gives them ONE chance to be used before
2159 binning. So, basically, the unsorted_chunks list acts as a queue,
2160 with chunks being placed on it in free (and malloc_consolidate),
2161 and taken off (to be either used or placed in bins) in malloc.
2164 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
2165 #define unsorted_chunks(M) (bin_at(M, 1))
2170 The top-most available chunk (i.e., the one bordering the end of
2171 available memory) is treated specially. It is never included in
2172 any bin, is used only if no other chunk is available, and is
2173 released back to the system if it is very large (see
2174 M_TRIM_THRESHOLD). Because top initially
2175 points to its own bin with initial zero size, thus forcing
2176 extension on the first malloc request, we avoid having any special
2177 code in malloc to check whether it even exists yet. But we still
2178 need to do so when getting memory from system, so we make
2179 initial_top treat the bin as a legal but unusable chunk during the
2180 interval between initialization and the first call to
2181 sYSMALLOc. (This is somewhat delicate, since it relies on
2182 the 2 preceding words to be zero during this interval as well.)
2185 /* Conveniently, the unsorted bin can be used as dummy top on first call */
2186 #define initial_top(M) (unsorted_chunks(M))
2191 To help compensate for the large number of bins, a one-level index
2192 structure is used for bin-by-bin searching. `binmap' is a
2193 bitvector recording whether bins are definitely empty so they can
2194 be skipped over during traversals. The bits are NOT always
2195 cleared as soon as bins are empty, but instead only
2196 when they are noticed to be empty during traversal in malloc.
2199 /* Conservatively use 32 bits per map word, even if on 64bit system */
2200 #define BINMAPSHIFT 5
2201 #define BITSPERMAP (1U << BINMAPSHIFT)
2202 #define BINMAPSIZE (NBINS / BITSPERMAP)
2204 #define idx2block(i) ((i) >> BINMAPSHIFT)
2205 #define idx2bit(i) ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
2207 #define mark_bin(m,i) ((m)->binmap[idx2block(i)] |= idx2bit(i))
2208 #define unmark_bin(m,i) ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
2209 #define get_binmap(m,i) ((m)->binmap[idx2block(i)] & idx2bit(i))
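/*
  Illustrative worked example, not part of the original malloc.c: how a bin
  index maps onto the binmap words and bits defined above.
*/
#if 0
#include <assert.h>
static void binmap_example(mstate m)
{
  assert(idx2block(70) == 2);          /* 70 >> 5                              */
  assert(idx2bit(70)   == (1U << 6));  /* 70 & 31 == 6                         */
  mark_bin(m, 70);                     /* bin 70 may now be non-empty          */
  assert(get_binmap(m, 70) != 0);
  unmark_bin(m, 70);                   /* malloc itself clears bits lazily,    */
                                       /* only when a bin is seen to be empty  */
}
#endif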
2214 An array of lists holding recently freed small chunks. Fastbins
2215 are not doubly linked. It is faster to single-link them, and
2216 since chunks are never removed from the middles of these lists,
2217 double linking is not necessary. Also, unlike regular bins, they
2218 are not even processed in FIFO order (they use faster LIFO) since
2219 ordering doesn't much matter in the transient contexts in which
2220 fastbins are normally used.
2222 Chunks in fastbins keep their inuse bit set, so they cannot
2223 be consolidated with other free chunks. malloc_consolidate
2224 releases all chunks in fastbins and consolidates them with other chunks.
2228 typedef struct malloc_chunk* mfastbinptr;
2230 /* offset 2 to use otherwise unindexable first 2 bins */
2231 #define fastbin_index(sz) ((((unsigned int)(sz)) >> 3) - 2)
2233 /* The maximum fastbin request size we support */
2234 #define MAX_FAST_SIZE 80
2236 #define NFASTBINS (fastbin_index(request2size(MAX_FAST_SIZE))+1)
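/*
  Illustrative worked example, not part of the original malloc.c: fastbin
  indexing for a few chunk sizes, assuming 4-byte SIZE_SZ (in which case
  request2size(MAX_FAST_SIZE) == 88 and NFASTBINS == 10).
*/
#if 0
#include <assert.h>
static void fastbin_index_examples(void)
{
  assert(fastbin_index(16) == 0);     /* smallest chunk: (16 >> 3) - 2        */
  assert(fastbin_index(24) == 1);
  assert(fastbin_index(32) == 2);
  assert(fastbin_index(88) == 9);     /* largest fastbin-eligible chunk here  */
}
#endif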
2239 FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
2240 that triggers automatic consolidation of possibly-surrounding
2241 fastbin chunks. This is a heuristic, so the exact value should not
2242 matter too much. It is defined at half the default trim threshold as a
2243 compromise heuristic to only attempt consolidation if it is likely
2244 to lead to trimming. However, it is not dynamically tunable, since
2245 consolidation reduces fragmentation surrounding large chunks even
2246 if trimming is not used.
2249 #define FASTBIN_CONSOLIDATION_THRESHOLD (65536UL)
2252 Since the lowest 2 bits in max_fast don't matter in size comparisons,
2253 they are used as flags.
2257 FASTCHUNKS_BIT held in max_fast indicates that there are probably
2258 some fastbin chunks. It is set true on entering a chunk into any
2259 fastbin, and cleared only in malloc_consolidate.
2261 The truth value is inverted so that have_fastchunks will be true
2262 upon startup (since statics are zero-filled), simplifying
2263 initialization checks.
2266 #define FASTCHUNKS_BIT (1U)
2268 #define have_fastchunks(M) (((M)->max_fast & FASTCHUNKS_BIT) == 0)
2269 #define clear_fastchunks(M) ((M)->max_fast |= FASTCHUNKS_BIT)
2270 #define set_fastchunks(M) ((M)->max_fast &= ~FASTCHUNKS_BIT)
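/*
  Illustrative worked example, not part of the original malloc.c: the inverted
  encoding of the fastchunks flag.  With max_fast zero-initialized (as in a
  zero-filled static malloc_state), have_fastchunks() is already true, so the
  first pass through free()/malloc_consolidate triggers initialization.
*/
#if 0
#include <assert.h>
static void fastchunks_flag_example(mstate m)
{
  m->max_fast = 0;                 /* as after zero-fill                       */
  assert(have_fastchunks(m));      /* bit clear => "may have fast chunks"      */
  clear_fastchunks(m);             /* sets FASTCHUNKS_BIT                      */
  assert(!have_fastchunks(m));
  set_fastchunks(m);               /* clears the bit again                     */
  assert(have_fastchunks(m));
}
#endif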
2273 NONCONTIGUOUS_BIT indicates that MORECORE does not return contiguous
2274 regions. Otherwise, contiguity is exploited in merging together,
2275 when possible, results from consecutive MORECORE calls.
2277 The initial value comes from MORECORE_CONTIGUOUS, but is
2278 changed dynamically if mmap is ever used as an sbrk substitute.
2281 #define NONCONTIGUOUS_BIT (2U)
2283 #define contiguous(M) (((M)->max_fast & NONCONTIGUOUS_BIT) == 0)
2284 #define noncontiguous(M) (((M)->max_fast & NONCONTIGUOUS_BIT) != 0)
2285 #define set_noncontiguous(M) ((M)->max_fast |= NONCONTIGUOUS_BIT)
2286 #define set_contiguous(M) ((M)->max_fast &= ~NONCONTIGUOUS_BIT)
2289 Set value of max_fast.
2290 Use impossibly small value if 0.
2291 Precondition: there are no existing fastbin chunks.
2292 Setting the value clears fastchunk bit but preserves noncontiguous bit.
2295 #define set_max_fast(M, s) \
2296 (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
2297 FASTCHUNKS_BIT | \
2298 ((M)->max_fast & NONCONTIGUOUS_BIT)
2302 ----------- Internal state representation and initialization -----------
2305 struct malloc_state {
2307 /* The maximum chunk size to be eligible for fastbin */
2308 INTERNAL_SIZE_T  max_fast;   /* low 2 bits used as flags */
2311 mfastbinptr      fastbins[NFASTBINS];
2313 /* Base of the topmost chunk -- not otherwise kept in a bin */
2314 mchunkptr        top;
2316 /* The remainder from the most recent split of a small request */
2317 mchunkptr        last_remainder;
2319 /* Normal bins packed as described above */
2320 mchunkptr        bins[NBINS * 2];
2322 /* Bitmap of bins */
2323 unsigned int     binmap[BINMAPSIZE];
2325 /* Tunable parameters */
2326 unsigned long    trim_threshold;
2327 INTERNAL_SIZE_T  top_pad;
2328 INTERNAL_SIZE_T  mmap_threshold;
2330 /* Memory map support */
2331 int              n_mmaps;
2332 int              n_mmaps_max;
2333 int              max_n_mmaps;
2335 /* Cache malloc_getpagesize */
2336 unsigned int     pagesize;
2339 INTERNAL_SIZE_T  mmapped_mem;
2340 INTERNAL_SIZE_T  sbrked_mem;
2341 INTERNAL_SIZE_T  max_sbrked_mem;
2342 INTERNAL_SIZE_T  max_mmapped_mem;
2343 INTERNAL_SIZE_T  max_total_mem;
2344 };
2346 typedef struct malloc_state *mstate;
2349 There is exactly one instance of this struct in this malloc.
2350 If you are adapting this malloc in a way that does NOT use a static
2351 malloc_state, you MUST explicitly zero-fill it before using. This
2352 malloc relies on the property that malloc_state is initialized to
2353 all zeroes (as is true of C statics).
2356 static struct malloc_state av_;  /* never directly referenced */
2359 All uses of av_ are via get_malloc_state().
2360 At most one "call" to get_malloc_state is made per invocation of
2361 the public versions of malloc and free, but other routines
2362 that in turn invoke malloc and/or free may call more than once.
2363 Also, it is called in check* routines if DEBUG is set.
2366 #define get_malloc_state() (&(av_))
2369 Initialize a malloc_state struct.
2371 This is called only from within malloc_consolidate, which needs
2372 be called in the same contexts anyway. It is never called directly
2373 outside of malloc_consolidate because some optimizing compilers try
2374 to inline it at all call points, which turns out not to be an
2375 optimization at all. (Inlining it in malloc_consolidate is fine though.)
2379 static void malloc_init_state(mstate av)
2381 static void malloc_init_state(av) mstate av;
2387 /* Establish circular links for normal bins */
2388 for (i = 1; i < NBINS; ++i) {
2390 bin->fd = bin->bk = bin;
2393 av->top_pad        = DEFAULT_TOP_PAD;
2394 av->n_mmaps_max    = DEFAULT_MMAP_MAX;
2395 av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2396 av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
2398 #if !MORECORE_CONTIGUOUS
2399 set_noncontiguous(av);
2402 set_max_fast(av, DEFAULT_MXFAST);
2404 av->top      = initial_top(av);
2405 av->pagesize = malloc_getpagesize;
2409 Other internal utilities operating on mstates
2413 static Void_t*  sYSMALLOc(INTERNAL_SIZE_T, mstate);
2414 static int      sYSTRIm(size_t, mstate);
2415 static void     malloc_consolidate(mstate);
2416 static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
2418 static Void_t*  sYSMALLOc();
2419 static int      sYSTRIm();
2420 static void     malloc_consolidate();
2421 static Void_t** iALLOc();
2427 These routines make a number of assertions about the states
2428 of data structures that should be true at all times. If any
2429 are not true, it's very likely that a user program has somehow
2430 trashed memory. (It's also possible that there is a coding error
2431 in malloc. In which case, please report it!)
2436 #define check_chunk(P)
2437 #define check_free_chunk(P)
2438 #define check_inuse_chunk(P)
2439 #define check_remalloced_chunk(P,N)
2440 #define check_malloced_chunk(P,N)
2441 #define check_malloc_state()
2444 #define check_chunk(P) do_check_chunk(P)
2445 #define check_free_chunk(P) do_check_free_chunk(P)
2446 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
2447 #define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
2448 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
2449 #define check_malloc_state() do_check_malloc_state()
2452 Properties of all chunks
2456 static void do_check_chunk(mchunkptr p)
2458 static void do_check_chunk(p) mchunkptr p;
2461 mstate av = get_malloc_state();
2462 unsigned long sz = chunksize(p);
2463 /* min and max possible addresses assuming contiguous allocation */
2464 char* max_address = (char*)(av->top) + chunksize(av->top);
2465 char* min_address = max_address - av->sbrked_mem;
2467 if (!chunk_is_mmapped(p)) {
2469 /* Has legal address ... */
2471 if (contiguous(av)) {
2472 assert(((char*)p) >= min_address);
2473 assert(((char*)p + sz) <= ((char*)(av->top)));
2477 /* top size is always at least MINSIZE */
2478 assert((unsigned long)(sz) >= MINSIZE);
2479 /* top predecessor always marked inuse */
2480 assert(prev_inuse(p));
2486 /* address is outside main heap */
2487 if (contiguous(av) && av->top != initial_top(av)) {
2488 assert(((char*)p) < min_address || ((char*)p) > max_address);
2490 /* chunk is page-aligned */
2491 assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
2492 /* mem is aligned */
2493 assert(aligned_OK(chunk2mem(p)));
2495 /* force an appropriate assert violation if debug set */
2496 assert(!chunk_is_mmapped(p));
2502 Properties of free chunks
2506 static void do_check_free_chunk(mchunkptr p)
2508 static void do_check_free_chunk(p) mchunkptr p;
2511 mstate av = get_malloc_state();
2513 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2514 mchunkptr next = chunk_at_offset(p, sz);
2518 /* Chunk must claim to be free ... */
2520 assert (!chunk_is_mmapped(p));
2522 /* Unless a special marker, must have OK fields */
2523 if ((unsigned long)(sz) >= MINSIZE)
2525 assert((sz & MALLOC_ALIGN_MASK) == 0);
2526 assert(aligned_OK(chunk2mem(p)));
2527 /* ... matching footer field */
2528 assert(next->prev_size == sz);
2529 /* ... and is fully consolidated */
2530 assert(prev_inuse(p));
2531 assert (next == av->top || inuse(next));
2533 /* ... and has minimally sane links */
2534 assert(p->fd->bk == p);
2535 assert(p->bk->fd == p);
2537 else /* markers are always of size SIZE_SZ */
2538 assert(sz == SIZE_SZ);
2542 Properties of inuse chunks
2546 static void do_check_inuse_chunk(mchunkptr p)
2548 static void do_check_inuse_chunk(p) mchunkptr p;
2551 mstate av = get_malloc_state();
2555 if (chunk_is_mmapped(p))
2556 return; /* mmapped chunks have no next/prev */
2558 /* Check whether it claims to be in use ... */
2561 next = next_chunk(p);
2563 /* ... and is surrounded by OK chunks.
2564 Since more things can be checked with free chunks than inuse ones,
2565 if an inuse chunk borders them and debug is on, it's worth doing them.
2567 if (!prev_inuse(p)) {
2568 /* Note that we cannot even look at prev unless it is not inuse */
2569 mchunkptr prv = prev_chunk(p);
2570 assert(next_chunk(prv) == p);
2571 do_check_free_chunk(prv);
2574 if (next == av->top) {
2575 assert(prev_inuse(next));
2576 assert(chunksize(next) >= MINSIZE);
2578 else if (!inuse(next))
2579 do_check_free_chunk(next);
2583 Properties of chunks recycled from fastbins
2587 static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2589 static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2592 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2594 do_check_inuse_chunk(p);
2596 /* Legal size ... */
2597 assert((sz & MALLOC_ALIGN_MASK) == 0);
2598 assert((unsigned long)(sz) >= MINSIZE);
2599 /* ... and alignment */
2600 assert(aligned_OK(chunk2mem(p)));
2601 /* chunk is less than MINSIZE more than request */
2602 assert((long)(sz) - (long)(s) >= 0);
2603 assert((long)(sz) - (long)(s + MINSIZE) < 0);
2607 Properties of nonrecycled chunks at the point they are malloced
2611 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2613 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2616 /* same as recycled case ... */
2617 do_check_remalloced_chunk(p, s);
2620 ... plus, must obey implementation invariant that prev_inuse is
2621 always true of any allocated chunk; i.e., that each allocated
2622 chunk borders either a previously allocated and still in-use
2623 chunk, or the base of its memory arena. This is ensured
2624 by making all allocations from the `lowest' part of any found
2625 chunk. This does not necessarily hold however for chunks
2626 recycled via fastbins.
2629 assert(prev_inuse(p));
2634 Properties of malloc_state.
2636 This may be useful for debugging malloc, as well as detecting user
2637 programmer errors that somehow write into malloc_state.
2639 If you are extending or experimenting with this malloc, you can
2640 probably figure out how to hack this routine to print out or
2641 display chunk addresses, sizes, bins, and other instrumentation.
2644 static void do_check_malloc_state(void)
2646 mstate av = get_malloc_state();
2651 unsigned int binbit;
2654 INTERNAL_SIZE_T size;
2655 unsigned long total = 0;
2658 /* internal size_t must be no wider than pointer type */
2659 assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));
2661 /* alignment is a power of 2 */
2662 assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
2664 /* cannot run remaining checks until fully initialized */
2665 if (av->top == 0 || av->top == initial_top(av))
2668 /* pagesize is a power of 2 */
2669 assert((av->pagesize & (av->pagesize-1)) == 0);
2671 /* properties of fastbins */
2673 /* max_fast is in allowed range */
2674 assert((av->max_fast & ~1) <= request2size(MAX_FAST_SIZE));
2676 max_fast_bin = fastbin_index(av->max_fast);
2678 for (i = 0; i < NFASTBINS; ++i) {
2679 p = av->fastbins[i];
2681 /* all bins past max_fast are empty */
2682 if (i > max_fast_bin)
2686 /* each chunk claims to be inuse */
2687 do_check_inuse_chunk(p);
2688 total += chunksize(p);
2689 /* chunk belongs in this bin */
2690 assert(fastbin_index(chunksize(p)) == i);
2696 assert(have_fastchunks(av));
2697 else if (!have_fastchunks(av))
2700 /* check normal bins */
2701 for (i = 1; i < NBINS; ++i) {
2704 /* binmap is accurate (except for bin 1 == unsorted_chunks) */
2706 binbit = get_binmap(av,i);
2707 empty = last(b) == b;
2714 for (p = last(b); p != b; p = p->bk) {
2715 /* each chunk claims to be free */
2716 do_check_free_chunk(p);
2717 size = chunksize(p);
2720 /* chunk belongs in bin */
2721 idx = bin_index(size);
2723 /* lists are sorted */
2724 assert(p->bk == b ||
2725 (unsigned long)chunksize(p->bk) >= (unsigned long)chunksize(p));
2727 /* chunk is followed by a legal chain of inuse chunks */
2728 for (q = next_chunk(p);
2729 (q != av->top && inuse(q) &&
2730 (unsigned long)(chunksize(q)) >= MINSIZE);
2732 do_check_inuse_chunk(q);
2736 /* top chunk is OK */
2737 check_chunk(av->top);
2739 /* sanity checks for statistics */
2741 assert(total <= (unsigned long)(av->max_total_mem));
2742 assert(av->n_mmaps >= 0);
2743 assert(av->n_mmaps <= av->n_mmaps_max);
2744 assert(av->n_mmaps <= av->max_n_mmaps);
2746 assert((unsigned long)(av->sbrked_mem) <=
2747 (unsigned long)(av->max_sbrked_mem));
2749 assert((unsigned long)(av->mmapped_mem) <=
2750 (unsigned long)(av->max_mmapped_mem));
2752 assert((unsigned long)(av->max_total_mem) >=
2753 (unsigned long)(av->mmapped_mem) + (unsigned long)(av->sbrked_mem));
2758 /* ----------- Routines dealing with system allocation -------------- */
2761 sysmalloc handles malloc cases requiring more memory from the system.
2762 On entry, it is assumed that av->top does not have enough
2763 space to service request for nb bytes, thus requiring that av->top
2764 be extended or replaced.
2768 static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
2770 static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
2773 mchunkptr       old_top;        /* incoming value of av->top */
2774 INTERNAL_SIZE_T old_size;       /* its size */
2775 char*           old_end;        /* its end address */
2777 long            size;           /* arg to first MORECORE or mmap call */
2778 char*           brk;            /* return value from MORECORE */
2780 long            correction;     /* arg to 2nd MORECORE call */
2781 char*           snd_brk;        /* 2nd return val */
2783 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
2784 INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
2785 char*           aligned_brk;    /* aligned offset into brk */
2787 mchunkptr       p;              /* the allocated/returned chunk */
2788 mchunkptr       remainder;      /* remainder from allocation */
2789 unsigned long   remainder_size; /* its size */
2791 unsigned long   sum;            /* for updating stats */
2793 size_t          pagemask = av->pagesize - 1;
2799 If have mmap, and the request size meets the mmap threshold, and
2800 the system supports mmap, and there are few enough currently
2801 allocated mmapped regions, try to directly map this request
2802 rather than expanding top.
2805 if ((unsigned long)(nb) >= (unsigned long)(av->mmap_threshold) &&
2806 (av->n_mmaps < av->n_mmaps_max)) {
2808 char* mm;  /* return value from mmap call*/
2811 Round up size to nearest page. For mmapped chunks, the overhead
2812 is one SIZE_SZ unit larger than for normal chunks, because there
2813 is no following chunk whose prev_size field could be used.
2815 size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
2817 /* Don't try if size wraps around 0 */
2818 if ((unsigned long)(size) > (unsigned long)(nb)) {
2820 mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
2822 if (mm != (char*)(MORECORE_FAILURE)) {
2825 The offset to the start of the mmapped region is stored
2826 in the prev_size field of the chunk. This allows us to adjust
2827 returned start address to meet alignment requirements here
2828 and in memalign(), and still be able to compute proper
2829 address argument for later munmap in free() and realloc().
2832 front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
2833 if (front_misalign > 0) {
2834 correction = MALLOC_ALIGNMENT - front_misalign;
2835 p = (mchunkptr)(mm + correction);
2836 p->prev_size = correction;
2837 set_head(p, (size - correction) |IS_MMAPPED);
2841 set_head(p, size|IS_MMAPPED);
2844 /* update statistics */
2846 if (++av->n_mmaps > av->max_n_mmaps)
2847 av->max_n_mmaps = av->n_mmaps;
2849 sum = av->mmapped_mem += size;
2850 if (sum > (unsigned long)(av->max_mmapped_mem))
2851 av->max_mmapped_mem = sum;
2852 sum += av->sbrked_mem;
2853 if (sum > (unsigned long)(av->max_total_mem))
2854 av->max_total_mem = sum;
2858 return chunk2mem(p);
2864 /* Record incoming configuration of top */
2867 old_size = chunksize(old_top);
2868 old_end  = (char*)(chunk_at_offset(old_top, old_size));
2870 brk = snd_brk = (char*)(MORECORE_FAILURE);
2873 If not the first time through, we require old_size to be
2874 at least MINSIZE and to have prev_inuse set.
2877 assert((old_top == initial_top(av) && old_size == 0) ||
2878 ((unsigned long) (old_size) >= MINSIZE &&
2879 prev_inuse(old_top)));
2881 /* Precondition: not enough current space to satisfy nb request */
2882 assert((unsigned long)(old_size) < (unsigned long)(nb + MINSIZE));
2884 /* Precondition: all fastbins are consolidated */
2885 assert(!have_fastchunks(av));
2888 /* Request enough space for nb + pad + overhead */
2890 size = nb + av->top_pad + MINSIZE;
2893 If contiguous, we can subtract out existing space that we hope to
2894 combine with new space. We add it back later only if
2895 we don't actually get contiguous space.
2902 Round to a multiple of page size.
2903 If MORECORE is not contiguous, this ensures that we only call it
2904 with whole-page arguments. And if MORECORE is contiguous and
2905 this is not first time through, this preserves page-alignment of
2906 previous calls. Otherwise, we correct to page-align below.
2909 size = (size + pagemask) & ~pagemask;
2912 Don't try to call MORECORE if argument is so big as to appear
2913 negative. Note that since mmap takes size_t arg, it may succeed
2914 below even if we cannot call MORECORE.
2918 brk = (char*)(MORECORE(size));
2921 If have mmap, try using it as a backup when MORECORE fails or
2922 cannot be used. This is worth doing on systems that have "holes" in
2923 address space, so sbrk cannot extend to give contiguous space, but
2924 space is available elsewhere. Note that we ignore mmap max count
2925 and threshold limits, since the space will not be used as a
2926 segregated mmap region.
2930 if (brk == (char*)(MORECORE_FAILURE)) {
2932 /* Cannot merge with old top, so add its size back in */
2934 size = (size + old_size + pagemask) & ~pagemask;
2936 /* If we are relying on mmap as backup, then use larger units */
2937 if ((unsigned long)(size) < (unsigned long)(MMAP_AS_MORECORE_SIZE))
2938 size = MMAP_AS_MORECORE_SIZE;
2940 /* Don't try if size wraps around 0 */
2941 if ((unsigned long)(size) > (unsigned long)(nb)) {
2943 brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
2945 if (brk != (char*)(MORECORE_FAILURE)) {
2947 /* We do not need, and cannot use, another sbrk call to find end */
2948 snd_brk = brk + size;
2951 Record that we no longer have a contiguous sbrk region.
2952 After the first time mmap is used as backup, we do not
2953 ever rely on contiguous space since this could incorrectly
2956 set_noncontiguous(av);
2962 if (brk != (char*)(MORECORE_FAILURE)) {
2963 av->sbrked_mem += size;
2966 If MORECORE extends previous space, we can likewise extend top size.
2969 if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
2970 set_head(old_top, (size + old_size) | PREV_INUSE);
2974 Otherwise, make adjustments:
2976 * If the first time through or noncontiguous, we need to call sbrk
2977 just to find out where the end of memory lies.
2979 * We need to ensure that all returned chunks from malloc will meet
2982 * If there was an intervening foreign sbrk, we need to adjust sbrk
2983 request size to account for fact that we will not be able to
2984 combine new space with existing space in old_top.
2986 * Almost all systems internally allocate whole pages at a time, in
2987 which case we might as well use the whole last page of request.
2988 So we allocate enough more memory to hit a page boundary now,
2989 which in turn causes future contiguous calls to page-align.
2998 /* handle contiguous cases */
2999 if (contiguous(av)) {
3001 /* Guarantee alignment of first new chunk made from this space */
3003 front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
3004 if (front_misalign > 0) {
3007 Skip over some bytes to arrive at an aligned position.
3008 We don't need to specially mark these wasted front bytes.
3009 They will never be accessed anyway because
3010 prev_inuse of av->top (and any chunk created from its start)
3011 is always true after initialization.
3014 correction = MALLOC_ALIGNMENT - front_misalign;
3015 aligned_brk += correction;
3019 If this isn't adjacent to existing space, then we will not
3020 be able to merge with old_top space, so must add to 2nd request.
3023 correction += old_size;
3025 /* Extend the end address to hit a page boundary */
3026 end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
3027 correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
3029 assert(correction >= 0);
3030 snd_brk = (char*)(MORECORE(correction));
3033 If can't allocate correction, try to at least find out current
3034 brk. It might be enough to proceed without failing.
3036 Note that if second sbrk did NOT fail, we assume that space
3037 is contiguous with first sbrk. This is a safe assumption unless
3038 program is multithreaded but doesn't use locks and a foreign sbrk
3039 occurred between our first and second calls.
3042 if (snd_brk == (char*)(MORECORE_FAILURE)) {
3044 snd_brk = (char*)(MORECORE(0));
3048 /* handle non-contiguous cases */
3050 /* MORECORE/mmap must correctly align */
3051 assert(((unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK) == 0);
3053 /* Find out current end of memory */
3054 if (snd_brk == (char*)(MORECORE_FAILURE)) {
3055 snd_brk = (char*)(MORECORE(0));
3059 /* Adjust top based on results of second sbrk */
3060 if (snd_brk != (char*)(MORECORE_FAILURE)) {
3061 av->top = (mchunkptr)aligned_brk;
3062 set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
3063 av->sbrked_mem += correction;
3066 If not the first time through, we either have a
3067 gap due to foreign sbrk or a non-contiguous region. Insert a
3068 double fencepost at old_top to prevent consolidation with space
3069 we don't own. These fenceposts are artificial chunks that are
3070 marked as inuse and are in any case too small to use. We need
3071 two to make sizes and alignments work out.
3074 if (old_size != 0) {
3076 Shrink old_top to insert fenceposts, keeping size a
3077 multiple of MALLOC_ALIGNMENT. We know there is at least
3078 enough space in old_top to do this.
3080 old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
3081 set_head(old_top, old_size | PREV_INUSE);
3084 Note that the following assignments completely overwrite
3085 old_top when old_size was previously MINSIZE. This is
3086 intentional. We need the fencepost, even if old_top otherwise gets lost.
3089 chunk_at_offset(old_top, old_size          )->size =
3090 SIZE_SZ|PREV_INUSE;
3092 chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
3093 SIZE_SZ|PREV_INUSE;
3095 /* If possible, release the rest. */
3096 if (old_size >= MINSIZE) {
3097 fREe(chunk2mem(old_top));
3104 /* Update statistics */
3105 sum = av->sbrked_mem;
3106 if (sum > (unsigned long)(av->max_sbrked_mem))
3107 av->max_sbrked_mem = sum;
3109 sum += av->mmapped_mem;
3110 if (sum > (unsigned long)(av->max_total_mem))
3111 av->max_total_mem = sum;
3113 check_malloc_state();
3115 /* finally, do the allocation */
3117 size = chunksize(p);
3119 /* check that one of the above allocation paths succeeded */
3120 if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {
3121 remainder_size = size - nb;
3122 remainder = chunk_at_offset(p, nb);
3123 av->top = remainder;
3124 set_head(p, nb | PREV_INUSE);
3125 set_head(remainder, remainder_size | PREV_INUSE);
3126 check_malloced_chunk(p, nb);
3127 return chunk2mem(p);
3131 /* catch all failure paths */
3132 MALLOC_FAILURE_ACTION;
3138 sYSTRIm is an inverse of sorts to sYSMALLOc. It gives memory back
3139 to the system (via negative arguments to sbrk) if there is unused
3140 memory at the `high' end of the malloc pool. It is called
3141 automatically by free() when top space exceeds the trim
3142 threshold. It is also called by the public malloc_trim routine. It
3143 returns 1 if it actually released any memory, else 0.
3147 static int sYSTRIm(size_t pad, mstate av)
3149 static int sYSTRIm(pad, av) size_t pad; mstate av;
3152 long  top_size;     /* Amount of top-most memory */
3153 long  extra;        /* Amount to release */
3154 long  released;     /* Amount actually released */
3155 char* current_brk;  /* address returned by pre-check sbrk call */
3156 char* new_brk;      /* address returned by post-check sbrk call */
3159 pagesz = av->pagesize;
3160 top_size = chunksize(av->top);
3162 /* Release in pagesize units, keeping at least one page */
3163 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3168 Only proceed if end of memory is where we last set it.
3169 This avoids problems if there were foreign sbrk calls.
3171 current_brk = (char*)(MORECORE(0));
3172 if (current_brk == (char*)(av->top) + top_size) {
3175 Attempt to release memory. We ignore MORECORE return value,
3176 and instead call again to find out where new end of memory is.
3177 This avoids problems if first call releases less than we asked,
3178 or if failure somehow altered brk value. (We could still
3179 encounter problems if it altered brk in some very bad way,
3180 but the only thing we can do is adjust anyway, which will cause
3181 some downstream failure.)
3185 new_brk = (char*)(MORECORE(0));
3187 if (new_brk != (char*)MORECORE_FAILURE) {
3188 released = (long)(current_brk - new_brk);
3190 if (released != 0) {
3191 /* Success. Adjust top. */
3192 av->sbrked_mem -= released;
3193 set_head(av->top, (top_size - released) | PREV_INUSE);
3194 check_malloc_state();
3204 ------------------------------ malloc ------------------------------
3208 Void_t* mALLOc(size_t bytes)
3210 Void_t* mALLOc(bytes) size_t bytes;
3213 mstate av = get_malloc_state();
3215 INTERNAL_SIZE_T nb;               /* normalized request size */
3216 unsigned int    idx;              /* associated bin index */
3217 mbinptr         bin;              /* associated bin */
3218 mfastbinptr*    fb;               /* associated fastbin */
3220 mchunkptr       victim;           /* inspected/selected chunk */
3221 INTERNAL_SIZE_T size;             /* its size */
3222 int             victim_index;     /* its bin index */
3224 mchunkptr       remainder;        /* remainder from a split */
3225 unsigned long   remainder_size;   /* its size */
3227 unsigned int    block;            /* bit map traverser */
3228 unsigned int    bit;              /* bit map traverser */
3229 unsigned int    map;              /* current word of binmap */
3231 mchunkptr       fwd;              /* misc temp for linking */
3232 mchunkptr       bck;              /* misc temp for linking */
3235 Convert request size to internal form by adding SIZE_SZ bytes
3236 overhead plus possibly more to obtain necessary alignment and/or
3237 to obtain a size of at least MINSIZE, the smallest allocatable
3238 size. Also, checked_request2size traps (returning 0) request sizes
3239 that are so large that they wrap around zero when padded and aligned.
3243 checked_request2size(bytes, nb);
3246 If the size qualifies as a fastbin, first check corresponding bin.
3247 This code is safe to execute even if av is not yet initialized, so we
3248 can try it without checking, which saves some time on this fast path.
3251 if ((unsigned long)(nb) <= (unsigned long)(av->max_fast)) {
3252 fb = &(av->fastbins[(fastbin_index(nb))]);
3253 if ( (victim = *fb) != 0) {
3254 *fb = victim->fd;
3255 check_remalloced_chunk(victim, nb);
3256 return chunk2mem(victim);
3261 If a small request, check regular bin. Since these "smallbins"
3262 hold one size each, no searching within bins is necessary.
3263 (For a large request, we need to wait until unsorted chunks are
3264 processed to find best fit. But for small ones, fits are exact
3265 anyway, so we can check now, which is faster.)
3268 if (in_smallbin_range(nb)) {
3269 idx = smallbin_index(nb);
3270 bin = bin_at(av,idx);
3272 if ( (victim = last(bin)) != bin) {
3273 if (victim == 0) /* initialization check */
3274 malloc_consolidate(av);
3277 set_inuse_bit_at_offset(victim, nb);
3281 check_malloced_chunk(victim, nb);
3282 return chunk2mem(victim);
3288 If this is a large request, consolidate fastbins before continuing.
3289 While it might look excessive to kill all fastbins before
3290 even seeing if there is space available, this avoids
3291 fragmentation problems normally associated with fastbins.
3292 Also, in practice, programs tend to have runs of either small or
3293 large requests, but less often mixtures, so consolidation is not
3294 invoked all that often in most programs. And the programs that
3295 it is called frequently in otherwise tend to fragment.
3299 idx = largebin_index(nb);
3300 if (have_fastchunks(av))
3301 malloc_consolidate(av);
3305 Process recently freed or remaindered chunks, taking one only if
3306 it is exact fit, or, if this a small request, the chunk is remainder from
3307 the most recent non-exact fit. Place other traversed chunks in
3308 bins. Note that this step is the only place in any routine where
3309 chunks are placed in bins.
3311 The outer loop here is needed because we might not realize until
3312 near the end of malloc that we should have consolidated, so must
3313 do so and retry. This happens at most once, and only when we would
3314 otherwise need to expand memory to service a "small" request.
3319 while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
3320 bck = victim->bk;
3321 size = chunksize(victim);
3324 If a small request, try to use last remainder if it is the
3325 only chunk in unsorted bin. This helps promote locality for
3326 runs of consecutive small requests. This is the only
3327 exception to best-fit, and applies only when there is
3328 no exact fit for a small chunk.
3331 if (in_smallbin_range(nb) &&
3332 bck == unsorted_chunks(av) &&
3333 victim == av->last_remainder &&
3334 (unsigned long)(size) > (unsigned long)(nb + MINSIZE)) {
3336 /* split and reattach remainder */
3337 remainder_size = size - nb;
3338 remainder = chunk_at_offset(victim, nb);
3339 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3340 av->last_remainder = remainder;
3341 remainder->bk = remainder->fd = unsorted_chunks(av);
3343 set_head(victim, nb | PREV_INUSE);
3344 set_head(remainder, remainder_size | PREV_INUSE);
3345 set_foot(remainder, remainder_size);
3347 check_malloced_chunk(victim, nb);
3348 return chunk2mem(victim);
3351 /* remove from unsorted list */
3352 unsorted_chunks(av)->bk = bck;
3353 bck->fd = unsorted_chunks(av);
3355 /* Take now instead of binning if exact fit */
3357 if (size == nb) {
3358 set_inuse_bit_at_offset(victim, size);
3359 check_malloced_chunk(victim, nb);
3360 return chunk2mem(victim);
3363 /* place chunk in bin */
3365 if (in_smallbin_range(size)) {
3366 victim_index = smallbin_index(size);
3367 bck = bin_at(av, victim_index);
3371 victim_index = largebin_index(size);
3372 bck = bin_at(av, victim_index);
3375 /* maintain large bins in sorted order */
3377 size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
3378 /* if smaller than smallest, bypass loop below */
3379 if ((unsigned long)(size) <= (unsigned long)(bck->bk->size)) {
3384 while ((unsigned long)(size) < (unsigned long)(fwd->size))
3391 mark_bin(av, victim_index);
3399 If a large request, scan through the chunks of current bin in
3400 sorted order to find smallest that fits. This is the only step
3401 where an unbounded number of chunks might be scanned without doing
3402 anything useful with them. However the lists tend to be short.
3405 if (!in_smallbin_range(nb)) {
3406 bin = bin_at(av, idx);
3408 /* skip scan if empty or largest chunk is too small */
3409 if ((victim = last(bin)) != bin &&
3410 (unsigned long)(first(bin)->size) >= (unsigned long)(nb)) {
3412 while (((unsigned long)(size = chunksize(victim)) <
3413 (unsigned long)(nb)))
3414 victim = victim->bk;
3416 remainder_size = size - nb;
3417 unlink(victim, bck, fwd);
3420 if (remainder_size < MINSIZE) {
3421 set_inuse_bit_at_offset(victim, size);
3422 check_malloced_chunk(victim, nb);
3423 return chunk2mem(victim);
3427 remainder = chunk_at_offset(victim, nb);
3428 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3429 remainder->bk = remainder->fd = unsorted_chunks(av);
3430 set_head(victim, nb | PREV_INUSE);
3431 set_head(remainder, remainder_size | PREV_INUSE);
3432 set_foot(remainder, remainder_size);
3433 check_malloced_chunk(victim, nb);
3434 return chunk2mem(victim);
3440 Search for a chunk by scanning bins, starting with next largest
3441 bin. This search is strictly by best-fit; i.e., the smallest
3442 (with ties going to approximately the least recently used) chunk
3443 that fits is selected.
3445 The bitmap avoids needing to check that most blocks are nonempty.
3446 The particular case of skipping all bins during warm-up phases
3447 when no chunks have been returned yet is faster than it might look.
3451 bin = bin_at(av,idx);
3452 block = idx2block(idx);
3453 map = av->binmap[block];
3458 /* Skip rest of block if there are no more set bits in this block. */
3459 if (bit > map || bit == 0) {
3461 if (++block >= BINMAPSIZE)  /* out of bins */
3463 } while ( (map = av->binmap[block]) == 0);
3465 bin = bin_at(av, (block << BINMAPSHIFT));
3469 /* Advance to bin with set bit. There must be one. */
3470 while ((bit & map) == 0) {
3471 bin = next_bin(bin);
3476 /* Inspect the bin. It is likely to be non-empty */
3479 /* If a false alarm (empty bin), clear the bit. */
3480 if (victim == bin) {
3481 av->binmap[block] = map &= ~bit; /* Write through */
3482 bin = next_bin(bin);
3487 size = chunksize(victim);
3489 /* We know the first chunk in this bin is big enough to use. */
3490 assert((unsigned long)(size) >= (unsigned long)(nb));
3492 remainder_size = size - nb;
3500 if (remainder_size < MINSIZE) {
3501 set_inuse_bit_at_offset(victim, size);
3502 check_malloced_chunk(victim, nb);
3503 return chunk2mem(victim);
3508 remainder = chunk_at_offset(victim, nb);
3510 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3511 remainder->bk = remainder->fd = unsorted_chunks(av);
3512 /* advertise as last remainder */
3513 if (in_smallbin_range(nb))
3514 av->last_remainder = remainder;
3516 set_head(victim, nb | PREV_INUSE);
3517 set_head(remainder, remainder_size | PREV_INUSE);
3518 set_foot(remainder, remainder_size);
3519 check_malloced_chunk(victim, nb);
3520 return chunk2mem(victim);
3527 If large enough, split off the chunk bordering the end of memory
3528 (held in av->top). Note that this is in accord with the best-fit
3529 search rule. In effect, av->top is treated as larger (and thus
3530 less well fitting) than any other available chunk since it can
3531 be extended to be as large as necessary (up to system limitations).
3534 We require that av->top always exists (i.e., has size >=
3535 MINSIZE) after initialization, so if it would otherwise be
3536 exhausted by current request, it is replenished. (The main
3537 reason for ensuring it exists is that we may need MINSIZE space
3538 to put in fenceposts in sysmalloc.)
3542 size = chunksize(victim);
3544 if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) {
3545 remainder_size = size - nb;
3546 remainder = chunk_at_offset(victim, nb);
3547 av->top = remainder;
3548 set_head(victim, nb | PREV_INUSE);
3549 set_head(remainder, remainder_size | PREV_INUSE);
3551 check_malloced_chunk(victim, nb);
3552 return chunk2mem(victim);
3556 If there is space available in fastbins, consolidate and retry,
3557 to possibly avoid expanding memory. This can occur only if nb is
3558 in smallbin range so we didn't consolidate upon entry.
3561 else if (have_fastchunks(av)) {
3562 assert(in_smallbin_range(nb));
3563 malloc_consolidate(av);
3564 idx = smallbin_index(nb); /* restore original bin index */
3568 Otherwise, relay to handle system-dependent cases
3571 return sYSMALLOc(nb, av);
3576 ------------------------------ free ------------------------------
3580 void fREe(Void_t* mem)
3582 void fREe(mem) Void_t* mem;
3585 mstate av = get_malloc_state();
3587 mchunkptr       p;           /* chunk corresponding to mem */
3588 INTERNAL_SIZE_T size;        /* its size */
3589 mfastbinptr*    fb;          /* associated fastbin */
3590 mchunkptr       nextchunk;   /* next contiguous chunk */
3591 INTERNAL_SIZE_T nextsize;    /* its size */
3592 int             nextinuse;   /* true if nextchunk is used */
3593 INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
3594 mchunkptr       bck;         /* misc temp for linking */
3595 mchunkptr       fwd;         /* misc temp for linking */
3598 /* free(0) has no effect */
3599 if (mem != 0) {
3600 p = mem2chunk(mem);
3601 size = chunksize(p);
3603 check_inuse_chunk(p);
3606 If eligible, place chunk on a fastbin so it can be found
3607 and used quickly in malloc.
3610 if ((unsigned long)(size) <= (unsigned long)(av->max_fast)
3614 If TRIM_FASTBINS set, don't place chunks
3615 bordering top into fastbins
3617 && (chunk_at_offset(p, size) != av->top)
3622 fb = &(av->fastbins[fastbin_index(size)]);
3623 p->fd = *fb;
3624 *fb = p;
3628 Consolidate other non-mmapped chunks as they arrive.
3631 else if (!chunk_is_mmapped(p)) {
3632 nextchunk = chunk_at_offset(p, size);
3633 nextsize = chunksize(nextchunk);
3635 /* consolidate backward */
3636 if (!prev_inuse(p)) {
3637 prevsize = p->prev_size;
3638 size += prevsize;
3639 p = chunk_at_offset(p, -((long) prevsize));
3640 unlink(p, bck, fwd);
3643 if (nextchunk != av->top) {
3644 /* get and clear inuse bit */
3645 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3646 set_head(nextchunk, nextsize);
3648 /* consolidate forward */
3649 if (!nextinuse) {
3650 unlink(nextchunk, bck, fwd);
3651 size += nextsize;
3655 Place the chunk in unsorted chunk list. Chunks are
3656 not placed into regular bins until after they have
3657 been given one chance to be used in malloc.
3660 bck = unsorted_chunks(av);
3667 set_head(p, size | PREV_INUSE);
3670 check_free_chunk(p);
3674 If the chunk borders the current high end of memory,
3675 consolidate into top
3680 set_head(p, size | PREV_INUSE);
3686 If freeing a large space, consolidate possibly-surrounding
3687 chunks. Then, if the total unused topmost memory exceeds trim
3688 threshold, ask malloc_trim to reduce top.
3690 Unless max_fast is 0, we don't know if there are fastbins
3691 bordering top, so we cannot tell for sure whether threshold
3692 has been reached unless fastbins are consolidated. But we
3693 don't want to consolidate on each free. As a compromise,
3694 consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD is reached.
3698 if ((unsigned long)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
3699 if (have_fastchunks(av))
3700 malloc_consolidate(av);
3702 #ifndef MORECORE_CANNOT_TRIM
3703 if ((unsigned long)(chunksize(av->top)) >=
3704 (unsigned long)(av->trim_threshold))
3705 sYSTRIm(av->top_pad, av);
3711 If the chunk was allocated via mmap, release via munmap()
3712 Note that if HAVE_MMAP is false but chunk_is_mmapped is
3713 true, then user must have overwritten memory. There's nothing
3714 we can do to catch this error unless DEBUG is set, in which case
3715 check_inuse_chunk (above) will have triggered error.
3721 INTERNAL_SIZE_T offset = p->prev_size;
3723 av->mmapped_mem -= (size + offset);
3724 ret = munmap((char*)p - offset, size + offset);
3725 /* munmap returns non-zero on failure */
3733 ------------------------- malloc_consolidate -------------------------
3735 malloc_consolidate is a specialized version of free() that tears
3736 down chunks held in fastbins. Free itself cannot be used for this
3737 purpose since, among other things, it might place chunks back onto
3738 fastbins. So, instead, we need to use a minor variant of the same code.
3741 Also, because this routine needs to be called the first time through
3742 malloc anyway, it turns out to be the perfect place to trigger
3743 initialization code.
3747 static void malloc_consolidate(mstate av)
3749 static void malloc_consolidate(av) mstate av;
3752 mfastbinptr*    fb;              /* current fastbin being consolidated */
3753 mfastbinptr*    maxfb;           /* last fastbin (for loop control) */
3754 mchunkptr       p;               /* current chunk being consolidated */
3755 mchunkptr       nextp;           /* next chunk to consolidate */
3756 mchunkptr       unsorted_bin;    /* bin header */
3757 mchunkptr       first_unsorted;  /* chunk to link to */
3759 /* These have same use as in free() */
3760 mchunkptr       nextchunk;
3761 INTERNAL_SIZE_T size;
3762 INTERNAL_SIZE_T nextsize;
3763 INTERNAL_SIZE_T prevsize;
3769 If max_fast is 0, we know that av hasn't
3770 yet been initialized, in which case do so below
3773 if (av->max_fast != 0) {
3774 clear_fastchunks(av);
3776 unsorted_bin = unsorted_chunks(av);
3779 Remove each chunk from fast bin and consolidate it, placing it
3780 then in unsorted bin. Among other reasons for doing this,
3781 placing in unsorted bin avoids needing to calculate actual bins
3782 until malloc is sure that chunks aren't immediately going to be used.
3786 maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
3787 fb = &(av->fastbins[0]);
3789 if ( (p = *fb) != 0) {
3793 check_inuse_chunk(p);
3794 nextp = p->fd;
3796 /* Slightly streamlined version of consolidation code in free() */
3797 size = p->size & ~PREV_INUSE;
3798 nextchunk = chunk_at_offset(p, size);
3799 nextsize = chunksize(nextchunk);
3801 if (!prev_inuse(p)) {
3802 prevsize = p->prev_size;
3803 size += prevsize;
3804 p = chunk_at_offset(p, -((long) prevsize));
3805 unlink(p, bck, fwd);
3808 if (nextchunk != av->top) {
3809 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3810 set_head(nextchunk, nextsize);
3812 if (!nextinuse) {
3813 size += nextsize;
3814 unlink(nextchunk, bck, fwd);
3817 first_unsorted = unsorted_bin->fd;
3818 unsorted_bin->fd = p;
3819 first_unsorted->bk = p;
3821 set_head(p, size | PREV_INUSE);
3822 p->bk = unsorted_bin;
3823 p->fd = first_unsorted;
3829 set_head(p, size | PREV_INUSE);
3833 } while ( (p = nextp) != 0);
3836 } while (fb++ != maxfb);
3839 malloc_init_state(av);
3840 check_malloc_state();
3845 ------------------------------ realloc ------------------------------
3850 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3852 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3855 mstate av = get_malloc_state();
3857 INTERNAL_SIZE_T  nb;              /* padded request size */
3859 mchunkptr        oldp;            /* chunk corresponding to oldmem */
3860 INTERNAL_SIZE_T  oldsize;         /* its size */
3862 mchunkptr        newp;            /* chunk to return */
3863 INTERNAL_SIZE_T  newsize;         /* its size */
3864 Void_t*          newmem;          /* corresponding user mem */
3866 mchunkptr        next;            /* next contiguous chunk after oldp */
3868 mchunkptr        remainder;       /* extra space at end of newp */
3869 unsigned long    remainder_size;  /* its size */
3871 mchunkptr        bck;             /* misc temp for linking */
3872 mchunkptr        fwd;             /* misc temp for linking */
3874 unsigned long    copysize;        /* bytes to copy */
3875 unsigned int     ncopies;         /* INTERNAL_SIZE_T words to copy */
3876 INTERNAL_SIZE_T* s;               /* copy source */
3877 INTERNAL_SIZE_T* d;               /* copy destination */
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    fREe(oldmem);
    return 0;
  }
#endif

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0) return mALLOc(bytes);

  checked_request2size(bytes, nb);

  oldp    = mem2chunk(oldmem);
  oldsize = chunksize(oldp);

  check_inuse_chunk(oldp);

  if (!chunk_is_mmapped(oldp)) {

    if ((unsigned long)(oldsize) >= (unsigned long)(nb)) {
      /* already big enough; split below */
      newp = oldp;
      newsize = oldsize;
    }

    else {
      next = chunk_at_offset(oldp, oldsize);

      /* Try to expand forward into top */
      if (next == av->top &&
          (unsigned long)(newsize = oldsize + chunksize(next)) >=
          (unsigned long)(nb + MINSIZE)) {
        set_head_size(oldp, nb);
        av->top = chunk_at_offset(oldp, nb);
        set_head(av->top, (newsize - nb) | PREV_INUSE);
        return chunk2mem(oldp);
      }

      /* Try to expand forward into next chunk;  split off remainder below */
      else if (next != av->top &&
               !inuse(next) &&
               (unsigned long)(newsize = oldsize + chunksize(next)) >=
               (unsigned long)(nb)) {
        newp = oldp;
        unlink(next, bck, fwd);
      }

      /* allocate, copy, free */
      else {
        newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
        if (newmem == 0)
          return 0; /* propagate failure */

        newp = mem2chunk(newmem);
        newsize = chunksize(newp);

        /*
          Avoid copy if newp is next chunk after oldp.
        */
        if (newp == next) {
          newsize += oldsize;
          newp = oldp;
        }
        else {
          /*
            Unroll copy of <= 36 bytes (72 if 8byte sizes)
            We know that contents have an odd number of
            INTERNAL_SIZE_T-sized words; minimally 3.
          */

          copysize = oldsize - SIZE_SZ;
          s = (INTERNAL_SIZE_T*)(oldmem);
          d = (INTERNAL_SIZE_T*)(newmem);
          ncopies = copysize / sizeof(INTERNAL_SIZE_T);
          assert(ncopies >= 3);

          MALLOC_COPY(d, s, copysize);

          fREe(oldmem);
          check_inuse_chunk(newp);
          return chunk2mem(newp);
        }
      }
    }
    /* If possible, free extra space in old or extended chunk */

    assert((unsigned long)(newsize) >= (unsigned long)(nb));

    remainder_size = newsize - nb;

    if (remainder_size < MINSIZE) { /* not enough extra to split off */
      set_head_size(newp, newsize);
      set_inuse_bit_at_offset(newp, newsize);
    }
    else { /* split remainder */
      remainder = chunk_at_offset(newp, nb);
      set_head_size(newp, nb);
      set_head(remainder, remainder_size | PREV_INUSE);
      /* Mark remainder as inuse so free() won't complain */
      set_inuse_bit_at_offset(remainder, remainder_size);
      fREe(chunk2mem(remainder));
    }

    check_inuse_chunk(newp);
    return chunk2mem(newp);
  }
  /*
    Handle mmap cases
  */

  else {
#if HAVE_MMAP

#if HAVE_MREMAP
    INTERNAL_SIZE_T offset = oldp->prev_size;
    size_t pagemask = av->pagesize - 1;
    char* cp;
    unsigned long sum;

    /* Note the extra SIZE_SZ overhead */
    newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;

    /* don't need to remap if still within same page */
    if (oldsize == newsize - offset)
      return oldmem;

    cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);

    if (cp != (char*)MORECORE_FAILURE) {

      newp = (mchunkptr)(cp + offset);
      set_head(newp, (newsize - offset)|IS_MMAPPED);

      assert(aligned_OK(chunk2mem(newp)));
      assert((newp->prev_size == offset));

      /* update statistics */
      sum = av->mmapped_mem += newsize - oldsize;
      if (sum > (unsigned long)(av->max_mmapped_mem))
        av->max_mmapped_mem = sum;
      sum += av->sbrked_mem;
      if (sum > (unsigned long)(av->max_total_mem))
        av->max_total_mem = sum;

      return chunk2mem(newp);
    }
#endif

    /* Note the extra SIZE_SZ overhead. */
    if ((unsigned long)(oldsize) >= (unsigned long)(nb + SIZE_SZ))
      newmem = oldmem; /* do nothing */
    else {
      /* Must alloc, copy, free. */
      newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
      if (newmem != 0) {
        MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
        fREe(oldmem);
      }
    }
    return newmem;

#else
    /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
    check_malloc_state();
    MALLOC_FAILURE_ACTION;
    return 0;
#endif
  }
}
/*
  ------------------------------ memalign ------------------------------
*/

#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;             /* padded request size */
  char*           m;              /* memory returned by malloc call */
  mchunkptr       p;              /* corresponding chunk */
  char*           brk;            /* alignment point within p */
  mchunkptr       newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;        /* its size */
  INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
  mchunkptr       remainder;      /* spare room at end to split off */
  unsigned long   remainder_size; /* its size */
  INTERNAL_SIZE_T size;
  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment < MINSIZE) alignment = MINSIZE;

  /* Make sure alignment is power of 2 (in case MINSIZE is not). */
  if ((alignment & (alignment - 1)) != 0) {
    size_t a = MALLOC_ALIGNMENT * 2;
    while ((unsigned long)a < (unsigned long)alignment) a <<= 1;
    alignment = a;
  }

  checked_request2size(bytes, nb);

  /*
    Strategy: find a spot within that chunk that meets the alignment
    request, and then possibly free the leading and trailing space.
  */

  /* Call malloc with worst case padding to hit alignment. */

  m = (char*)(mALLOc(nb + alignment + MINSIZE));

  if (m == 0) return 0; /* propagate failure */

  p = mem2chunk(m);

  if ((((unsigned long)(m)) % alignment) != 0) { /* misaligned */

    /*
      Find an aligned spot inside chunk.  Since we need to give back
      leading space in a chunk of at least MINSIZE, if the first
      calculation places us at a spot with less than MINSIZE leader,
      we can move to the next aligned spot -- we've allocated enough
      total room so that this is always possible.
    */

    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
                           -((signed long) alignment));
    if ((unsigned long)(brk - (char*)(p)) < MINSIZE)
      brk += alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

    /* For mmapped chunks, just adjust offset */
    if (chunk_is_mmapped(p)) {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      return chunk2mem(newp);
    }

    /* Otherwise, give back leader, use the rest */
    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    fREe(chunk2mem(p));
    p = newp;

    assert (newsize >= nb &&
            (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */
  if (!chunk_is_mmapped(p)) {
    size = chunksize(p);
    if ((unsigned long)(size) > (unsigned long)(nb + MINSIZE)) {
      remainder_size = size - nb;
      remainder = chunk_at_offset(p, nb);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_head_size(p, nb);
      fREe(chunk2mem(remainder));
    }
  }

  check_inuse_chunk(p);
  return chunk2mem(p);
}
/*
  ------------------------------ calloc ------------------------------
*/

#if __STD_C
Void_t* cALLOc(size_t n_elements, size_t elem_size)
#else
Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
#endif
{
  mchunkptr p;
  unsigned long clearsize;
  unsigned long nclears;
  INTERNAL_SIZE_T* d;

  Void_t* mem = mALLOc(n_elements * elem_size);

  if (mem != 0) {
    p = mem2chunk(mem);

    if (!chunk_is_mmapped(p)) /* don't need to clear mmapped space */
    {
      /*
        Unroll clear of <= 36 bytes (72 if 8byte sizes)
        We know that contents have an odd number of
        INTERNAL_SIZE_T-sized words; minimally 3.
      */

      d = (INTERNAL_SIZE_T*)mem;
      clearsize = chunksize(p) - SIZE_SZ;
      nclears = clearsize / sizeof(INTERNAL_SIZE_T);
      assert(nclears >= 3);

      MALLOC_ZERO(d, clearsize);
    }
  }
  return mem;
}
/*
  ------------------------------ cfree ------------------------------
*/

#if __STD_C
void cFREe(Void_t *mem)
#else
void cFREe(mem) Void_t *mem;
#endif
{
  fREe(mem);
}
/*
  ------------------------- independent_calloc -------------------------
*/

#if __STD_C
Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
#else
Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
#endif
{
  size_t sz = elem_size; /* serves as 1-element array */
  /* opts arg of 3 means all elements are same size, and should be cleared */
  return iALLOc(n_elements, &sz, 3, chunks);
}
/*
  ------------------------- independent_comalloc -------------------------
*/

#if __STD_C
Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
#else
Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
#endif
{
  return iALLOc(n_elements, sizes, 0, chunks);
}
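/*
  Illustrative usage sketch (added for clarity; not part of the original
  distribution, and the element sizes below are arbitrary example values):
  carve one underlying allocation into three independently freeable pieces
  via independent_comalloc, using a caller-supplied pointer array so no
  extra array needs to be allocated or freed.

    void*  elems[3];
    size_t sizes[3] = { 16, 40, 8 };

    if (independent_comalloc(3, sizes, elems) != 0) {
      // each element may be used and freed independently
      free(elems[1]);
      free(elems[0]);
      free(elems[2]);
    }

  The return value is simply the supplied array (or 0 on failure).
*/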
/*
  ------------------------------ ialloc ------------------------------
  ialloc provides common support for independent_X routines, handling all of
  the combinations that can result.

  The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
*/

#if __STD_C
static Void_t** iALLOc(size_t n_elements,
                       size_t* sizes,
                       int opts,
                       Void_t* chunks[])
#else
static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
#endif
{
  mstate av = get_malloc_state();
  INTERNAL_SIZE_T element_size;   /* chunksize of each element, if all same */
  INTERNAL_SIZE_T contents_size;  /* total size of elements */
  INTERNAL_SIZE_T array_size;     /* request size of pointer array */
  Void_t*         mem;            /* malloced aggregate space */
  mchunkptr       p;              /* corresponding chunk */
  INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
  Void_t**        marray;         /* either "chunks" or malloced ptr array */
  mchunkptr       array_chunk;    /* chunk for malloced ptr array */
  int             mmx;            /* to disable mmap */
  INTERNAL_SIZE_T size;
  size_t          i;
  /* Ensure initialization/consolidation */
  if (have_fastchunks(av)) malloc_consolidate(av);

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (Void_t**) mALLOc(0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(Void_t*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  /* subtract out alignment bytes from total to minimize overallocation */
  size = contents_size + array_size - MALLOC_ALIGN_MASK;

  /*
    Allocate the aggregate chunk.
    But first disable mmap so malloc won't use it, since
    we would not be able to later free/realloc space internal
    to a segregated mmap region.
  */
  mmx = av->n_mmaps_max;  /* disable mmap */
  av->n_mmaps_max = 0;
  mem = mALLOc(size);
  av->n_mmaps_max = mmx;  /* reset mmap */
  if (mem == 0)
    return 0;

  p = mem2chunk(mem);
  assert(!chunk_is_mmapped(p));
  remainder_size = chunksize(p);

  if (opts & 0x2) { /* optionally clear the elements */
    MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    array_chunk = chunk_at_offset(p, contents_size);
    marray = (Void_t**) (chunk2mem(array_chunk));
    set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_head(p, size | PREV_INUSE);
      p = chunk_at_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_head(p, remainder_size | PREV_INUSE);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0)
      assert(remainder_size == element_size);
    else
      assert(remainder_size == request2size(sizes[i]));
    check_inuse_chunk(mem2chunk(marray));
  }

  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(mem2chunk(marray[i]));
#endif

  return marray;
}
/*
  ------------------------------ valloc ------------------------------
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  /* Ensure initialization/consolidation */
  mstate av = get_malloc_state();
  if (have_fastchunks(av)) malloc_consolidate(av);
  return mEMALIGn(av->pagesize, bytes);
}
/*
  ------------------------------ pvalloc ------------------------------
*/

#if __STD_C
Void_t* pVALLOc(size_t bytes)
#else
Void_t* pVALLOc(bytes) size_t bytes;
#endif
{
  mstate av = get_malloc_state();
  size_t pagesz;

  /* Ensure initialization/consolidation */
  if (have_fastchunks(av)) malloc_consolidate(av);
  pagesz = av->pagesize;
  return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
}
/*
  ------------------------------ malloc_trim ------------------------------
*/

#if __STD_C
int mTRIm(size_t pad)
#else
int mTRIm(pad) size_t pad;
#endif
{
  mstate av = get_malloc_state();
  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

#ifndef MORECORE_CANNOT_TRIM
  return sYSTRIm(pad, av);
#else
  return 0;
#endif
}
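/*
  Usage sketch (added for clarity; not part of the original distribution,
  and "big_buffer" is a hypothetical allocation): after releasing a large
  working set, an application can ask the allocator to give unused
  top-of-heap memory back to the system, keeping no extra padding:

    free(big_buffer);
    malloc_trim(0);     // returns 1 if any memory was released

  The call is only a hint; it returns 0 when nothing could be trimmed
  (or when MORECORE_CANNOT_TRIM is defined).
*/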
/*
  ------------------------- malloc_usable_size -------------------------
*/

#if __STD_C
size_t mUSABLe(Void_t* mem)
#else
size_t mUSABLe(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem != 0) {
    p = mem2chunk(mem);
    if (chunk_is_mmapped(p))
      return chunksize(p) - 2*SIZE_SZ;
    else if (inuse(p))
      return chunksize(p) - SIZE_SZ;
  }
  return 0;
}
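/*
  Usage sketch (added for clarity; not part of the original distribution):
  the value reported for a live chunk is at least the number of bytes
  requested, and all of it may safely be used:

    char* p = (char*) malloc(30);
    if (p != 0) {
      size_t n = malloc_usable_size(p);   // n >= 30
      memset(p, 0, n);                    // the whole usable area is writable
      free(p);
    }
*/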
/*
  ------------------------------ mallinfo ------------------------------
*/

struct mallinfo mALLINFo()
{
  mstate av = get_malloc_state();
  struct mallinfo mi;
  int i;
  mbinptr b;
  mchunkptr p;
  INTERNAL_SIZE_T avail;
  INTERNAL_SIZE_T fastavail;
  int nblocks;
  int nfastblocks;

  /* Ensure initialization */
  if (av->top == 0) malloc_consolidate(av);

  check_malloc_state();

  /* Account for top */
  avail = chunksize(av->top);
  nblocks = 1; /* top always exists */

  /* traverse fastbins */
  nfastblocks = 0;
  fastavail = 0;

  for (i = 0; i < NFASTBINS; ++i) {
    for (p = av->fastbins[i]; p != 0; p = p->fd) {
      ++nfastblocks;
      fastavail += chunksize(p);
    }
  }

  avail += fastavail;

  /* traverse regular bins */
  for (i = 1; i < NBINS; ++i) {
    b = bin_at(av, i);
    for (p = last(b); p != b; p = p->bk) {
      ++nblocks;
      avail += chunksize(p);
    }
  }

  mi.smblks = nfastblocks;
  mi.ordblks = nblocks;
  mi.fordblks = avail;
  mi.uordblks = av->sbrked_mem - avail;
  mi.arena = av->sbrked_mem;
  mi.hblks = av->n_mmaps;
  mi.hblkhd = av->mmapped_mem;
  mi.fsmblks = fastavail;
  mi.keepcost = chunksize(av->top);
  mi.usmblks = av->max_total_mem;
  return mi;
}
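/*
  Usage sketch (added for clarity; not part of the original distribution):
  the same fields that malloc_stats prints below can be read
  programmatically by applications:

    struct mallinfo info = mallinfo();
    fprintf(stderr, "in use bytes       = %lu\n",
            (unsigned long)(info.uordblks + info.hblkhd));
    fprintf(stderr, "free bytes in bins = %lu\n",
            (unsigned long) info.fordblks);
*/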
/*
  ------------------------------ malloc_stats ------------------------------
*/

void mSTATs()
{
  struct mallinfo mi = mALLINFo();

#ifdef WIN32
  {
    unsigned long free, reserved, committed;
    vminfo (&free, &reserved, &committed);
    fprintf(stderr, "free bytes       = %10lu\n",
            free);
    fprintf(stderr, "reserved bytes   = %10lu\n",
            reserved);
    fprintf(stderr, "committed bytes  = %10lu\n",
            committed);
  }
#endif

  fprintf(stderr, "max system bytes = %10lu\n",
          (unsigned long)(mi.usmblks));
  fprintf(stderr, "system bytes     = %10lu\n",
          (unsigned long)(mi.arena + mi.hblkhd));
  fprintf(stderr, "in use bytes     = %10lu\n",
          (unsigned long)(mi.uordblks + mi.hblkhd));

#ifdef WIN32
  {
    unsigned long kernel, user;
    if (cpuinfo (TRUE, &kernel, &user)) {
      fprintf(stderr, "kernel ms        = %10lu\n",
              kernel);
      fprintf(stderr, "user ms          = %10lu\n",
              user);
    }
  }
#endif
}
/*
  ------------------------------ mallopt ------------------------------
*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  mstate av = get_malloc_state();
  int res = 1;

  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

  switch(param_number) {
  case M_MXFAST:
    if (value >= 0 && value <= MAX_FAST_SIZE) {
      set_max_fast(av, value);
    }
    else
      res = 0;
    break;

  case M_TRIM_THRESHOLD:
    av->trim_threshold = value;
    break;

  case M_TOP_PAD:
    av->top_pad = value;
    break;

  case M_MMAP_THRESHOLD:
    av->mmap_threshold = value;
    break;

  case M_MMAP_MAX:
    av->n_mmaps_max = value;
    break;

  default:
    res = 0;
    break;
  }
  return res;
}
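/*
  Usage sketch (added for clarity; not part of the original distribution,
  and the values shown are arbitrary examples): the allocator can be tuned
  at runtime through the standard mallopt interface.

    mallopt(M_TRIM_THRESHOLD, 256 * 1024);   // trim top when it exceeds 256K
    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // mmap requests of 1MB and up
    mallopt(M_MMAP_MAX, 0);                  // or disable mmap entirely

  Each call returns 1 on success and 0 for an unrecognized parameter or
  an out-of-range value.
*/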
/*
  -------------------- Alternative MORECORE functions --------------------

  General Requirements for MORECORE.

  The MORECORE function must have the following properties:

  If MORECORE_CONTIGUOUS is false:

    * MORECORE must allocate in multiples of pagesize. It will
      only be called with arguments that are multiples of pagesize.

    * MORECORE(0) must return an address that is at least
      MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)

  else (i.e. If MORECORE_CONTIGUOUS is true):

    * Consecutive calls to MORECORE with positive arguments
      return increasing addresses, indicating that space has been
      contiguously extended.

    * MORECORE need not allocate in multiples of pagesize.
      Calls to MORECORE need not have args of multiples of pagesize.

    * MORECORE need not page-align.

  In either case:

    * MORECORE may allocate more memory than requested. (Or even less,
      but this will generally result in a malloc failure.)

    * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call. This malloc does NOT call MORECORE(0)
      until at least one call with positive arguments is made, so
      the initial value returned is not important.

    * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.

    * MORECORE need not handle negative arguments -- it may instead
      just return MORECORE_FAILURE when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.
  There is some variation across systems about the type of the
  argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
  actually be size_t, because sbrk supports negative args, so it is
  normally the signed type of the same width as size_t (sometimes
  declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much
  matter though. Internally, we use "long" as arguments, which should
  work across all reasonable possibilities.

  Additionally, if MORECORE ever returns failure for a positive
  request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
  system allocator. This is a useful backup strategy for systems with
  holes in address spaces -- in this case sbrk cannot contiguously
  expand the heap, but mmap may be able to map noncontiguous space.

  If you'd like mmap to ALWAYS be used, you can define MORECORE to be
  a function that always returns MORECORE_FAILURE.
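  For example, a minimal sketch (added for clarity; not part of the
  original distribution, and the function name is arbitrary) that refuses
  every request, so that -- assuming HAVE_MMAP is nonzero -- all system
  memory is obtained via mmap:

      void *fail_morecore (int size)
      {
        (void) size;             // ignore every request
        return (void *) MORECORE_FAILURE;
      }

      #define MORECORE fail_morecore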
  If you are using this malloc with something other than sbrk (or its
  emulation) to supply memory regions, you probably want to set
  MORECORE_CONTIGUOUS as false. As an example, here is a custom
  allocator kindly contributed for pre-OSX macOS. It uses virtually
  but not necessarily physically contiguous non-paged memory (locked
  in, present and won't get swapped out). You can use it by
  uncommenting this section, adding some #includes, and setting up the
  appropriate defines above:
      #define MORECORE osMoreCore
      #define MORECORE_CONTIGUOUS 0

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0) {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
        return (void *) MORECORE_FAILURE;
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((unsigned long) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0) {
      // we don't currently support shrink behavior
      return (void *) MORECORE_FAILURE;
    }
    else {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr) {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
/*
  --------------------------------------------------------------

  Emulation of sbrk for win32.
  Donated by J. Walter <Walter@GeNeSys-e.de>.
  For additional information about this code, and malloc on Win32, see
     http://www.genesys-e.de/jwalter/
*/
/* Support for USE_MALLOC_LOCK */
#ifdef USE_MALLOC_LOCK

/* Wait for spin lock */
static int slwait (int *sl) {
    while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0)
        Sleep (0);
    return 0;
}

/* Release spin lock */
static int slrelease (int *sl) {
    InterlockedExchange (sl, 0);
    return 0;
}

#ifdef NEEDED
/* Spin lock for emulation code */
static int g_sl;
#endif

#endif /* USE_MALLOC_LOCK */
/* getpagesize for windows */
static long getpagesize (void) {
    static long g_pagesize = 0;
    if (! g_pagesize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_pagesize = system_info.dwPageSize;
    }
    return g_pagesize;
}
static long getregionsize (void) {
    static long g_regionsize = 0;
    if (! g_regionsize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_regionsize = system_info.dwAllocationGranularity;
    }
    return g_regionsize;
}
/* A region list entry */
typedef struct _region_list_entry {
    void *top_allocated;
    void *top_committed;
    void *top_reserved;
    long reserve_size;
    struct _region_list_entry *previous;
} region_list_entry;

/* Allocate and link a region entry in the region list */
static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
    region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
    if (! next)
        return FALSE;
    next->top_allocated = (char *) base_reserved;
    next->top_committed = (char *) base_reserved;
    next->top_reserved = (char *) base_reserved + reserve_size;
    next->reserve_size = reserve_size;
    next->previous = *last;
    *last = next;
    return TRUE;
}
/* Free and unlink the last region entry from the region list */
static int region_list_remove (region_list_entry **last) {
    region_list_entry *previous = (*last)->previous;
    if (! HeapFree (GetProcessHeap (), sizeof (region_list_entry), *last))
        return FALSE;
    *last = previous;
    return TRUE;
}
#define CEIL(size,to)   (((size)+(to)-1)&~((to)-1))
#define FLOOR(size,to)  ((size)&~((to)-1))
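/*
  For example (illustration added for clarity), with a 4096-byte
  granularity:
    CEIL  (5000, 4096) == 8192   (rounds up to the next multiple)
    FLOOR (5000, 4096) == 4096   (rounds down to the previous multiple)
  Both macros assume "to" is a power of two.
*/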
#define SBRK_SCALE  0
/* #define SBRK_SCALE  1 */
/* #define SBRK_SCALE  2 */
/* #define SBRK_SCALE  4 */
/* sbrk for windows */
static void *sbrk (long size) {
    static long g_pagesize, g_my_pagesize;
    static long g_regionsize, g_my_regionsize;
    static region_list_entry *g_last;
    void *result = (void *) MORECORE_FAILURE;
#ifdef TRACE
    printf ("sbrk %ld\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize) {
        g_pagesize = getpagesize ();
        g_my_pagesize = g_pagesize << SBRK_SCALE;
    }
    if (! g_regionsize) {
        g_regionsize = getregionsize ();
        g_my_regionsize = g_regionsize << SBRK_SCALE;
    }
    if (! g_last) {
        if (! region_list_append (&g_last, 0, 0))
            goto sbrk_exit;
    }
    /* Assert invariants */
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
    /* Allocation requested? */
    if (size > 0) {
        /* Allocation size is the requested size */
        long allocate_size = size;
        /* Compute the size to commit */
        long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
        /* Do we reach the commit limit? */
        if (to_commit > 0) {
            /* Round size to commit */
            long commit_size = CEIL (to_commit, g_my_pagesize);
            /* Compute the size to reserve */
            long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
            /* Do we reach the reserve limit? */
            if (to_reserve > 0) {
                /* Compute the remaining size to commit in the current region */
                long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
                if (remaining_commit_size > 0) {
                    /* Assert preconditions */
                    assert ((unsigned) g_last->top_committed % g_pagesize == 0);
                    assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
                        /* Commit this */
                        void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
                                                             MEM_COMMIT, PAGE_READWRITE);
                        /* Check returned pointer for consistency */
                        if (base_committed != g_last->top_committed)
                            goto sbrk_exit;
                        /* Assert postconditions */
                        assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                        printf ("Commit %p %ld\n", base_committed, remaining_commit_size);
#endif
                        /* Adjust the regions commit top */
                        g_last->top_committed = (char *) base_committed + remaining_commit_size;
                    }
                }
                /* Now we are going to search and reserve. */
                int contiguous = -1;
                int found = FALSE;
                MEMORY_BASIC_INFORMATION memory_info;
                void *base_reserved;
                long reserve_size;
                do {
                    /* Assume contiguous memory */
                    contiguous = TRUE;
                    /* Round size to reserve */
                    reserve_size = CEIL (to_reserve, g_my_regionsize);
                    /* Start with the current region's top */
                    memory_info.BaseAddress = g_last->top_reserved;
                    /* Assert preconditions */
                    assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                    assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                    while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
                        /* Assert postconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
#ifdef TRACE
                        printf ("Query %p %ld %s\n", memory_info.BaseAddress, memory_info.RegionSize,
                                memory_info.State == MEM_FREE ? "FREE":
                                (memory_info.State == MEM_RESERVE ? "RESERVED":
                                 (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
#endif
                        /* Region is free, well aligned and big enough: we are done */
                        if (memory_info.State == MEM_FREE &&
                            (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
                            memory_info.RegionSize >= (unsigned) reserve_size) {
                            found = TRUE;
                            break;
                        }
                        /* From now on we can't get contiguous memory! */
                        contiguous = FALSE;
                        /* Recompute size to reserve */
                        reserve_size = CEIL (allocate_size, g_my_regionsize);
                        memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                    }
                    /* Search failed? */
                    if (! found)
                        goto sbrk_exit;
                    /* Assert preconditions */
                    assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
                    assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                    /* Try to reserve this */
                    base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size,
                                                  MEM_RESERVE, PAGE_NOACCESS);
                    if (! base_reserved) {
                        int rc = GetLastError ();
                        if (rc != ERROR_INVALID_ADDRESS)
                            goto sbrk_exit;
                    }
                    /* A null pointer signals (hopefully) a race condition with another thread. */
                    /* In this case, we try again. */
                } while (! base_reserved);
                /* Check returned pointer for consistency */
                if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
                    goto sbrk_exit;
                /* Assert postconditions */
                assert ((unsigned) base_reserved % g_regionsize == 0);
#ifdef TRACE
                printf ("Reserve %p %ld\n", base_reserved, reserve_size);
#endif
                /* Did we get contiguous memory? */
                if (contiguous) {
                    long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
                    /* Adjust allocation size */
                    allocate_size -= start_size;
                    /* Adjust the regions allocation top */
                    g_last->top_allocated = g_last->top_committed;
                    /* Recompute the size to commit */
                    to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                    /* Round size to commit */
                    commit_size = CEIL (to_commit, g_my_pagesize);
                }
                /* Append the new region to the list */
                if (! region_list_append (&g_last, base_reserved, reserve_size))
                    goto sbrk_exit;
                /* Didn't we get contiguous memory? */
                if (! contiguous) {
                    /* Recompute the size to commit */
                    to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                    /* Round size to commit */
                    commit_size = CEIL (to_commit, g_my_pagesize);
                }
            }
            /* Assert preconditions */
            assert ((unsigned) g_last->top_committed % g_pagesize == 0);
            assert (0 < commit_size && commit_size % g_pagesize == 0); {
                /* Commit this */
                void *base_committed = VirtualAlloc (g_last->top_committed, commit_size,
                                                     MEM_COMMIT, PAGE_READWRITE);
                /* Check returned pointer for consistency */
                if (base_committed != g_last->top_committed)
                    goto sbrk_exit;
                /* Assert postconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                printf ("Commit %p %ld\n", base_committed, commit_size);
#endif
                /* Adjust the regions commit top */
                g_last->top_committed = (char *) base_committed + commit_size;
            }
        }
        /* Adjust the regions allocation top */
        g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
        result = (char *) g_last->top_allocated - size;
    /* Deallocation requested? */
    } else if (size < 0) {
        long deallocate_size = - size;
        /* As long as we have a region to release */
        while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
            /* Get the size to release */
            long release_size = g_last->reserve_size;
            /* Get the base address */
            void *base_reserved = (char *) g_last->top_reserved - release_size;
            /* Assert preconditions */
            assert ((unsigned) base_reserved % g_regionsize == 0);
            assert (0 < release_size && release_size % g_regionsize == 0); {
                /* Release this */
                int rc = VirtualFree (base_reserved, 0,
                                      MEM_RELEASE);
                /* Check returned code for consistency */
                if (! rc)
                    goto sbrk_exit;
#ifdef TRACE
                printf ("Release %p %ld\n", base_reserved, release_size);
#endif
            }
            /* Adjust deallocation size */
            deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
            /* Remove the old region from the list */
            if (! region_list_remove (&g_last))
                goto sbrk_exit;
        } {
            /* Compute the size to decommit */
            long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
            if (to_decommit >= g_my_pagesize) {
                /* Compute the size to decommit */
                long decommit_size = FLOOR (to_decommit, g_my_pagesize);
                /* Compute the base address */
                void *base_committed = (char *) g_last->top_committed - decommit_size;
                /* Assert preconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
                assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
                    /* Decommit this */
                    int rc = VirtualFree ((char *) base_committed, decommit_size,
                                          MEM_DECOMMIT);
                    /* Check returned code for consistency */
                    if (! rc)
                        goto sbrk_exit;
#ifdef TRACE
                    printf ("Decommit %p %ld\n", base_committed, decommit_size);
#endif
                }
                /* Adjust deallocation size and regions commit and allocate top */
                deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
                g_last->top_committed = base_committed;
                g_last->top_allocated = base_committed;
            }
        }
        /* Adjust regions allocate top */
        g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
        /* Check for underflow */
        if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
            g_last->top_allocated > g_last->top_committed) {
            /* Adjust regions allocate top */
            g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
        }
        result = g_last->top_allocated;
    }
    /* Assert invariants */
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);

sbrk_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return result;
}
/* mmap for windows */
static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
    static long g_pagesize;
    static long g_regionsize;
#ifdef TRACE
    printf ("mmap %ld\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Allocate this */
    ptr = VirtualAlloc (ptr, size,
                        MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
    if (! ptr) {
        ptr = (void *) MORECORE_FAILURE;
        goto mmap_exit;
    }
    /* Assert postconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
#ifdef TRACE
    printf ("Commit %p %ld\n", ptr, size);
#endif
mmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return ptr;
}
/* munmap for windows */
static long munmap (void *ptr, long size) {
    static long g_pagesize;
    static long g_regionsize;
    int rc = MUNMAP_FAILURE;
#ifdef TRACE
    printf ("munmap %p %ld\n", ptr, size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Free this */
    if (! VirtualFree (ptr, 0,
                       MEM_RELEASE))
        goto munmap_exit;
    rc = 0;
#ifdef TRACE
    printf ("Release %p %ld\n", ptr, size);
#endif
munmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return rc;
}
static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed) {
    MEMORY_BASIC_INFORMATION memory_info;
    memory_info.BaseAddress = 0;
    *free = *reserved = *committed = 0;
    while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
        switch (memory_info.State) {
        case MEM_FREE:
            *free += memory_info.RegionSize;
            break;
        case MEM_RESERVE:
            *reserved += memory_info.RegionSize;
            break;
        case MEM_COMMIT:
            *committed += memory_info.RegionSize;
            break;
        }
        memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
    }
}
static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user) {
    if (whole) {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetProcessTimes (GetCurrentProcess (),
                                  (FILETIME *) &creation64,
                                  (FILETIME *) &exit64,
                                  (FILETIME *) &kernel64,
                                  (FILETIME *) &user64);
        if (! rc)
            return FALSE;
        *kernel = (unsigned long) (kernel64 / 10000);
        *user = (unsigned long) (user64 / 10000);
        return TRUE;
    } else {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetThreadTimes (GetCurrentThread (),
                                 (FILETIME *) &creation64,
                                 (FILETIME *) &exit64,
                                 (FILETIME *) &kernel64,
                                 (FILETIME *) &user64);
        if (! rc)
            return FALSE;
        *kernel = (unsigned long) (kernel64 / 10000);
        *user = (unsigned long) (user64 / 10000);
        return TRUE;
    }
}
/* ------------------------------------------------------------
History:

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sYSMALLOc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/