/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain. Use, modify, and
  redistribute this code without permission or acknowledgement in any
  way you wish. Send questions, comments, complaints, performance
  data, etc to dl@cs.oswego.edu

* VERSION 2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
          ftp://gee.cs.oswego.edu/pub/misc/malloc.c
        Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O), and link it into another program. All
  of the compile-time options default to reasonable values for use on
  most unix platforms. Compile with -DWIN32 for reasonable defaults
  on Windows. You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.0.h
  You don't really need this .h file unless you call functions not
  defined in your system include files. The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters. If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below.

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator for malloc-intensive programs.

  The main properties of the algorithms are:
  * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
    with ties normally decided via FIFO (i.e. least recently used).
  * For small (<= 64 bytes by default) requests, it is a caching
    allocator that maintains pools of quickly recycled chunks.
  * In between, and for combinations of large and small requests, it does
    the best it can trying to meet both goals at once.
  * For very large requests (>= 128KB by default), it relies on system
    memory mapping facilities, if supported.

  For a longer but slightly out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

  By default, you may already be using a C library containing a malloc
  that is based on some version of this malloc (for example, in
  Linux). You might still want to use the one in this file in order to
  customize settings or to avoid overheads associated with library
  versions.

* Contents, described in more detail in "description of public routines" below.

  Standard (ANSI/SVID/...) functions:
    malloc(size_t n);
    calloc(size_t n_elements, size_t element_size);
    free(Void_t* p);
    realloc(Void_t* p, size_t n);
    memalign(size_t alignment, size_t n);
    valloc(size_t n);
    mallinfo()
    mallopt(int parameter_number, int parameter_value)

  Additional functions:
    independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
    independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
    pvalloc(size_t n);
    cfree(Void_t* p);
    malloc_trim(size_t pad);
    malloc_usable_size(Void_t* p);
    malloc_stats();

* Vital statistics:

  Supported pointer representation:       4 or 8 bytes
  Supported size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.
       You can adjust this by defining INTERNAL_SIZE_T.

  Alignment:                              2 * sizeof(size_t) (default)
       (i.e., 8-byte alignment with 4-byte size_t). This suffices for
       nearly all current machines and C compilers. However, you can
       define MALLOC_ALIGNMENT to be wider than this if necessary.

  Minimum overhead per allocated chunk:   4 or 8 bytes
       Each malloced chunk has a hidden word of overhead holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
       ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field and 8 (16) bytes for
       free list pointers. Thus, the minimum allocatable size is
       16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

       The maximum overhead wastage (i.e., the number of extra bytes
       allocated beyond those requested by malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       2 * sizeof(size_t) bytes plus the remainder from a system page
       (the minimal mmap unit); typically 4096 or 8192 bytes.

  Maximum allocated size:  4-byte size_t: 2^32 minus about two pages
                           8-byte size_t: 2^64 minus about two pages

       It is assumed that (possibly signed) size_t values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. The ISO C standard says that it must be
       unsigned, but a few systems are known not to adhere to this.
       Additionally, even when size_t is unsigned, sbrk (which is by
       default used to obtain memory from the system) accepts signed
       arguments, and may not be able to handle size_t-wide arguments
       with a negative sign bit. Generally, values that would appear
       as negative after accounting for overhead and alignment are
       supported only via mmap(), which does not have this limitation.

       Requests for sizes outside the allowed range will perform an
       optional failure action and then return null. (Requests may
       also fail because a system is out of memory.)

  Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined

       When USE_MALLOC_LOCK is defined, wrappers are created to
       surround every public call with either a pthread mutex or
       a win32 spinlock (depending on WIN32). This is not
       especially fast, and can be a major bottleneck.
       It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions. If you are using malloc in a concurrent program,
       you would be far better off obtaining ptmalloc, which is
       derived from a version of this malloc, and is well-tuned for
       concurrent programs. (See http://www.malloc.de)

  Compliance: I believe it is compliant with the 1997 Single Unix Specification
       (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People also report using it in stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C. It is not
    at all modular. (Sorry!) It uses a lot of macros. To be at all
    usable, this code should be compiled using an optimizing compiler
    (for example gcc -O3) that can simplify expressions and control
    paths. (FAQ: some macros import variables as arguments rather than
    declare locals because people reported that some debuggers
    otherwise get confused.)

    OPTION                     DEFAULT VALUE

    Compilation Environment options:

    __STD_C                    derived from C compiler defines
    WIN32                      NOT defined
    HAVE_MEMCPY                defined
    USE_MEMCPY                 1 if HAVE_MEMCPY is defined
    HAVE_MMAP                  defined as 1
    MMAP_CLEARS                1
    HAVE_MREMAP                0 unless linux defined
    malloc_getpagesize         derived from system #includes, or 4096 if not
    HAVE_USR_INCLUDE_MALLOC_H  NOT defined
    LACKS_UNISTD_H             NOT defined unless WIN32
    LACKS_SYS_PARAM_H          NOT defined unless WIN32
    LACKS_SYS_MMAN_H           NOT defined unless WIN32

    Changing default word sizes:

    INTERNAL_SIZE_T            size_t
    MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)

    Configuration and functionality options:

    USE_DL_PREFIX              NOT defined
    USE_PUBLIC_MALLOC_WRAPPERS NOT defined
    USE_MALLOC_LOCK            NOT defined
    DEBUG                      NOT defined
    REALLOC_ZERO_BYTES_FREES   NOT defined
    MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
    TRIM_FASTBINS              0

    Options for customizing MORECORE:

    MORECORE                   sbrk
    MORECORE_CONTIGUOUS        1
    MORECORE_CANNOT_TRIM       NOT defined
    MMAP_AS_MORECORE_SIZE      (1024 * 1024)

    Tuning options that are also dynamically changeable via mallopt:

    DEFAULT_MXFAST             64
    DEFAULT_TRIM_THRESHOLD     128 * 1024
    DEFAULT_TOP_PAD            0
    DEFAULT_MMAP_THRESHOLD     128 * 1024
    DEFAULT_MMAP_MAX           65536

    There are several other #defined constants and macros that you
    probably don't want to touch unless you are extending or adapting malloc.
*/

/*
  WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.
*/

/* #define WIN32 */

#ifdef WIN32

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

/* Win32 doesn't supply or need the following headers */
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H

/* Use the supplied emulation of sbrk */
#define MORECORE sbrk
#define MORECORE_CONTIGUOUS 0
#define MORECORE_FAILURE    ((void*)(-1))

/* Use the supplied emulation of mmap and munmap */
#define HAVE_MMAP 1
#define MUNMAP_FAILURE  (-1)
#define MMAP_CLEARS 1

/* These values don't really matter in windows mmap emulation */
#define MAP_PRIVATE 1
#define MAP_ANONYMOUS 2
#define PROT_READ 1
#define PROT_WRITE 2

/* Emulation functions defined at the end of this file */

/* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
#ifdef USE_MALLOC_LOCK
static int slwait(int *sl);
static int slrelease(int *sl);
#endif

static long getpagesize(void);
static long getregionsize(void);
static void *sbrk(long size);
#if HAVE_MMAP
static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
static long munmap(void *ptr, long size);
#endif

static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed);
static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user);

#endif

/*
  __STD_C should be nonzero if using ANSI-standard C compiler, a C++
  compiler, or a C compiler sufficiently close to ANSI to get away
  with it.
*/

#ifndef __STD_C
#if defined(__STDC__) || defined(__cplusplus)
#define __STD_C     1
#else
#define __STD_C     0
#endif
#endif /*__STD_C*/


/*
  Void_t* is the pointer type that malloc should say it returns
*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

/* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */

/* #define LACKS_UNISTD_H */

#ifndef LACKS_UNISTD_H
#include <unistd.h>
#endif

/* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */

/* #define LACKS_SYS_PARAM_H */


#include <stdio.h>    /* needed for malloc_stats */
#include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */


/*
  Debugging:

  Because freed chunks may be overwritten with bookkeeping fields, this
  malloc will often die when freed memory is overwritten by user
  programs. This can be very effective (albeit in an annoying way)
  in helping track down dangling pointers.

  If you compile with -DDEBUG, a number of assertion checks are
  enabled that will catch more memory errors. You probably won't be
  able to make much sense of the actual assertion errors, but they
  should help you locate incorrectly overwritten memory. The
  checking is fairly extensive, and will slow down execution
  noticeably. Calling malloc_stats or mallinfo with DEBUG set will
  attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries. (By nature, mmapped regions
  cannot be checked very much automatically.)

  Setting DEBUG may also be helpful if you are trying to modify
  this code. The assertions in the check routines spell out in more
  detail the assumptions and invariants underlying the algorithms.

  Setting DEBUG does NOT provide an automated mechanism for checking
  that all accesses to malloced memory stay within their
  bounds. However, there are several add-ons and adaptations of this
  or other mallocs available that do this.
*/

#if DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes.

  The default version is the same as size_t.

  While not strictly necessary, it is best to define this as an
  unsigned type, even if size_t is a signed type. This may avoid some
  artificial size limitations on some systems.

  On a 64-bit machine, you may be able to reduce malloc overhead by
  defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
  expense of not being able to handle more than 2^32 bytes of malloced
  space. If this limitation is acceptable, you are encouraged to set
  this unless you are on a platform requiring 16-byte alignments. In
  this case the alignment requirements turn out to negate any
  potential advantages of decreasing size_t word size.

  Implementors: Beware of the possible combinations of:
     - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
       and might be the same width as int or as long
     - size_t might have different width and signedness than INTERNAL_SIZE_T
     - int and long might be 32 or 64 bits, and might be the same width
  To deal with this, most comparisons and difference computations
  among INTERNAL_SIZE_Ts should cast them to unsigned long, being
  aware of the fact that casting an unsigned int to a wider long does
  not sign-extend. (This also makes checking for negative numbers
  awkward.) Some of these casts result in harmless compiler warnings
  on some systems.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/* The corresponding word size */
#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))


/*
  MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
  It must be a power of two at least 2 * SIZE_SZ, even on machines
  for which smaller alignments would suffice. It may be defined as
  larger than this though. Note however that code and data structures
  are optimized for the case of 8-byte alignment.
*/


#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT       (2 * SIZE_SZ)
#endif

/* The corresponding bit mask value */
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)



/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/

/* #define REALLOC_ZERO_BYTES_FREES */

/*
  TRIM_FASTBINS controls whether free() of a very small chunk can
  immediately lead to trimming. Setting to true (1) can reduce memory
  footprint, but will almost always slow down programs that use a lot
  of small chunks.

  Define this only if you are willing to give up some speed to more
  aggressively reduce system-level memory footprint when releasing
  memory in programs that use many small chunks. You can get
  essentially the same effect by setting MXFAST to 0, but this can
  lead to even greater slowdowns in programs using many small chunks.
  TRIM_FASTBINS is an in-between compile-time option that disables
  only those chunks bordering topmost memory from being placed in
  fastbins.
*/

#ifndef TRIM_FASTBINS
#define TRIM_FASTBINS  0
#endif


/*
  USE_DL_PREFIX will prefix all public routines with the string 'dl'.
  This is necessary when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
*/

/* #define USE_DL_PREFIX */
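
/*
  For example, with USE_DL_PREFIX defined, this allocator can coexist
  with the system malloc, so long as each pointer is released by the
  allocator that produced it (dlmalloc/dlfree are the prefixed names
  produced by the mappings further below):

    void* big = dlmalloc(1024 * 1024);   // from this allocator
    char* s   = (char*) malloc(16);      // from the system allocator
    dlfree(big);                         // must pair with dlmalloc
    free(s);                             // must pair with system free
*/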


/*
  USE_MALLOC_LOCK causes wrapper functions to surround each
  callable routine with pthread mutex lock/unlock.

  USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
*/


/* #define USE_MALLOC_LOCK */


/*
  If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
  actually a wrapper function that first calls MALLOC_PREACTION, then
  calls the internal routine, and follows it with
  MALLOC_POSTACTION. This is needed for locking, but you can also use
  this, without USE_MALLOC_LOCK, for purposes of interception,
  instrumentation, etc. It is a sad fact that using wrappers often
  noticeably degrades performance of malloc-intensive programs.
*/

#ifdef USE_MALLOC_LOCK
#define USE_PUBLIC_MALLOC_WRAPPERS
#else
/* #define USE_PUBLIC_MALLOC_WRAPPERS */
#endif
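
/*
  The wrapper shape this implies looks roughly like the following
  sketch (the actual definitions appear later in this file; here
  MALLOC_PREACTION is assumed to evaluate to 0 on success):

    Void_t* public_mALLOc(size_t bytes) {
      Void_t* m;
      if (MALLOC_PREACTION != 0)     // e.g., acquire lock; on failure, null
        return 0;
      m = mALLOc(bytes);             // the internal (mangled-name) routine
      if (MALLOC_POSTACTION != 0) {  // e.g., release lock
      }
      return m;
    }
*/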


/*
  Two-phase name translation.
  All of the actual routines are given mangled names.
  When wrappers are used, they become the public callable versions.
  When DL_PREFIX is used, the callable names are prefixed.
*/

#ifndef USE_PUBLIC_MALLOC_WRAPPERS
#define cALLOc      public_cALLOc
#define fREe        public_fREe
#define cFREe       public_cFREe
#define mALLOc      public_mALLOc
#define mEMALIGn    public_mEMALIGn
#define rEALLOc     public_rEALLOc
#define vALLOc      public_vALLOc
#define pVALLOc     public_pVALLOc
#define mALLINFo    public_mALLINFo
#define mALLOPt     public_mALLOPt
#define mTRIm       public_mTRIm
#define mSTATs      public_mSTATs
#define mUSABLe     public_mUSABLe
#define iCALLOc     public_iCALLOc
#define iCOMALLOc   public_iCOMALLOc
#endif

#ifdef USE_DL_PREFIX
#define public_cALLOc    dlcalloc
#define public_fREe      dlfree
#define public_cFREe     dlcfree
#define public_mALLOc    dlmalloc
#define public_mEMALIGn  dlmemalign
#define public_rEALLOc   dlrealloc
#define public_vALLOc    dlvalloc
#define public_pVALLOc   dlpvalloc
#define public_mALLINFo  dlmallinfo
#define public_mALLOPt   dlmallopt
#define public_mTRIm     dlmalloc_trim
#define public_mSTATs    dlmalloc_stats
#define public_mUSABLe   dlmalloc_usable_size
#define public_iCALLOc   dlindependent_calloc
#define public_iCOMALLOc dlindependent_comalloc
#else /* USE_DL_PREFIX */
#define public_cALLOc    calloc
#define public_fREe      free
#define public_cFREe     cfree
#define public_mALLOc    malloc
#define public_mEMALIGn  memalign
#define public_rEALLOc   realloc
#define public_vALLOc    valloc
#define public_pVALLOc   pvalloc
#define public_mALLINFo  mallinfo
#define public_mALLOPt   mallopt
#define public_mTRIm     malloc_trim
#define public_mSTATs    malloc_stats
#define public_mUSABLe   malloc_usable_size
#define public_iCALLOc   independent_calloc
#define public_iCOMALLOc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined below.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are faster than libc versions on some systems.

  Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
  (of <= 36 bytes) are manually unrolled in realloc and calloc.
*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif


#if (__STD_C || defined(HAVE_MEMCPY))

#ifdef WIN32
/* On Win32 memset and memcpy are already declared in windows.h */
#else
#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

/*
  MALLOC_FAILURE_ACTION is the action to take before "return 0" when
  malloc fails to be able to return memory, either because memory is
  exhausted or because of illegal arguments.

  By default, sets errno if running on STD_C platform, else does nothing.
*/

#ifndef MALLOC_FAILURE_ACTION
#if __STD_C
#define MALLOC_FAILURE_ACTION \
   errno = ENOMEM;

#else
#define MALLOC_FAILURE_ACTION
#endif
#endif
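
/*
  As an illustrative sketch, a custom failure action that logs before
  setting errno could be supplied ahead of the default above (the
  message text here is only an example):

    #define MALLOC_FAILURE_ACTION \
      do { \
        fprintf(stderr, "malloc: request failed\n"); \
        errno = ENOMEM; \
      } while (0);
*/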

/*
  MORECORE-related declarations. By default, rely on sbrk
*/


#ifdef LACKS_UNISTD_H
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__) && !defined(WIN32)
#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif
#endif
#endif

/*
  MORECORE is the name of the routine to call to obtain more memory
  from the system. See below for general guidance on writing
  alternative MORECORE functions, as well as a version for WIN32 and a
  sample version for pre-OSX macos.
*/

#ifndef MORECORE
#define MORECORE sbrk
#endif
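
/*
  As an illustrative sketch of an alternative (the name my_morecore
  and the arena size are assumptions, not part of this distribution),
  a stand-alone system without sbrk could carve memory from a static
  arena with sbrk-like semantics and plug it in via -DMORECORE=my_morecore:

    static char   my_arena[2 * 1024 * 1024];  // all memory ever offered
    static size_t my_brk = 0;                 // current "break" offset

    void* my_morecore(ptrdiff_t increment) {
      size_t old = my_brk;
      if (increment < 0 && (size_t)(-increment) > my_brk)
        return (void*)MORECORE_FAILURE;       // cannot shrink below start
      if (increment > 0 && sizeof(my_arena) - my_brk < (size_t)increment)
        return (void*)MORECORE_FAILURE;       // arena exhausted
      my_brk += increment;
      return my_arena + old;                  // like sbrk: previous break
    }
*/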

/*
  MORECORE_FAILURE is the value returned upon failure of MORECORE
  as well as mmap. Since it cannot be an otherwise valid memory address,
  and must reflect values of standard sys calls, you probably ought not
  try to redefine it.
*/

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE (-1)
#endif

/*
  If MORECORE_CONTIGUOUS is true, take advantage of the fact that
  consecutive calls to MORECORE with positive arguments always return
  contiguous increasing addresses. This is true of unix sbrk. Even
  if not defined, when regions happen to be contiguous, malloc will
  permit allocations spanning regions obtained from different
  calls. But defining this when applicable enables some stronger
  consistency checks and space efficiencies.
*/

#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif

/*
  Define MORECORE_CANNOT_TRIM if your version of MORECORE
  cannot release space back to the system when given negative
  arguments. This is generally necessary only if you are using
  a hand-crafted MORECORE function that cannot handle negative arguments.
*/

/* #define MORECORE_CANNOT_TRIM */


/*
  Define HAVE_MMAP as true to optionally make malloc() use mmap() to
  allocate very large blocks. These will be returned to the
  operating system immediately after a free(). Also, if mmap
  is available, it is used as a backup strategy in cases where
  MORECORE fails to provide space from the system.

  This malloc is best tuned to work with mmap for large requests.
  If you do not have mmap, operations involving very large chunks (1MB
  or so) may be slower than you'd like.
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1

/*
  Standard unix mmap using /dev/zero clears memory so calloc doesn't
  need to.
*/

#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif

#else /* no mmap */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 0
#endif
#endif


/*
   MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
   sbrk fails, and mmap is used as a backup (which is done only if
   HAVE_MMAP). The value must be a multiple of page size. This
   backup strategy generally applies only when systems have "holes" in
   address space, so sbrk cannot perform contiguous expansion, but
   there is still space available on the system. On systems for which
   this is known to be useful (i.e. most linux kernels), this occurs
   only when programs allocate huge amounts of memory. Between this,
   and the fact that mmap regions tend to be limited, the size should
   be large, to avoid too many mmap calls and thus avoid running out
   of kernel resources.
*/

#ifndef MMAP_AS_MORECORE_SIZE
#define MMAP_AS_MORECORE_SIZE (1024 * 1024)
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks. This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif

#endif /* HAVE_MREMAP */


/*
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units. Note that this value is
  cached during initialization into a field of malloc_state. So even
  if malloc_getpagesize is a function, it is only called once.

  The following mechanics for getpagesize were adapted from bsd/gnu
  getpagesize.h. If none of the system-probes here apply, a value of
  4096 is used, which should be OK: If they don't apply, then using
  the actual value probably doesn't impact performance.
*/


#ifndef malloc_getpagesize

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif

#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize (4096)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif

/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any SVID/XPG compliant system that has
  a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
  install such a thing yourself, cut out the preliminary declarations
  as described above and below and save them in a malloc.h file. But
  there's no compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
  bunch of fields that are not even meaningful in this version of
  malloc. These fields are instead filled by mallinfo() with
  other numbers that might be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below. These must be precisely the same for
  mallinfo() to work. The original SVID version of this struct,
  defined on most systems with mallinfo, declares all fields as
  ints. But some others define as unsigned long. If your system
  defines the fields using a type of different width than listed here,
  you must #include your system version and #define
  HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* non-mmapped space allocated from system */
  int ordblks;  /* number of free chunks */
  int smblks;   /* number of fastbin blocks */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* space in mmapped regions */
  int usmblks;  /* maximum total allocated space */
  int fsmblks;  /* space available in freed fastbin blocks */
  int uordblks; /* total allocated space */
  int fordblks; /* total free space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/*
  SVID/XPG defines four standard parameter numbers for mallopt,
  normally defined in malloc.h. Only one of these (M_MXFAST) is used
  in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
  so setting them has no effect. But this malloc also supports other
  options in mallopt described below.
*/
#endif


/* ---------- description of public routines ------------ */

/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or null
  if no space is available. Additionally, on failure, errno is
  set to ENOMEM on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
  systems.) On most systems, size_t is an unsigned type, so calls
  with negative arguments are interpreted as requests for huge amounts
  of space, which will often fail. The maximum supported value of n
  differs across systems, but is in all cases less than the maximum
  representable value of a size_t.
*/
#if __STD_C
Void_t*  public_mALLOc(size_t);
#else
Void_t*  public_mALLOc();
#endif
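
/*
  For example (die() here stands for any application error handler, as
  in the examples further below):

    char* buf = (char*) malloc(4096);
    if (buf == 0) die();   // errno will be ENOMEM on ANSI C systems
*/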

/*
  free(Void_t* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. It can have arbitrary (i.e., bad!)
  effects if p has already been freed.

  Unless disabled (using mallopt), freeing very large spaces will,
  when possible, automatically trigger operations that give
  back unused memory to the system, thus reducing program footprint.
*/
#if __STD_C
void     public_fREe(Void_t*);
#else
void     public_fREe();
#endif

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
#if __STD_C
Void_t*  public_cALLOc(size_t, size_t);
#else
Void_t*  public_cALLOc();
#endif

/*
  realloc(Void_t* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p when possible, otherwise it employs the
  equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible. Unless the #define
  REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
  zero (re)allocates a minimum-sized chunk.

  Large chunks that were internally obtained via mmap will always
  be reallocated using malloc-copy-free sequences unless
  the system supports MREMAP (currently only linux).

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
#if __STD_C
Void_t*  public_rEALLOc(Void_t*, size_t);
#else
Void_t*  public_rEALLOc();
#endif
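
/*
  Because p is not freed when realloc fails, the usual safe idiom for
  growing a buffer assigns through a temporary:

    Void_t* tmp = realloc(p, newsize);
    if (tmp == 0) {
      free(p);    // realloc did not free p; release it ourselves
      die();
    }
    p = tmp;
*/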

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
#if __STD_C
Void_t*  public_mEMALIGn(size_t, size_t);
#else
Void_t*  public_mEMALIGn();
#endif
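
/*
  For example, to obtain a buffer aligned to a 64-byte boundary (say,
  for a cache-line-sized structure; the sizes are only illustrative):

    char* line = (char*) memalign(64, 1024);
    if (line == 0) die();
    assert(((size_t)line & 63) == 0);   // 64-byte aligned
*/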

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
#if __STD_C
Void_t*  public_vALLOc(size_t);
#else
Void_t*  public_vALLOc();
#endif



/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair. mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h. Only one of these (M_MXFAST) is used
  in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
  so setting them has no effect. But this malloc also supports four
  other options in mallopt. See below for details. Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #   default    allowed param values
  M_MXFAST          1         64         0-80  (0 disables fastbins)
  M_TRIM_THRESHOLD -1         128*1024   any   (-1U disables trimming)
  M_TOP_PAD        -2         0          any
  M_MMAP_THRESHOLD -3         128*1024   any   (or 0 if no MMAP support)
  M_MMAP_MAX       -4         65536      any   (0 disables use of mmap)
*/
#if __STD_C
int      public_mALLOPt(int, int);
#else
int      public_mALLOPt();
#endif
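
/*
  For example, a long-lived program that wants to avoid mmap and trim
  less eagerly might do (the values are only illustrative):

    mallopt(M_MMAP_MAX, 0);                 // never service requests via mmap
    mallopt(M_TRIM_THRESHOLD, 256 * 1024);  // keep up to 256K of slack
    mallopt(M_MXFAST, 32);                  // shrink the fastbin range
*/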


/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    the number of fastbin blocks (i.e., small chunks that
               have been freed but not yet reused or consolidated)
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
               than current total if trimming has occurred.
  fsmblks:   total bytes held in fastbin blocks
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
#if __STD_C
struct mallinfo public_mALLINFo(void);
#else
struct mallinfo public_mALLINFo();
#endif
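
/*
  For example, to print a quick usage summary:

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "in use: %d, free: %d, mmapped: %d bytes\n",
            mi.uordblks, mi.fordblks, mi.hblkhd);
*/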

/*
  independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed. If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements. (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools. It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

    struct Node { int item; struct Node* next; };

    struct Node* build_list() {
      struct Node** pool;
      struct Node* first;
      int i;
      int n = read_number_of_nodes_needed();
      if (n <= 0) return 0;
      pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
      if (pool == 0) die();
      // organize into a linked list...
      first = pool[0];
      for (i = 0; i < n-1; ++i)
        pool[i]->next = pool[i+1];
      free(pool);     // Can now free the array (or not, if it is needed later)
      return first;
    }
*/
#if __STD_C
Void_t** public_iCALLOc(size_t, size_t, Void_t**);
#else
Void_t** public_iCALLOc();
#endif

/*
  independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array. It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed. If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time. For example:

    struct Head { ... };
    struct Foot { ... };

    void send_message(char* msg) {
      int msglen = strlen(msg);
      size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
      void* chunks[3];
      if (independent_comalloc(3, sizes, chunks) == 0)
        die();
      struct Head* head = (struct Head*)(chunks[0]);
      char* body = (char*)(chunks[1]);
      struct Foot* foot = (struct Foot*)(chunks[2]);
      // ...
    }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
#if __STD_C
Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
#else
Void_t** public_iCOMALLOc();
#endif


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
#if __STD_C
Void_t*  public_pVALLOc(size_t);
#else
Void_t*  public_pVALLOc();
#endif

/*
  cfree(Void_t* p);
  Equivalent to free(p).

  cfree is needed/defined on some systems that pair it with calloc,
  for odd historical reasons (such as: cfree is used in example
  code in the first edition of K&R).
*/
#if __STD_C
void     public_cFREe(Void_t*);
#else
void     public_cFREe();
#endif

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool. You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program. However, it cannot guarantee to reduce memory. Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less). Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  malloc_trim returns 1 if it actually released any memory, else 0.
  On systems that do not support "negative sbrks", it will always
  return 0.
*/
#if __STD_C
int      public_mTRIm(size_t);
#else
int      public_mTRIm();
#endif
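
/*
  For example, after a phase that frees a large working set (table, n
  and the 64K pad below are only illustrative):

    for (i = 0; i < n; ++i)
      free(table[i]);
    if (malloc_trim(64 * 1024))   // keep 64K of slack for future requests
      fprintf(stderr, "returned unused memory to the system\n");
*/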

/*
  malloc_usable_size(Void_t* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

    p = malloc(n);
    assert(malloc_usable_size(p) >= 256);

*/
#if __STD_C
size_t   public_mUSABLe(Void_t*);
#else
size_t   public_mUSABLe();
#endif

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
#if __STD_C
void     public_mSTATs(void);
#else
void     public_mSTATs();
#endif

/* mallopt tuning options */

/*
  M_MXFAST is the maximum request size used for "fastbins", special bins
  that hold returned chunks without consolidating their spaces. This
  enables future requests for chunks of the same size to be handled
  very quickly, but can increase fragmentation, and thus increase the
  overall memory footprint of a program.

  This malloc manages fastbins very conservatively yet still
  efficiently, so fragmentation is rarely a problem for values less
  than or equal to the default. The maximum supported value of MXFAST
  is 80. You wouldn't want it any higher than this anyway. Fastbins
  are designed especially for use with many small structs, objects or
  strings -- the default handles structs/objects/arrays with sizes up
  to eight 4-byte fields, or small strings representing words, tokens,
  etc. Using fastbins for larger objects normally worsens
  fragmentation without improving speed.

  M_MXFAST is set in REQUEST size units. It is internally used in
  chunksize units, which adds padding and alignment. You can reduce
  M_MXFAST to 0 to disable all use of fastbins. This causes the malloc
  algorithm to be a closer approximation of fifo-best-fit in all cases,
  not just for larger requests, but will generally cause it to be
  slower.
*/


/* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
#ifndef M_MXFAST
#define M_MXFAST            1
#endif

#ifndef DEFAULT_MXFAST
#define DEFAULT_MXFAST     64
#endif
1286 | ||
1287 | ||
1288 | /* | |
1289 | M_TRIM_THRESHOLD is the maximum amount of unused top-most memory | |
1290 | to keep before releasing via malloc_trim in free(). | |
1291 | ||
1292 | Automatic trimming is mainly useful in long-lived programs. | |
1293 | Because trimming via sbrk can be slow on some systems, and can | |
1294 | sometimes be wasteful (in cases where programs immediately | |
1295 | afterward allocate more large chunks) the value should be high | |
1296 | enough so that your overall system performance would improve by | |
1297 | releasing this much memory. | |
1298 | ||
1299 | The trim threshold and the mmap control parameters (see below) | |
1300 | can be traded off with one another. Trimming and mmapping are | |
1301 | two different ways of releasing unused memory back to the | |
1302 | system. Between these two, it is often possible to keep | |
1303 | system-level demands of a long-lived program down to a bare | |
1304 | minimum. For example, in one test suite of sessions measuring | |
1305 | the XF86 X server on Linux, using a trim threshold of 128K and a | |
1306 | mmap threshold of 192K led to near-minimal long term resource | |
1307 | consumption. | |
1308 | ||
1309 | If you are using this malloc in a long-lived program, it should | |
1310 | pay to experiment with these values. As a rough guide, you | |
1311 | might set to a value close to the average size of a process | |
1312 | (program) running on your system. Releasing this much memory | |
1313 | would allow such a process to run in memory. Generally, it's | |
1314 | worth it to tune for trimming rather tham memory mapping when a | |
1315 | program undergoes phases where several large chunks are | |
1316 | allocated and released in ways that can reuse each other's | |
1317 | storage, perhaps mixed with phases where there are no such | |
1318 | chunks at all. And in well-behaved long-lived programs, | |
1319 | controlling release of large blocks via trimming versus mapping | |
1320 | is usually faster. | |
1321 | ||
1322 | However, in most programs, these parameters serve mainly as | |
1323 | protection against the system-level effects of carrying around | |
1324 | massive amounts of unneeded memory. Since frequent calls to | |
1325 | sbrk, mmap, and munmap otherwise degrade performance, the default | |
1326 | parameters are set to relatively high values that serve only as | |
1327 | safeguards. | |
1328 | ||
1329 | The trim value It must be greater than page size to have any useful | |
1330 | effect. To disable trimming completely, you can set to | |
1331 | (unsigned long)(-1) | |
1332 | ||
1333 | Trim settings interact with fastbin (MXFAST) settings: Unless | |
1334 | TRIM_FASTBINS is defined, automatic trimming never takes place upon | |
1335 | freeing a chunk with size less than or equal to MXFAST. Trimming is | |
1336 | instead delayed until subsequent freeing of larger chunks. However, | |
1337 | you can still force an attempted trim by calling malloc_trim. | |
1338 | ||
1339 | Also, trimming is not generally possible in cases where | |
1340 | the main arena is obtained via mmap. | |
1341 | ||
1342 | Note that the trick some people use of mallocing a huge space and | |
1343 | then freeing it at program startup, in an attempt to reserve system | |
1344 | memory, doesn't have the intended effect under automatic trimming, | |
1345 | since that memory will immediately be returned to the system. | |
1346 | */ | |
1347 | ||
1348 | #define M_TRIM_THRESHOLD -1 | |
1349 | ||
1350 | #ifndef DEFAULT_TRIM_THRESHOLD | |
1351 | #define DEFAULT_TRIM_THRESHOLD (128 * 1024) | |
1352 | #endif | |
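/*
  Illustrative usage sketch: adjusting the trim threshold via mallopt,
  per the discussion above.

    mallopt(M_TRIM_THRESHOLD, 128 * 1024); /* trim when top exceeds 128K */
    mallopt(M_TRIM_THRESHOLD, -1);         /* disable automatic trimming */
*/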
1353 | ||
1354 | /* | |
1355 | M_TOP_PAD is the amount of extra `padding' space to allocate or | |
1356 | retain whenever sbrk is called. It is used in two ways internally: | |
1357 | ||
1358 | * When sbrk is called to extend the top of the arena to satisfy | |
1359 | a new malloc request, this much padding is added to the sbrk | |
1360 | request. | |
1361 | ||
1362 | * When malloc_trim is called automatically from free(), | |
1363 | it is used as the `pad' argument. | |
1364 | ||
1365 | In both cases, the actual amount of padding is rounded | |
1366 | so that the end of the arena is always a system page boundary. | |
1367 | ||
1368 | The main reason for using padding is to avoid calling sbrk so | |
1369 | often. Having even a small pad greatly reduces the likelihood | |
1370 | that nearly every malloc request during program start-up (or | |
1371 | after trimming) will invoke sbrk, which needlessly wastes | |
1372 | time. | |
1373 | ||
1374 | Automatic rounding-up to page-size units is normally sufficient | |
1375 | to avoid measurable overhead, so the default is 0. However, in | |
1376 | systems where sbrk is relatively slow, it can pay to increase | |
1377 | this value, at the expense of carrying around more memory than | |
1378 | the program needs. | |
1379 | */ | |
1380 | ||
1381 | #define M_TOP_PAD -2 | |
1382 | ||
1383 | #ifndef DEFAULT_TOP_PAD | |
1384 | #define DEFAULT_TOP_PAD (0) | |
1385 | #endif | |
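/*
  Illustrative usage sketch: on a system with a slow sbrk, a program
  might request extra padding per extension, trading memory for fewer
  system calls.

    mallopt(M_TOP_PAD, 64 * 1024);  /* grow top by at least 64K per sbrk */
*/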
1386 | ||
1387 | /* | |
1388 | M_MMAP_THRESHOLD is the request size threshold for using mmap() | |
1389 | to service a request. Requests of at least this size that cannot | |
1390 | be allocated using already-existing space will be serviced via mmap. | |
1391 | (If enough normal freed space already exists it is used instead.) | |
1392 | ||
1393 | Using mmap segregates relatively large chunks of memory so that | |
1394 | they can be individually obtained and released from the host | |
1395 | system. A request serviced through mmap is never reused by any | |
1396 | other request (at least not directly; the system may just so | |
1397 | happen to remap successive requests to the same locations). | |
1398 | ||
1399 | Segregating space in this way has the benefits that: | |
1400 | ||
1401 | 1. Mmapped space can ALWAYS be individually released back | |
1402 | to the system, which helps keep the system level memory | |
1403 | demands of a long-lived program low. | |
1404 | 2. Mapped memory can never become `locked' between | |
1405 | other chunks, as can happen with normally allocated chunks, which | |
1406 | means that even trimming via malloc_trim would not release them. | |
1407 | 3. On some systems with "holes" in address spaces, mmap can obtain | |
1408 | memory that sbrk cannot. | |
1409 | ||
1410 | However, it has the disadvantages that: | |
1411 | ||
1412 | 1. The space cannot be reclaimed, consolidated, and then | |
1413 | used to service later requests, as happens with normal chunks. | |
1414 | 2. It can lead to more wastage because of mmap page alignment | |
1415 | requirements. | |
1416 | 3. It causes malloc performance to be more dependent on host | |
1417 | system memory management support routines which may vary in | |
1418 | implementation quality and may impose arbitrary | |
1419 | limitations. Generally, servicing a request via normal | |
1420 | malloc steps is faster than going through a system's mmap. | |
1421 | ||
1422 | The advantages of mmap nearly always outweigh disadvantages for | |
1423 | "large" chunks, but the value of "large" varies across systems. The | |
1424 | default is an empirically derived value that works well in most | |
1425 | systems. | |
1426 | */ | |
1427 | ||
1428 | #define M_MMAP_THRESHOLD -3 | |
1429 | ||
1430 | #ifndef DEFAULT_MMAP_THRESHOLD | |
1431 | #define DEFAULT_MMAP_THRESHOLD (128 * 1024) | |
1432 | #endif | |
1433 | ||
1434 | /* | |
1435 | M_MMAP_MAX is the maximum number of requests to simultaneously | |
1436 | service using mmap. This parameter exists because some | |
1437 | systems have a limited number of internal tables for | |
1438 | use by mmap, and using more than a few of them may degrade | |
1439 | performance. | |
1440 | ||
1441 | The default is set to a value that serves only as a safeguard. | |
1442 | Setting to 0 disables use of mmap for servicing large requests. If | |
1443 | HAVE_MMAP is not set, the default value is 0, and attempts to set it | |
1444 | to non-zero values in mallopt will fail. | |
1445 | */ | |
1446 | ||
1447 | #define M_MMAP_MAX -4 | |
1448 | ||
1449 | #ifndef DEFAULT_MMAP_MAX | |
1450 | #if HAVE_MMAP | |
1451 | #define DEFAULT_MMAP_MAX (65536) | |
1452 | #else | |
1453 | #define DEFAULT_MMAP_MAX (0) | |
1454 | #endif | |
1455 | #endif | |
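/*
  Illustrative usage sketch: the two mmap parameters above can be
  tuned together, for example to push only rather large requests
  through mmap, or to avoid mmap entirely.

    mallopt(M_MMAP_THRESHOLD, 256 * 1024); /* mmap only requests >= 256K */
    mallopt(M_MMAP_MAX, 0);                /* never service requests via mmap */
*/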
1456 | ||
1457 | #ifdef __cplusplus | |
1458 | } /* end of extern "C" */ | |
1459 | #endif | |
1460 | ||
1461 | /* | |
1462 | ======================================================================== | |
1463 | To make a fully customizable malloc.h header file, cut everything | |
1464 | above this line, put into file malloc.h, edit to suit, and #include it | |
1465 | on the next line, as well as in programs that use this malloc. | |
1466 | ======================================================================== | |
1467 | */ | |
1468 | ||
1469 | /* #include "malloc.h" */ | |
1470 | ||
1471 | /* --------------------- public wrappers ---------------------- */ | |
1472 | ||
1473 | #ifdef USE_PUBLIC_MALLOC_WRAPPERS | |
1474 | ||
1475 | /* Declare all routines as internal */ | |
1476 | #if __STD_C | |
1477 | static Void_t* mALLOc(size_t); | |
1478 | static void fREe(Void_t*); | |
1479 | static Void_t* rEALLOc(Void_t*, size_t); | |
1480 | static Void_t* mEMALIGn(size_t, size_t); | |
1481 | static Void_t* vALLOc(size_t); | |
1482 | static Void_t* pVALLOc(size_t); | |
1483 | static Void_t* cALLOc(size_t, size_t); | |
1484 | static Void_t** iCALLOc(size_t, size_t, Void_t**); | |
1485 | static Void_t** iCOMALLOc(size_t, size_t*, Void_t**); | |
1486 | static void cFREe(Void_t*); | |
1487 | static int mTRIm(size_t); | |
1488 | static size_t mUSABLe(Void_t*); | |
1489 | static void mSTATs(); | |
1490 | static int mALLOPt(int, int); | |
1491 | static struct mallinfo mALLINFo(void); | |
1492 | #else | |
1493 | static Void_t* mALLOc(); | |
1494 | static void fREe(); | |
1495 | static Void_t* rEALLOc(); | |
1496 | static Void_t* mEMALIGn(); | |
1497 | static Void_t* vALLOc(); | |
1498 | static Void_t* pVALLOc(); | |
1499 | static Void_t* cALLOc(); | |
1500 | static Void_t** iCALLOc(); | |
1501 | static Void_t** iCOMALLOc(); | |
1502 | static void cFREe(); | |
1503 | static int mTRIm(); | |
1504 | static size_t mUSABLe(); | |
1505 | static void mSTATs(); | |
1506 | static int mALLOPt(); | |
1507 | static struct mallinfo mALLINFo(); | |
1508 | #endif | |
1509 | ||
1510 | /* | |
1511 | MALLOC_PREACTION and MALLOC_POSTACTION should be | |
1512 | defined to return 0 on success, and nonzero on failure. | |
1513 | The return value of MALLOC_POSTACTION is currently ignored | |
1514 | in wrapper functions since there is no reasonable default | |
1515 | action to take on failure. | |
1516 | */ | |
1517 | ||
1518 | ||
1519 | #ifdef USE_MALLOC_LOCK | |
1520 | ||
1521 | #ifdef WIN32 | |
1522 | ||
1523 | static int mALLOC_MUTEx; | |
1524 | #define MALLOC_PREACTION slwait(&mALLOC_MUTEx) | |
1525 | #define MALLOC_POSTACTION slrelease(&mALLOC_MUTEx) | |
1526 | ||
1527 | #else | |
1528 | ||
1529 | #include <pthread.h> | |
1530 | ||
1531 | static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER; | |
1532 | ||
1533 | #define MALLOC_PREACTION pthread_mutex_lock(&mALLOC_MUTEx) | |
1534 | #define MALLOC_POSTACTION pthread_mutex_unlock(&mALLOC_MUTEx) | |
1535 | ||
1536 | #endif /* WIN32 */ | |
1537 | ||
1538 | #else | |
1539 | ||
1540 | /* Substitute anything you like for these */ | |
1541 | ||
1542 | #define MALLOC_PREACTION (0) | |
1543 | #define MALLOC_POSTACTION (0) | |
1544 | ||
1545 | #endif | |
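/*
  A minimal sketch of such a substitution, assuming hypothetical
  my_lock()/my_unlock() routines and a my_malloc_lock object supplied
  by the embedding program.  The comma operator keeps the required
  convention of evaluating to 0 on success:

    #define MALLOC_PREACTION   (my_lock(&my_malloc_lock), 0)
    #define MALLOC_POSTACTION  (my_unlock(&my_malloc_lock), 0)
*/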
1546 | ||
1547 | Void_t* public_mALLOc(size_t bytes) { | |
1548 | Void_t* m; | |
1549 | if (MALLOC_PREACTION != 0) { | |
1550 | return 0; | |
1551 | } | |
1552 | m = mALLOc(bytes); | |
1553 | if (MALLOC_POSTACTION != 0) { | |
1554 | } | |
1555 | return m; | |
1556 | } | |
1557 | ||
1558 | void public_fREe(Void_t* m) { | |
1559 | if (MALLOC_PREACTION != 0) { | |
1560 | return; | |
1561 | } | |
1562 | fREe(m); | |
1563 | if (MALLOC_POSTACTION != 0) { | |
1564 | } | |
1565 | } | |
1566 | ||
1567 | Void_t* public_rEALLOc(Void_t* m, size_t bytes) { | |
1568 | if (MALLOC_PREACTION != 0) { | |
1569 | return 0; | |
1570 | } | |
1571 | m = rEALLOc(m, bytes); | |
1572 | if (MALLOC_POSTACTION != 0) { | |
1573 | } | |
1574 | return m; | |
1575 | } | |
1576 | ||
1577 | Void_t* public_mEMALIGn(size_t alignment, size_t bytes) { | |
1578 | Void_t* m; | |
1579 | if (MALLOC_PREACTION != 0) { | |
1580 | return 0; | |
1581 | } | |
1582 | m = mEMALIGn(alignment, bytes); | |
1583 | if (MALLOC_POSTACTION != 0) { | |
1584 | } | |
1585 | return m; | |
1586 | } | |
1587 | ||
1588 | Void_t* public_vALLOc(size_t bytes) { | |
1589 | Void_t* m; | |
1590 | if (MALLOC_PREACTION != 0) { | |
1591 | return 0; | |
1592 | } | |
1593 | m = vALLOc(bytes); | |
1594 | if (MALLOC_POSTACTION != 0) { | |
1595 | } | |
1596 | return m; | |
1597 | } | |
1598 | ||
1599 | Void_t* public_pVALLOc(size_t bytes) { | |
1600 | Void_t* m; | |
1601 | if (MALLOC_PREACTION != 0) { | |
1602 | return 0; | |
1603 | } | |
1604 | m = pVALLOc(bytes); | |
1605 | if (MALLOC_POSTACTION != 0) { | |
1606 | } | |
1607 | return m; | |
1608 | } | |
1609 | ||
1610 | Void_t* public_cALLOc(size_t n, size_t elem_size) { | |
1611 | Void_t* m; | |
1612 | if (MALLOC_PREACTION != 0) { | |
1613 | return 0; | |
1614 | } | |
1615 | m = cALLOc(n, elem_size); | |
1616 | if (MALLOC_POSTACTION != 0) { | |
1617 | } | |
1618 | return m; | |
1619 | } | |
1620 | ||
1621 | ||
1622 | Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) { | |
1623 | Void_t** m; | |
1624 | if (MALLOC_PREACTION != 0) { | |
1625 | return 0; | |
1626 | } | |
1627 | m = iCALLOc(n, elem_size, chunks); | |
1628 | if (MALLOC_POSTACTION != 0) { | |
1629 | } | |
1630 | return m; | |
1631 | } | |
1632 | ||
1633 | Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) { | |
1634 | Void_t** m; | |
1635 | if (MALLOC_PREACTION != 0) { | |
1636 | return 0; | |
1637 | } | |
1638 | m = iCOMALLOc(n, sizes, chunks); | |
1639 | if (MALLOC_POSTACTION != 0) { | |
1640 | } | |
1641 | return m; | |
1642 | } | |
1643 | ||
1644 | void public_cFREe(Void_t* m) { | |
1645 | if (MALLOC_PREACTION != 0) { | |
1646 | return; | |
1647 | } | |
1648 | cFREe(m); | |
1649 | if (MALLOC_POSTACTION != 0) { | |
1650 | } | |
1651 | } | |
1652 | ||
1653 | int public_mTRIm(size_t s) { | |
1654 | int result; | |
1655 | if (MALLOC_PREACTION != 0) { | |
1656 | return 0; | |
1657 | } | |
1658 | result = mTRIm(s); | |
1659 | if (MALLOC_POSTACTION != 0) { | |
1660 | } | |
1661 | return result; | |
1662 | } | |
1663 | ||
1664 | size_t public_mUSABLe(Void_t* m) { | |
1665 | size_t result; | |
1666 | if (MALLOC_PREACTION != 0) { | |
1667 | return 0; | |
1668 | } | |
1669 | result = mUSABLe(m); | |
1670 | if (MALLOC_POSTACTION != 0) { | |
1671 | } | |
1672 | return result; | |
1673 | } | |
1674 | ||
1675 | void public_mSTATs() { | |
1676 | if (MALLOC_PREACTION != 0) { | |
1677 | return; | |
1678 | } | |
1679 | mSTATs(); | |
1680 | if (MALLOC_POSTACTION != 0) { | |
1681 | } | |
1682 | } | |
1683 | ||
1684 | struct mallinfo public_mALLINFo() { | |
1685 | struct mallinfo m; | |
1686 | if (MALLOC_PREACTION != 0) { | |
1687 | struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; | |
1688 | return nm; | |
1689 | } | |
1690 | m = mALLINFo(); | |
1691 | if (MALLOC_POSTACTION != 0) { | |
1692 | } | |
1693 | return m; | |
1694 | } | |
1695 | ||
1696 | int public_mALLOPt(int p, int v) { | |
1697 | int result; | |
1698 | if (MALLOC_PREACTION != 0) { | |
1699 | return 0; | |
1700 | } | |
1701 | result = mALLOPt(p, v); | |
1702 | if (MALLOC_POSTACTION != 0) { | |
1703 | } | |
1704 | return result; | |
1705 | } | |
1706 | ||
1707 | #endif | |
1708 | ||
1709 | ||
1710 | ||
1711 | /* ------------- Optional versions of memcopy ---------------- */ | |
1712 | ||
1713 | ||
1714 | #if USE_MEMCPY | |
1715 | ||
1716 | /* | |
1717 | Note: memcpy is ONLY invoked with non-overlapping regions, | |
1718 | so the (usually slower) memmove is not needed. | |
1719 | */ | |
1720 | ||
1721 | #define MALLOC_COPY(dest, src, nbytes) memcpy(dest, src, nbytes) | |
1722 | #define MALLOC_ZERO(dest, nbytes) memset(dest, 0, nbytes) | |
1723 | ||
1724 | #else /* !USE_MEMCPY */ | |
1725 | ||
1726 | /* Use Duff's device for good zeroing/copying performance. */ | |
1727 | ||
1728 | #define MALLOC_ZERO(charp, nbytes) \ | |
1729 | do { \ | |
1730 | INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \ | |
1731 | unsigned long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T); \ | |
1732 | long mcn; \ | |
1733 | if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \ | |
1734 | switch (mctmp) { \ | |
1735 | case 0: for(;;) { *mzp++ = 0; \ | |
1736 | case 7: *mzp++ = 0; \ | |
1737 | case 6: *mzp++ = 0; \ | |
1738 | case 5: *mzp++ = 0; \ | |
1739 | case 4: *mzp++ = 0; \ | |
1740 | case 3: *mzp++ = 0; \ | |
1741 | case 2: *mzp++ = 0; \ | |
1742 | case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \ | |
1743 | } \ | |
1744 | } while(0) | |
1745 | ||
1746 | #define MALLOC_COPY(dest,src,nbytes) \ | |
1747 | do { \ | |
1748 | INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \ | |
1749 | INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \ | |
1750 | unsigned long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T); \ | |
1751 | long mcn; \ | |
1752 | if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \ | |
1753 | switch (mctmp) { \ | |
1754 | case 0: for(;;) { *mcdst++ = *mcsrc++; \ | |
1755 | case 7: *mcdst++ = *mcsrc++; \ | |
1756 | case 6: *mcdst++ = *mcsrc++; \ | |
1757 | case 5: *mcdst++ = *mcsrc++; \ | |
1758 | case 4: *mcdst++ = *mcsrc++; \ | |
1759 | case 3: *mcdst++ = *mcsrc++; \ | |
1760 | case 2: *mcdst++ = *mcsrc++; \ | |
1761 | case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \ | |
1762 | } \ | |
1763 | } while(0) | |
1764 | ||
1765 | #endif | |
1766 | ||
1767 | /* ------------------ MMAP support ------------------ */ | |
1768 | ||
1769 | ||
1770 | #if HAVE_MMAP | |
1771 | ||
1772 | #include <fcntl.h> | |
1773 | #ifndef LACKS_SYS_MMAN_H | |
1774 | #include <sys/mman.h> | |
1775 | #endif | |
1776 | ||
1777 | #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON) | |
1778 | #define MAP_ANONYMOUS MAP_ANON | |
1779 | #endif | |
1780 | ||
1781 | /* | |
1782 | Nearly all versions of mmap support MAP_ANONYMOUS, | |
1783 | so the following is unlikely to be needed, but is | |
1784 | supplied just in case. | |
1785 | */ | |
1786 | ||
1787 | #ifndef MAP_ANONYMOUS | |
1788 | ||
1789 | static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */ | |
1790 | ||
1791 | #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \ | |
1792 | (dev_zero_fd = open("/dev/zero", O_RDWR), \ | |
1793 | mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \ | |
1794 | mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) | |
1795 | ||
1796 | #else | |
1797 | ||
1798 | #define MMAP(addr, size, prot, flags) \ | |
1799 | (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0)) | |
1800 | ||
1801 | #endif | |
1802 | ||
1803 | ||
1804 | #endif /* HAVE_MMAP */ | |
1805 | ||
1806 | ||
1807 | /* | |
1808 | ----------------------- Chunk representations ----------------------- | |
1809 | */ | |
1810 | ||
1811 | ||
1812 | /* | |
1813 | This struct declaration is misleading (but accurate and necessary). | |
1814 | It declares a "view" into memory allowing access to necessary | |
1815 | fields at known offsets from a given base. See explanation below. | |
1816 | */ | |
1817 | ||
1818 | struct malloc_chunk { | |
1819 | ||
1820 | INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */ | |
1821 | INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */ | |
1822 | ||
1823 | struct malloc_chunk* fd; /* double links -- used only if free. */ | |
1824 | struct malloc_chunk* bk; | |
1825 | }; | |
1826 | ||
1827 | ||
1828 | typedef struct malloc_chunk* mchunkptr; | |
1829 | ||
1830 | /* | |
1831 | malloc_chunk details: | |
1832 | ||
1833 | (The following includes lightly edited explanations by Colin Plumb.) | |
1834 | ||
1835 | Chunks of memory are maintained using a `boundary tag' method as | |
1836 | described in e.g., Knuth or Standish. (See the paper by Paul | |
1837 | Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a | |
1838 | survey of such techniques.) Sizes of free chunks are stored both | |
1839 | in the front of each chunk and at the end. This makes | |
1840 | consolidating fragmented chunks into bigger chunks very fast. The | |
1841 | size fields also hold bits representing whether chunks are free or | |
1842 | in use. | |
1843 | ||
1844 | An allocated chunk looks like this: | |
1845 | ||
1846 | ||
1847 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1848 | | Size of previous chunk, if allocated | | | |
1849 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1850 | | Size of chunk, in bytes |P| | |
1851 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1852 | | User data starts here... . | |
1853 | . . | |
1854 | . (malloc_usable_space() bytes) . | |
1855 | . | | |
1856 | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1857 | | Size of chunk | | |
1858 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1859 | ||
1860 | ||
1861 | Where "chunk" is the front of the chunk for the purpose of most of | |
1862 | the malloc code, but "mem" is the pointer that is returned to the | |
1863 | user. "Nextchunk" is the beginning of the next contiguous chunk. | |
1864 | ||
1865 | Chunks always begin on even word boundaries, so the mem portion | |
1866 | (which is returned to the user) is also on an even word boundary, and | |
1867 | thus at least double-word aligned. | |
1868 | ||
1869 | Free chunks are stored in circular doubly-linked lists, and look like this: | |
1870 | ||
1871 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1872 | | Size of previous chunk | | |
1873 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1874 | `head:' | Size of chunk, in bytes |P| | |
1875 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1876 | | Forward pointer to next chunk in list | | |
1877 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1878 | | Back pointer to previous chunk in list | | |
1879 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1880 | | Unused space (may be 0 bytes long) . | |
1881 | . . | |
1882 | . | | |
1883 | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1884 | `foot:' | Size of chunk, in bytes | | |
1885 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
1886 | ||
1887 | The P (PREV_INUSE) bit, stored in the unused low-order bit of the | |
1888 | chunk size (which is always a multiple of two words), is an in-use | |
1889 | bit for the *previous* chunk. If that bit is *clear*, then the | |
1890 | word before the current chunk size contains the previous chunk | |
1891 | size, and can be used to find the front of the previous chunk. | |
1892 | The very first chunk allocated always has this bit set, | |
1893 | preventing access to non-existent (or non-owned) memory. If | |
1894 | prev_inuse is set for any given chunk, then you CANNOT determine | |
1895 | the size of the previous chunk, and might even get a memory | |
1896 | addressing fault when trying to do so. | |
1897 | ||
1898 | Note that the `foot' of the current chunk is actually represented | |
1899 | as the prev_size of the NEXT chunk. This makes it easier to | |
1900 | deal with alignments etc but can be very confusing when trying | |
1901 | to extend or adapt this code. | |
1902 | ||
1903 | The two exceptions to all this are | |
1904 | ||
1905 | 1. The special chunk `top' doesn't bother using the | |
1906 | trailing size field since there is no next contiguous chunk | |
1907 | that would have to index off it. After initialization, `top' | |
1908 | is forced to always exist. If it would become less than | |
1909 | MINSIZE bytes long, it is replenished. | |
1910 | ||
1911 | 2. Chunks allocated via mmap, which have the second-lowest-order | |
1912 | bit (IS_MMAPPED) set in their size fields. Because they are | |
1913 | allocated one-by-one, each must contain its own trailing size field. | |
1914 | ||
1915 | */ | |
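/*
  A small worked example of the boundary tags above, assuming 4-byte
  size fields: for a free chunk at address A with chunksize 48, the
  word at A+4 holds 48 (plus flag bits), the word at A+48 -- the
  prev_size field of the next contiguous chunk -- also holds 48, and
  the next chunk's size field at A+52 has PREV_INUSE clear.  When the
  chunk at A+48 is later freed, prev_chunk (defined below) recovers A
  in O(1) so the two can be consolidated.
*/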
1916 | ||
1917 | /* | |
1918 | ---------- Size and alignment checks and conversions ---------- | |
1919 | */ | |
1920 | ||
1921 | /* conversion from malloc headers to user pointers, and back */ | |
1922 | ||
1923 | #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ)) | |
1924 | #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ)) | |
1925 | ||
1926 | /* The smallest possible chunk */ | |
1927 | #define MIN_CHUNK_SIZE (sizeof(struct malloc_chunk)) | |
1928 | ||
1929 | /* The smallest size we can malloc is an aligned minimal chunk */ | |
1930 | ||
1931 | #define MINSIZE \ | |
1932 | (unsigned long)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)) | |
1933 | ||
1934 | /* Check if m has acceptable alignment */ | |
1935 | ||
1936 | #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0) | |
1937 | ||
1938 | ||
1939 | /* | |
1940 | Check if a request is so large that it would wrap around zero when | |
1941 | padded and aligned. To simplify some other code, the bound is made | |
1942 | low enough so that adding MINSIZE will also not wrap around zero. | |
1943 | */ | |
1944 | ||
1945 | #define REQUEST_OUT_OF_RANGE(req) \ | |
1946 | ((unsigned long)(req) >= \ | |
1947 | (unsigned long)(INTERNAL_SIZE_T)(-2 * MINSIZE)) | |
1948 | ||
1949 | /* pad request bytes into a usable size -- internal version */ | |
1950 | ||
1951 | #define request2size(req) \ | |
1952 | (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \ | |
1953 | MINSIZE : \ | |
1954 | ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK) | |
1955 | ||
1956 | /* Same, except also perform argument check */ | |
1957 | ||
1958 | #define checked_request2size(req, sz) \ | |
1959 | if (REQUEST_OUT_OF_RANGE(req)) { \ | |
1960 | MALLOC_FAILURE_ACTION; \ | |
1961 | return 0; \ | |
1962 | } \ | |
1963 | (sz) = request2size(req); | |
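/*
  Worked examples of request2size, assuming 4-byte SIZE_SZ and 8-byte
  alignment (MALLOC_ALIGN_MASK == 7, MINSIZE == 16):

    request2size(0)  -> 0 + 4 + 7 == 11 < MINSIZE  -> 16
    request2size(25) -> (25 + 4 + 7) & ~7          -> 32
    request2size(32) -> (32 + 4 + 7) & ~7          -> 40

  Only one SIZE_SZ of overhead is added before rounding, because an
  in-use chunk may borrow the prev_size field of its successor.
*/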
1964 | ||
1965 | /* | |
1966 | --------------- Physical chunk operations --------------- | |
1967 | */ | |
1968 | ||
1969 | ||
1970 | /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */ | |
1971 | #define PREV_INUSE 0x1 | |
1972 | ||
1973 | /* extract inuse bit of previous chunk */ | |
1974 | #define prev_inuse(p) ((p)->size & PREV_INUSE) | |
1975 | ||
1976 | ||
1977 | /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */ | |
1978 | #define IS_MMAPPED 0x2 | |
1979 | ||
1980 | /* check for mmap()'ed chunk */ | |
1981 | #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED) | |
1982 | ||
1983 | /* | |
1984 | Bits to mask off when extracting size | |
1985 | ||
1986 | Note: IS_MMAPPED is intentionally not masked off from size field in | |
1987 | macros for which mmapped chunks should never be seen. This should | |
1988 | cause helpful core dumps to occur if it is tried by accident by | |
1989 | people extending or adapting this malloc. | |
1990 | */ | |
1991 | #define SIZE_BITS (PREV_INUSE|IS_MMAPPED) | |
1992 | ||
1993 | /* Get size, ignoring use bits */ | |
1994 | #define chunksize(p) ((p)->size & ~(SIZE_BITS)) | |
1995 | ||
1996 | ||
1997 | /* Ptr to next physical malloc_chunk. */ | |
1998 | #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) )) | |
1999 | ||
2000 | /* Ptr to previous physical malloc_chunk */ | |
2001 | #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) )) | |
2002 | ||
2003 | /* Treat space at ptr + offset as a chunk */ | |
2004 | #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s))) | |
2005 | ||
2006 | /* extract p's inuse bit */ | |
2007 | #define inuse(p)\ | |
2008 | ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE) | |
2009 | ||
2010 | /* set/clear chunk as being inuse without otherwise disturbing */ | |
2011 | #define set_inuse(p)\ | |
2012 | ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE | |
2013 | ||
2014 | #define clear_inuse(p)\ | |
2015 | ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE) | |
2016 | ||
2017 | ||
2018 | /* check/set/clear inuse bits in known places */ | |
2019 | #define inuse_bit_at_offset(p, s)\ | |
2020 | (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE) | |
2021 | ||
2022 | #define set_inuse_bit_at_offset(p, s)\ | |
2023 | (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE) | |
2024 | ||
2025 | #define clear_inuse_bit_at_offset(p, s)\ | |
2026 | (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE)) | |
2027 | ||
2028 | ||
2029 | /* Set size at head, without disturbing its use bit */ | |
2030 | #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s))) | |
2031 | ||
2032 | /* Set size/use field */ | |
2033 | #define set_head(p, s) ((p)->size = (s)) | |
2034 | ||
2035 | /* Set size at footer (only when chunk is not in use) */ | |
2036 | #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s)) | |
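/*
  Sketch of how these macros compose: splitting a chunk p of size sz
  into an in-use piece of size nb and a free remainder, much as the
  allocation code below does:

    mchunkptr remainder = chunk_at_offset(p, nb);
    set_head(p, nb | PREV_INUSE);
    set_head(remainder, (sz - nb) | PREV_INUSE);
    set_foot(remainder, sz - nb);
*/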
2037 | ||
2038 | ||
2039 | /* | |
2040 | -------------------- Internal data structures -------------------- | |
2041 | ||
2042 | All internal state is held in an instance of malloc_state defined | |
2043 | below. There are no other static variables, except in two optional | |
2044 | cases: | |
2045 | * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above. | |
2046 | * If HAVE_MMAP is true, but mmap doesn't support | |
2047 | MAP_ANONYMOUS, a dummy file descriptor for mmap. | |
2048 | ||
2049 | Beware of lots of tricks that minimize the total bookkeeping space | |
2050 | requirements. The result is a little over 1K bytes (for 4-byte | |
2051 | pointers and size_t). | |
2052 | */ | |
2053 | ||
2054 | /* | |
2055 | Bins | |
2056 | ||
2057 | An array of bin headers for free chunks. Each bin is doubly | |
2058 | linked. The bins are approximately proportionally (log) spaced. | |
2059 | There are a lot of these bins (128). This may look excessive, but | |
2060 | works very well in practice. Most bins hold sizes that are | |
2061 | unusual as malloc request sizes, but are more usual for fragments | |
2062 | and consolidated sets of chunks, which is what these bins hold, so | |
2063 | they can be found quickly. All procedures maintain the invariant | |
2064 | that no consolidated chunk physically borders another one, so each | |
2065 | chunk in a list is known to be preceded and followed by either | |
2066 | inuse chunks or the ends of memory. | |
2067 | ||
2068 | Chunks in bins are kept in size order, with ties going to the | |
2069 | approximately least recently used chunk. Ordering isn't needed | |
2070 | for the small bins, which all contain the same-sized chunks, but | |
2071 | facilitates best-fit allocation for larger chunks. These lists | |
2072 | are just sequential. Keeping them in order almost never requires | |
2073 | enough traversal to warrant using fancier ordered data | |
2074 | structures. | |
2075 | ||
2076 | Chunks of the same size are linked with the most | |
2077 | recently freed at the front, and allocations are taken from the | |
2078 | back. This results in LRU (FIFO) allocation order, which tends | |
2079 | to give each chunk an equal opportunity to be consolidated with | |
2080 | adjacent freed chunks, resulting in larger free chunks and less | |
2081 | fragmentation. | |
2082 | ||
2083 | To simplify use in double-linked lists, each bin header acts | |
2084 | as a malloc_chunk. This avoids special-casing for headers. | |
2085 | But to conserve space and improve locality, we allocate | |
2086 | only the fd/bk pointers of bins, and then use repositioning tricks | |
2087 | to treat these as the fields of a malloc_chunk*. | |
2088 | */ | |
2089 | ||
2090 | typedef struct malloc_chunk* mbinptr; | |
2091 | ||
2092 | /* addressing -- note that bin_at(0) does not exist */ | |
2093 | #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1))) | |
2094 | ||
2095 | /* analog of ++bin */ | |
2096 | #define next_bin(b) ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1))) | |
2097 | ||
2098 | /* Reminders about list directionality within bins */ | |
2099 | #define first(b) ((b)->fd) | |
2100 | #define last(b) ((b)->bk) | |
2101 | ||
2102 | /* Take a chunk off a bin list */ | |
2103 | #define unlink(P, BK, FD) { \ | |
2104 | FD = P->fd; \ | |
2105 | BK = P->bk; \ | |
2106 | FD->bk = BK; \ | |
2107 | BK->fd = FD; \ | |
2108 | } | |
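/*
  Typical use, as in the allocation code below: remove a chosen chunk
  from its bin using two scratch pointers:

    mchunkptr bck, fwd;
    unlink(victim, bck, fwd);  /* neighbors now link past victim */
*/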
2109 | ||
2110 | /* | |
2111 | Indexing | |
2112 | ||
2113 | Bins for sizes < 512 bytes contain chunks of all the same size, spaced | |
2114 | 8 bytes apart. Larger bins are approximately logarithmically spaced: | |
2115 | ||
2116 | 64 bins of size 8 | |
2117 | 32 bins of size 64 | |
2118 | 16 bins of size 512 | |
2119 | 8 bins of size 4096 | |
2120 | 4 bins of size 32768 | |
2121 | 2 bins of size 262144 | |
2122 | 1 bin of size what's left | |
2123 | ||
2124 | There is actually a little bit of slop in the numbers in bin_index | |
2125 | for the sake of speed. This makes no difference elsewhere. | |
2126 | ||
2127 | The bins top out around 1MB because we expect to service large | |
2128 | requests via mmap. | |
2129 | */ | |
2130 | ||
2131 | #define NBINS 128 | |
2132 | #define NSMALLBINS 64 | |
2133 | #define SMALLBIN_WIDTH 8 | |
2134 | #define MIN_LARGE_SIZE 512 | |
2135 | ||
2136 | #define in_smallbin_range(sz) \ | |
2137 | ((unsigned long)(sz) < (unsigned long)MIN_LARGE_SIZE) | |
2138 | ||
2139 | #define smallbin_index(sz) (((unsigned)(sz)) >> 3) | |
2140 | ||
2141 | #define largebin_index(sz) \ | |
2142 | (((((unsigned long)(sz)) >> 6) <= 32)? 56 + (((unsigned long)(sz)) >> 6): \ | |
2143 | ((((unsigned long)(sz)) >> 9) <= 20)? 91 + (((unsigned long)(sz)) >> 9): \ | |
2144 | ((((unsigned long)(sz)) >> 12) <= 10)? 110 + (((unsigned long)(sz)) >> 12): \ | |
2145 | ((((unsigned long)(sz)) >> 15) <= 4)? 119 + (((unsigned long)(sz)) >> 15): \ | |
2146 | ((((unsigned long)(sz)) >> 18) <= 2)? 124 + (((unsigned long)(sz)) >> 18): \ | |
2147 | 126) | |
2148 | ||
2149 | #define bin_index(sz) \ | |
2150 | ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz)) | |
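/*
  Worked examples of the indexing above:

    bin_index(40)   -> smallbin_index: 40 >> 3          -> 5
    bin_index(504)  -> smallbin_index: 504 >> 3         -> 63
    bin_index(512)  -> largebin_index: 56 + (512 >> 6)  -> 64
    bin_index(4096) -> largebin_index: 91 + (4096 >> 9) -> 99
*/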
2151 | ||
2152 | ||
2153 | /* | |
2154 | Unsorted chunks | |
2155 | ||
2156 | All remainders from chunk splits, as well as all returned chunks, | |
2157 | are first placed in the "unsorted" bin. They are then placed | |
2158 | in regular bins after malloc gives them ONE chance to be used before | |
2159 | binning. So, basically, the unsorted_chunks list acts as a queue, | |
2160 | with chunks being placed on it in free (and malloc_consolidate), | |
2161 | and taken off (to be either used or placed in bins) in malloc. | |
2162 | */ | |
2163 | ||
2164 | /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */ | |
2165 | #define unsorted_chunks(M) (bin_at(M, 1)) | |
2166 | ||
2167 | /* | |
2168 | Top | |
2169 | ||
2170 | The top-most available chunk (i.e., the one bordering the end of | |
2171 | available memory) is treated specially. It is never included in | |
2172 | any bin, is used only if no other chunk is available, and is | |
2173 | released back to the system if it is very large (see | |
2174 | M_TRIM_THRESHOLD). Because top initially | |
2175 | points to its own bin with initial zero size, thus forcing | |
2176 | extension on the first malloc request, we avoid having any special | |
2177 | code in malloc to check whether it even exists yet. But we still | |
2178 | need to do so when getting memory from system, so we make | |
2179 | initial_top treat the bin as a legal but unusable chunk during the | |
2180 | interval between initialization and the first call to | |
2181 | sYSMALLOc. (This is somewhat delicate, since it relies on | |
2182 | the 2 preceding words to be zero during this interval as well.) | |
2183 | */ | |
2184 | ||
2185 | /* Conveniently, the unsorted bin can be used as dummy top on first call */ | |
2186 | #define initial_top(M) (unsorted_chunks(M)) | |
2187 | ||
2188 | /* | |
2189 | Binmap | |
2190 | ||
2191 | To help compensate for the large number of bins, a one-level index | |
2192 | structure is used for bin-by-bin searching. `binmap' is a | |
2193 | bitvector recording whether bins are definitely empty so they can | |
2194 | be skipped over during traversals. The bits are NOT always | |
2195 | cleared as soon as bins are empty, but instead only | |
2196 | when they are noticed to be empty during traversal in malloc. | |
2197 | */ | |
2198 | ||
2199 | /* Conservatively use 32 bits per map word, even if on 64bit system */ | |
2200 | #define BINMAPSHIFT 5 | |
2201 | #define BITSPERMAP (1U << BINMAPSHIFT) | |
2202 | #define BINMAPSIZE (NBINS / BITSPERMAP) | |
2203 | ||
2204 | #define idx2block(i) ((i) >> BINMAPSHIFT) | |
2205 | #define idx2bit(i) ((1U << ((i) & ((1U << BINMAPSHIFT)-1)))) | |
2206 | ||
2207 | #define mark_bin(m,i) ((m)->binmap[idx2block(i)] |= idx2bit(i)) | |
2208 | #define unmark_bin(m,i) ((m)->binmap[idx2block(i)] &= ~(idx2bit(i))) | |
2209 | #define get_binmap(m,i) ((m)->binmap[idx2block(i)] & idx2bit(i)) | |
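/*
  Worked example: for bin index 70, idx2block(70) == 70 >> 5 == 2 and
  idx2bit(70) == 1 << (70 & 31) == 1 << 6, so mark_bin(m, 70) sets
  bit 6 of m->binmap[2], and get_binmap tests the same bit.
*/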
2210 | ||
2211 | /* | |
2212 | Fastbins | |
2213 | ||
2214 | An array of lists holding recently freed small chunks. Fastbins | |
2215 | are not doubly linked. It is faster to single-link them, and | |
2216 | since chunks are never removed from the middles of these lists, | |
2217 | double linking is not necessary. Also, unlike regular bins, they | |
2218 | are not even processed in FIFO order (they use faster LIFO) since | |
2219 | ordering doesn't much matter in the transient contexts in which | |
2220 | fastbins are normally used. | |
2221 | ||
2222 | Chunks in fastbins keep their inuse bit set, so they cannot | |
2223 | be consolidated with other free chunks. malloc_consolidate | |
2224 | releases all chunks in fastbins and consolidates them with | |
2225 | other free chunks. | |
2226 | */ | |
2227 | ||
2228 | typedef struct malloc_chunk* mfastbinptr; | |
2229 | ||
2230 | /* offset 2 to use otherwise unindexable first 2 bins */ | |
2231 | #define fastbin_index(sz) ((((unsigned int)(sz)) >> 3) - 2) | |
2232 | ||
2233 | /* The maximum fastbin request size we support */ | |
2234 | #define MAX_FAST_SIZE 80 | |
2235 | ||
2236 | #define NFASTBINS (fastbin_index(request2size(MAX_FAST_SIZE))+1) | |
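/*
  Worked example, assuming 4-byte SIZE_SZ and 8-byte alignment:
  fastbin-eligible chunk sizes are 16, 24, 32, ..., mapping through
  fastbin_index to 0, 1, 2, ...  Since request2size(MAX_FAST_SIZE)
  == 88 and fastbin_index(88) == 9, NFASTBINS comes out to 10.
*/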
2237 | ||
2238 | /* | |
2239 | FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free() | |
2240 | that triggers automatic consolidation of possibly-surrounding | |
2241 | fastbin chunks. This is a heuristic, so the exact value should not | |
2242 | matter too much. It is defined at half the default trim threshold as a | |
2243 | compromise heuristic to only attempt consolidation if it is likely | |
2244 | to lead to trimming. However, it is not dynamically tunable, since | |
2245 | consolidation reduces fragmentation surrounding large chunks even | |
2246 | if trimming is not used. | |
2247 | */ | |
2248 | ||
2249 | #define FASTBIN_CONSOLIDATION_THRESHOLD (65536UL) | |
2250 | ||
2251 | /* | |
2252 | Since the lowest 2 bits in max_fast don't matter in size comparisons, | |
2253 | they are used as flags. | |
2254 | */ | |
2255 | ||
2256 | /* | |
2257 | FASTCHUNKS_BIT held in max_fast indicates that there are probably | |
2258 | some fastbin chunks. It is set true on entering a chunk into any | |
2259 | fastbin, and cleared only in malloc_consolidate. | |
2260 | ||
2261 | The truth value is inverted so that have_fastchunks will be true | |
2262 | upon startup (since statics are zero-filled), simplifying | |
2263 | initialization checks. | |
2264 | */ | |
2265 | ||
2266 | #define FASTCHUNKS_BIT (1U) | |
2267 | ||
2268 | #define have_fastchunks(M) (((M)->max_fast & FASTCHUNKS_BIT) == 0) | |
2269 | #define clear_fastchunks(M) ((M)->max_fast |= FASTCHUNKS_BIT) | |
2270 | #define set_fastchunks(M) ((M)->max_fast &= ~FASTCHUNKS_BIT) | |
2271 | ||
2272 | /* | |
2273 | NONCONTIGUOUS_BIT indicates that MORECORE does not return contiguous | |
2274 | regions. Otherwise, contiguity is exploited in merging together, | |
2275 | when possible, results from consecutive MORECORE calls. | |
2276 | ||
2277 | The initial value comes from MORECORE_CONTIGUOUS, but is | |
2278 | changed dynamically if mmap is ever used as an sbrk substitute. | |
2279 | */ | |
2280 | ||
2281 | #define NONCONTIGUOUS_BIT (2U) | |
2282 | ||
2283 | #define contiguous(M) (((M)->max_fast & NONCONTIGUOUS_BIT) == 0) | |
2284 | #define noncontiguous(M) (((M)->max_fast & NONCONTIGUOUS_BIT) != 0) | |
2285 | #define set_noncontiguous(M) ((M)->max_fast |= NONCONTIGUOUS_BIT) | |
2286 | #define set_contiguous(M) ((M)->max_fast &= ~NONCONTIGUOUS_BIT) | |
2287 | ||
2288 | /* | |
2289 | Set value of max_fast. | |
2290 | Use impossibly small value if 0. | |
2291 | Precondition: there are no existing fastbin chunks. | |
2292 | Setting the value clears fastchunk bit but preserves noncontiguous bit. | |
2293 | */ | |
2294 | ||
2295 | #define set_max_fast(M, s) \ | |
2296 | (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \ | |
2297 | FASTCHUNKS_BIT | \ | |
2298 | ((M)->max_fast & NONCONTIGUOUS_BIT) | |
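/*
  Worked example, assuming 4-byte SIZE_SZ: set_max_fast(av, 64)
  stores request2size(64) == 72 or'ed with FASTCHUNKS_BIT, i.e. 73
  (plus any preserved NONCONTIGUOUS_BIT).  With FASTCHUNKS_BIT set,
  have_fastchunks(av) is false until the first fastbin insertion
  calls set_fastchunks, exactly as described above.
*/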
2299 | ||
2300 | ||
2301 | /* | |
2302 | ----------- Internal state representation and initialization ----------- | |
2303 | */ | |
2304 | ||
2305 | struct malloc_state { | |
2306 | ||
2307 | /* The maximum chunk size to be eligible for fastbin */ | |
2308 | INTERNAL_SIZE_T max_fast; /* low 2 bits used as flags */ | |
2309 | ||
2310 | /* Fastbins */ | |
2311 | mfastbinptr fastbins[NFASTBINS]; | |
2312 | ||
2313 | /* Base of the topmost chunk -- not otherwise kept in a bin */ | |
2314 | mchunkptr top; | |
2315 | ||
2316 | /* The remainder from the most recent split of a small request */ | |
2317 | mchunkptr last_remainder; | |
2318 | ||
2319 | /* Normal bins packed as described above */ | |
2320 | mchunkptr bins[NBINS * 2]; | |
2321 | ||
2322 | /* Bitmap of bins */ | |
2323 | unsigned int binmap[BINMAPSIZE]; | |
2324 | ||
2325 | /* Tunable parameters */ | |
2326 | unsigned long trim_threshold; | |
2327 | INTERNAL_SIZE_T top_pad; | |
2328 | INTERNAL_SIZE_T mmap_threshold; | |
2329 | ||
2330 | /* Memory map support */ | |
2331 | int n_mmaps; | |
2332 | int n_mmaps_max; | |
2333 | int max_n_mmaps; | |
2334 | ||
2335 | /* Cache malloc_getpagesize */ | |
2336 | unsigned int pagesize; | |
2337 | ||
2338 | /* Statistics */ | |
2339 | INTERNAL_SIZE_T mmapped_mem; | |
2340 | INTERNAL_SIZE_T sbrked_mem; | |
2341 | INTERNAL_SIZE_T max_sbrked_mem; | |
2342 | INTERNAL_SIZE_T max_mmapped_mem; | |
2343 | INTERNAL_SIZE_T max_total_mem; | |
2344 | }; | |
2345 | ||
2346 | typedef struct malloc_state *mstate; | |
2347 | ||
2348 | /* | |
2349 | There is exactly one instance of this struct in this malloc. | |
2350 | If you are adapting this malloc in a way that does NOT use a static | |
2351 | malloc_state, you MUST explicitly zero-fill it before using. This | |
2352 | malloc relies on the property that malloc_state is initialized to | |
2353 | all zeroes (as is true of C statics). | |
2354 | */ | |
2355 | ||
2356 | static struct malloc_state av_; /* never directly referenced */ | |
2357 | ||
2358 | /* | |
2359 | All uses of av_ are via get_malloc_state(). | |
2360 | At most one "call" to get_malloc_state is made per invocation of | |
2361 | the public versions of malloc and free, but other routines | |
2362 | that in turn invoke malloc and/or free may call it more than once. | |
2363 | Also, it is called in check* routines if DEBUG is set. | |
2364 | */ | |
2365 | ||
2366 | #define get_malloc_state() (&(av_)) | |
2367 | ||
2368 | /* | |
2369 | Initialize a malloc_state struct. | |
2370 | ||
2371 | This is called only from within malloc_consolidate, which needs | |
2372 | to be called in the same contexts anyway. It is never called directly | |
2373 | outside of malloc_consolidate because some optimizing compilers try | |
2374 | to inline it at all call points, which turns out not to be an | |
2375 | optimization at all. (Inlining it in malloc_consolidate is fine though.) | |
2376 | */ | |
2377 | ||
2378 | #if __STD_C | |
2379 | static void malloc_init_state(mstate av) | |
2380 | #else | |
2381 | static void malloc_init_state(av) mstate av; | |
2382 | #endif | |
2383 | { | |
2384 | int i; | |
2385 | mbinptr bin; | |
2386 | ||
2387 | /* Establish circular links for normal bins */ | |
2388 | for (i = 1; i < NBINS; ++i) { | |
2389 | bin = bin_at(av,i); | |
2390 | bin->fd = bin->bk = bin; | |
2391 | } | |
2392 | ||
2393 | av->top_pad = DEFAULT_TOP_PAD; | |
2394 | av->n_mmaps_max = DEFAULT_MMAP_MAX; | |
2395 | av->mmap_threshold = DEFAULT_MMAP_THRESHOLD; | |
2396 | av->trim_threshold = DEFAULT_TRIM_THRESHOLD; | |
2397 | ||
2398 | #if !MORECORE_CONTIGUOUS | |
2399 | set_noncontiguous(av); | |
2400 | #endif | |
2401 | ||
2402 | set_max_fast(av, DEFAULT_MXFAST); | |
2403 | ||
2404 | av->top = initial_top(av); | |
2405 | av->pagesize = malloc_getpagesize; | |
2406 | } | |
2407 | ||
2408 | /* | |
2409 | Other internal utilities operating on mstates | |
2410 | */ | |
2411 | ||
2412 | #if __STD_C | |
2413 | static Void_t* sYSMALLOc(INTERNAL_SIZE_T, mstate); | |
2414 | static int sYSTRIm(size_t, mstate); | |
2415 | static void malloc_consolidate(mstate); | |
2416 | static Void_t** iALLOc(size_t, size_t*, int, Void_t**); | |
2417 | #else | |
2418 | static Void_t* sYSMALLOc(); | |
2419 | static int sYSTRIm(); | |
2420 | static void malloc_consolidate(); | |
2421 | static Void_t** iALLOc(); | |
2422 | #endif | |
2423 | ||
2424 | /* | |
2425 | Debugging support | |
2426 | ||
2427 | These routines make a number of assertions about the states | |
2428 | of data structures that should be true at all times. If any | |
2429 | are not true, it's very likely that a user program has somehow | |
2430 | trashed memory. (It's also possible that there is a coding error | |
2431 | in malloc. In which case, please report it!) | |
2432 | */ | |
2433 | ||
2434 | #if ! DEBUG | |
2435 | ||
2436 | #define check_chunk(P) | |
2437 | #define check_free_chunk(P) | |
2438 | #define check_inuse_chunk(P) | |
2439 | #define check_remalloced_chunk(P,N) | |
2440 | #define check_malloced_chunk(P,N) | |
2441 | #define check_malloc_state() | |
2442 | ||
2443 | #else | |
2444 | #define check_chunk(P) do_check_chunk(P) | |
2445 | #define check_free_chunk(P) do_check_free_chunk(P) | |
2446 | #define check_inuse_chunk(P) do_check_inuse_chunk(P) | |
2447 | #define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N) | |
2448 | #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N) | |
2449 | #define check_malloc_state() do_check_malloc_state() | |
2450 | ||
2451 | /* | |
2452 | Properties of all chunks | |
2453 | */ | |
2454 | ||
2455 | #if __STD_C | |
2456 | static void do_check_chunk(mchunkptr p) | |
2457 | #else | |
2458 | static void do_check_chunk(p) mchunkptr p; | |
2459 | #endif | |
2460 | { | |
2461 | mstate av = get_malloc_state(); | |
2462 | unsigned long sz = chunksize(p); | |
2463 | /* min and max possible addresses assuming contiguous allocation */ | |
2464 | char* max_address = (char*)(av->top) + chunksize(av->top); | |
2465 | char* min_address = max_address - av->sbrked_mem; | |
2466 | ||
2467 | if (!chunk_is_mmapped(p)) { | |
2468 | ||
2469 | /* Has legal address ... */ | |
2470 | if (p != av->top) { | |
2471 | if (contiguous(av)) { | |
2472 | assert(((char*)p) >= min_address); | |
2473 | assert(((char*)p + sz) <= ((char*)(av->top))); | |
2474 | } | |
2475 | } | |
2476 | else { | |
2477 | /* top size is always at least MINSIZE */ | |
2478 | assert((unsigned long)(sz) >= MINSIZE); | |
2479 | /* top predecessor always marked inuse */ | |
2480 | assert(prev_inuse(p)); | |
2481 | } | |
2482 | ||
2483 | } | |
2484 | else { | |
2485 | #if HAVE_MMAP | |
2486 | /* address is outside main heap */ | |
2487 | if (contiguous(av) && av->top != initial_top(av)) { | |
2488 | assert(((char*)p) < min_address || ((char*)p) > max_address); | |
2489 | } | |
2490 | /* chunk is page-aligned */ | |
2491 | assert(((p->prev_size + sz) & (av->pagesize-1)) == 0); | |
2492 | /* mem is aligned */ | |
2493 | assert(aligned_OK(chunk2mem(p))); | |
2494 | #else | |
2495 | /* force an appropriate assert violation if debug set */ | |
2496 | assert(!chunk_is_mmapped(p)); | |
2497 | #endif | |
2498 | } | |
2499 | } | |
2500 | ||
2501 | /* | |
2502 | Properties of free chunks | |
2503 | */ | |
2504 | ||
2505 | #if __STD_C | |
2506 | static void do_check_free_chunk(mchunkptr p) | |
2507 | #else | |
2508 | static void do_check_free_chunk(p) mchunkptr p; | |
2509 | #endif | |
2510 | { | |
2511 | mstate av = get_malloc_state(); | |
2512 | ||
2513 | INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE; | |
2514 | mchunkptr next = chunk_at_offset(p, sz); | |
2515 | ||
2516 | do_check_chunk(p); | |
2517 | ||
2518 | /* Chunk must claim to be free ... */ | |
2519 | assert(!inuse(p)); | |
2520 | assert (!chunk_is_mmapped(p)); | |
2521 | ||
2522 | /* Unless a special marker, must have OK fields */ | |
2523 | if ((unsigned long)(sz) >= MINSIZE) | |
2524 | { | |
2525 | assert((sz & MALLOC_ALIGN_MASK) == 0); | |
2526 | assert(aligned_OK(chunk2mem(p))); | |
2527 | /* ... matching footer field */ | |
2528 | assert(next->prev_size == sz); | |
2529 | /* ... and is fully consolidated */ | |
2530 | assert(prev_inuse(p)); | |
2531 | assert (next == av->top || inuse(next)); | |
2532 | ||
2533 | /* ... and has minimally sane links */ | |
2534 | assert(p->fd->bk == p); | |
2535 | assert(p->bk->fd == p); | |
2536 | } | |
2537 | else /* markers are always of size SIZE_SZ */ | |
2538 | assert(sz == SIZE_SZ); | |
2539 | } | |
2540 | ||
2541 | /* | |
2542 | Properties of inuse chunks | |
2543 | */ | |
2544 | ||
2545 | #if __STD_C | |
2546 | static void do_check_inuse_chunk(mchunkptr p) | |
2547 | #else | |
2548 | static void do_check_inuse_chunk(p) mchunkptr p; | |
2549 | #endif | |
2550 | { | |
2551 | mstate av = get_malloc_state(); | |
2552 | mchunkptr next; | |
2553 | do_check_chunk(p); | |
2554 | ||
2555 | if (chunk_is_mmapped(p)) | |
2556 | return; /* mmapped chunks have no next/prev */ | |
2557 | ||
2558 | /* Check whether it claims to be in use ... */ | |
2559 | assert(inuse(p)); | |
2560 | ||
2561 | next = next_chunk(p); | |
2562 | ||
2563 | /* ... and is surrounded by OK chunks. | |
2564 | Since more things can be checked with free chunks than inuse ones, | |
2565 | if an inuse chunk borders them and debug is on, it's worth doing them. | |
2566 | */ | |
2567 | if (!prev_inuse(p)) { | |
2568 | /* Note that we cannot even look at prev unless it is not inuse */ | |
2569 | mchunkptr prv = prev_chunk(p); | |
2570 | assert(next_chunk(prv) == p); | |
2571 | do_check_free_chunk(prv); | |
2572 | } | |
2573 | ||
2574 | if (next == av->top) { | |
2575 | assert(prev_inuse(next)); | |
2576 | assert(chunksize(next) >= MINSIZE); | |
2577 | } | |
2578 | else if (!inuse(next)) | |
2579 | do_check_free_chunk(next); | |
2580 | } | |
2581 | ||
2582 | /* | |
2583 | Properties of chunks recycled from fastbins | |
2584 | */ | |
2585 | ||
2586 | #if __STD_C | |
2587 | static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) | |
2588 | #else | |
2589 | static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s; | |
2590 | #endif | |
2591 | { | |
2592 | INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE; | |
2593 | ||
2594 | do_check_inuse_chunk(p); | |
2595 | ||
2596 | /* Legal size ... */ | |
2597 | assert((sz & MALLOC_ALIGN_MASK) == 0); | |
2598 | assert((unsigned long)(sz) >= MINSIZE); | |
2599 | /* ... and alignment */ | |
2600 | assert(aligned_OK(chunk2mem(p))); | |
2601 | /* chunk is less than MINSIZE more than request */ | |
2602 | assert((long)(sz) - (long)(s) >= 0); | |
2603 | assert((long)(sz) - (long)(s + MINSIZE) < 0); | |
2604 | } | |
2605 | ||
2606 | /* | |
2607 | Properties of nonrecycled chunks at the point they are malloced | |
2608 | */ | |
2609 | ||
2610 | #if __STD_C | |
2611 | static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) | |
2612 | #else | |
2613 | static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s; | |
2614 | #endif | |
2615 | { | |
2616 | /* same as recycled case ... */ | |
2617 | do_check_remalloced_chunk(p, s); | |
2618 | ||
2619 | /* | |
2620 | ... plus, must obey implementation invariant that prev_inuse is | |
2621 | always true of any allocated chunk; i.e., that each allocated | |
2622 | chunk borders either a previously allocated and still in-use | |
2623 | chunk, or the base of its memory arena. This is ensured | |
2624 | by making all allocations from the `lowest' part of any found | |
2625 | chunk. This does not necessarily hold however for chunks | |
2626 | recycled via fastbins. | |
2627 | */ | |
2628 | ||
2629 | assert(prev_inuse(p)); | |
2630 | } | |
2631 | ||
2632 | ||
2633 | /* | |
2634 | Properties of malloc_state. | |
2635 | ||
2636 | This may be useful for debugging malloc, as well as detecting user | |
2637 | programmer errors that somehow write into malloc_state. | |
2638 | ||
2639 | If you are extending or experimenting with this malloc, you can | |
2640 | probably figure out how to hack this routine to print out or | |
2641 | display chunk addresses, sizes, bins, and other instrumentation. | |
2642 | */ | |
2643 | ||
2644 | static void do_check_malloc_state(void) | |
2645 | { | |
2646 | mstate av = get_malloc_state(); | |
2647 | int i; | |
2648 | mchunkptr p; | |
2649 | mchunkptr q; | |
2650 | mbinptr b; | |
2651 | unsigned int binbit; | |
2652 | int empty; | |
2653 | unsigned int idx; | |
2654 | INTERNAL_SIZE_T size; | |
2655 | unsigned long total = 0; | |
2656 | int max_fast_bin; | |
2657 | ||
2658 | /* internal size_t must be no wider than pointer type */ | |
2659 | assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*)); | |
2660 | ||
2661 | /* alignment is a power of 2 */ | |
2662 | assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0); | |
2663 | ||
2664 | /* cannot run remaining checks until fully initialized */ | |
2665 | if (av->top == 0 || av->top == initial_top(av)) | |
2666 | return; | |
2667 | ||
2668 | /* pagesize is a power of 2 */ | |
2669 | assert((av->pagesize & (av->pagesize-1)) == 0); | |
2670 | ||
2671 | /* properties of fastbins */ | |
2672 | ||
2673 | /* max_fast is in allowed range */ | |
2674 | assert((av->max_fast & ~1) <= request2size(MAX_FAST_SIZE)); | |
2675 | ||
2676 | max_fast_bin = fastbin_index(av->max_fast); | |
2677 | ||
2678 | for (i = 0; i < NFASTBINS; ++i) { | |
2679 | p = av->fastbins[i]; | |
2680 | ||
2681 | /* all bins past max_fast are empty */ | |
2682 | if (i > max_fast_bin) | |
2683 | assert(p == 0); | |
2684 | ||
2685 | while (p != 0) { | |
2686 | /* each chunk claims to be inuse */ | |
2687 | do_check_inuse_chunk(p); | |
2688 | total += chunksize(p); | |
2689 | /* chunk belongs in this bin */ | |
2690 | assert(fastbin_index(chunksize(p)) == i); | |
2691 | p = p->fd; | |
2692 | } | |
2693 | } | |
2694 | ||
2695 | if (total != 0) | |
2696 | assert(have_fastchunks(av)); | |
2697 | else if (!have_fastchunks(av)) | |
2698 | assert(total == 0); | |
2699 | ||
2700 | /* check normal bins */ | |
2701 | for (i = 1; i < NBINS; ++i) { | |
2702 | b = bin_at(av,i); | |
2703 | ||
2704 | /* binmap is accurate (except for bin 1 == unsorted_chunks) */ | |
2705 | if (i >= 2) { | |
2706 | binbit = get_binmap(av,i); | |
2707 | empty = last(b) == b; | |
2708 | if (!binbit) | |
2709 | assert(empty); | |
2710 | else if (!empty) | |
2711 | assert(binbit); | |
2712 | } | |
2713 | ||
2714 | for (p = last(b); p != b; p = p->bk) { | |
2715 | /* each chunk claims to be free */ | |
2716 | do_check_free_chunk(p); | |
2717 | size = chunksize(p); | |
2718 | total += size; | |
2719 | if (i >= 2) { | |
2720 | /* chunk belongs in bin */ | |
2721 | idx = bin_index(size); | |
2722 | assert(idx == i); | |
2723 | /* lists are sorted */ | |
2724 | assert(p->bk == b || | |
2725 | (unsigned long)chunksize(p->bk) >= (unsigned long)chunksize(p)); | |
2726 | } | |
2727 | /* chunk is followed by a legal chain of inuse chunks */ | |
2728 | for (q = next_chunk(p); | |
2729 | (q != av->top && inuse(q) && | |
2730 | (unsigned long)(chunksize(q)) >= MINSIZE); | |
2731 | q = next_chunk(q)) | |
2732 | do_check_inuse_chunk(q); | |
2733 | } | |
2734 | } | |
2735 | ||
2736 | /* top chunk is OK */ | |
2737 | check_chunk(av->top); | |
2738 | ||
2739 | /* sanity checks for statistics */ | |
2740 | ||
2741 | assert(total <= (unsigned long)(av->max_total_mem)); | |
2742 | assert(av->n_mmaps >= 0); | |
2743 | assert(av->n_mmaps <= av->n_mmaps_max); | |
2744 | assert(av->n_mmaps <= av->max_n_mmaps); | |
2745 | ||
2746 | assert((unsigned long)(av->sbrked_mem) <= | |
2747 | (unsigned long)(av->max_sbrked_mem)); | |
2748 | ||
2749 | assert((unsigned long)(av->mmapped_mem) <= | |
2750 | (unsigned long)(av->max_mmapped_mem)); | |
2751 | ||
2752 | assert((unsigned long)(av->max_total_mem) >= | |
2753 | (unsigned long)(av->mmapped_mem) + (unsigned long)(av->sbrked_mem)); | |
2754 | } | |
2755 | #endif | |
2756 | ||
2757 | ||
2758 | /* ----------- Routines dealing with system allocation -------------- */ | |
2759 | ||
2760 | /* | |
2761 | sysmalloc handles malloc cases requiring more memory from the system. | |
2762 | On entry, it is assumed that av->top does not have enough | |
2763 | space to service request for nb bytes, thus requiring that av->top | |
2764 | be extended or replaced. | |
2765 | */ | |
2766 | ||
2767 | #if __STD_C | |
2768 | static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av) | |
2769 | #else | |
2770 | static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av; | |
2771 | #endif | |
2772 | { | |
2773 | mchunkptr old_top; /* incoming value of av->top */ | |
2774 | INTERNAL_SIZE_T old_size; /* its size */ | |
2775 | char* old_end; /* its end address */ | |
2776 | ||
2777 | long size; /* arg to first MORECORE or mmap call */ | |
2778 | char* brk; /* return value from MORECORE */ | |
2779 | ||
2780 | long correction; /* arg to 2nd MORECORE call */ | |
2781 | char* snd_brk; /* 2nd return val */ | |
2782 | ||
2783 | INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */ | |
2784 | INTERNAL_SIZE_T end_misalign; /* partial page left at end of new space */ | |
2785 | char* aligned_brk; /* aligned offset into brk */ | |
2786 | ||
2787 | mchunkptr p; /* the allocated/returned chunk */ | |
2788 | mchunkptr remainder; /* remainder from allocation */ | |
2789 | unsigned long remainder_size; /* its size */ | |
2790 | ||
2791 | unsigned long sum; /* for updating stats */ | |
2792 | ||
2793 | size_t pagemask = av->pagesize - 1; | |
2794 | ||
2795 | ||
2796 | #if HAVE_MMAP | |
2797 | ||
2798 | /* | |
2799 | If have mmap, and the request size meets the mmap threshold, and | |
2800 | the system supports mmap, and there are few enough currently | |
2801 | allocated mmapped regions, try to directly map this request | |
2802 | rather than expanding top. | |
2803 | */ | |
2804 | ||
2805 | if ((unsigned long)(nb) >= (unsigned long)(av->mmap_threshold) && | |
2806 | (av->n_mmaps < av->n_mmaps_max)) { | |
2807 | ||
2808 | char* mm; /* return value from mmap call*/ | |
2809 | ||
2810 | /* | |
2811 | Round up size to nearest page. For mmapped chunks, the overhead | |
2812 | is one SIZE_SZ unit larger than for normal chunks, because there | |
2813 | is no following chunk whose prev_size field could be used. | |
2814 | */ | |
2815 | size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask; | |
2816 | ||
2817 | /* Don't try if size wraps around 0 */ | |
2818 | if ((unsigned long)(size) > (unsigned long)(nb)) { | |
2819 | ||
2820 | mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE)); | |
2821 | ||
2822 | if (mm != (char*)(MORECORE_FAILURE)) { | |
2823 | ||
2824 | /* | |
2825 | The offset to the start of the mmapped region is stored | |
2826 | in the prev_size field of the chunk. This allows us to adjust | |
2827 | returned start address to meet alignment requirements here | |
2828 | and in memalign(), and still be able to compute proper | |
2829 | address argument for later munmap in free() and realloc(). | |
2830 | */ | |
2831 | ||
2832 | front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK; | |
2833 | if (front_misalign > 0) { | |
2834 | correction = MALLOC_ALIGNMENT - front_misalign; | |
2835 | p = (mchunkptr)(mm + correction); | |
2836 | p->prev_size = correction; | |
2837 | set_head(p, (size - correction) |IS_MMAPPED); | |
2838 | } | |
2839 | else { | |
2840 | p = (mchunkptr)mm; | |
2841 | set_head(p, size|IS_MMAPPED); | |
2842 | } | |
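/*
  Sketch of the fix-up above under a hypothetical configuration with
  16-byte MALLOC_ALIGNMENT and 4-byte size fields (the default
  configuration rarely takes this branch): a page-aligned mm makes
  chunk2mem(mm) == mm + 8, so front_misalign == 8 and correction == 8.
  The chunk then starts at mm + 8 with p->prev_size == 8, and free()
  or realloc() can later recover the true mapping base for munmap as
  (char*)p - p->prev_size.
*/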
2843 | ||
2844 | /* update statistics */ | |
2845 | ||
2846 | if (++av->n_mmaps > av->max_n_mmaps) | |
2847 | av->max_n_mmaps = av->n_mmaps; | |
2848 | ||
2849 | sum = av->mmapped_mem += size; | |
2850 | if (sum > (unsigned long)(av->max_mmapped_mem)) | |
2851 | av->max_mmapped_mem = sum; | |
2852 | sum += av->sbrked_mem; | |
2853 | if (sum > (unsigned long)(av->max_total_mem)) | |
2854 | av->max_total_mem = sum; | |
2855 | ||
2856 | check_chunk(p); | |
2857 | ||
2858 | return chunk2mem(p); | |
2859 | } | |
2860 | } | |
2861 | } | |
2862 | #endif | |
2863 | ||
2864 | /* Record incoming configuration of top */ | |
2865 | ||
2866 | old_top = av->top; | |
2867 | old_size = chunksize(old_top); | |
2868 | old_end = (char*)(chunk_at_offset(old_top, old_size)); | |
2869 | ||
2870 | brk = snd_brk = (char*)(MORECORE_FAILURE); | |
2871 | ||
2872 | /* | |
2873 | If not the first time through, we require old_size to be | |
2874 | at least MINSIZE and to have prev_inuse set. | |
2875 | */ | |
2876 | ||
2877 | assert((old_top == initial_top(av) && old_size == 0) || | |
2878 | ((unsigned long) (old_size) >= MINSIZE && | |
2879 | prev_inuse(old_top))); | |
2880 | ||
2881 | /* Precondition: not enough current space to satisfy nb request */ | |
2882 | assert((unsigned long)(old_size) < (unsigned long)(nb + MINSIZE)); | |
2883 | ||
2884 | /* Precondition: all fastbins are consolidated */ | |
2885 | assert(!have_fastchunks(av)); | |
2886 | ||
2887 | ||
2888 | /* Request enough space for nb + pad + overhead */ | |
2889 | ||
2890 | size = nb + av->top_pad + MINSIZE; | |
2891 | ||
2892 | /* | |
2893 | If contiguous, we can subtract out existing space that we hope to | |
2894 | combine with new space. We add it back later only if | |
2895 | we don't actually get contiguous space. | |
2896 | */ | |
2897 | ||
2898 | if (contiguous(av)) | |
2899 | size -= old_size; | |
2900 | ||
2901 | /* | |
2902 | Round to a multiple of page size. | |
2903 | If MORECORE is not contiguous, this ensures that we only call it | |
2904 | with whole-page arguments. And if MORECORE is contiguous and | |
2905 | this is not first time through, this preserves page-alignment of | |
2906 | previous calls. Otherwise, we correct to page-align below. | |
2907 | */ | |
2908 | ||
2909 | size = (size + pagemask) & ~pagemask; | |
2910 | ||
2911 | /* | |
2912 | Don't try to call MORECORE if argument is so big as to appear | |
2913 | negative. Note that since mmap takes size_t arg, it may succeed | |
2914 | below even if we cannot call MORECORE. | |
2915 | */ | |
2916 | ||
2917 | if (size > 0) | |
2918 | brk = (char*)(MORECORE(size)); | |
2919 | ||
2920 | /* | |
2921 | If we have mmap, try using it as a backup when MORECORE fails or | |
2922 | cannot be used. This is worth doing on systems that have "holes" in | |
2923 | address space, so sbrk cannot extend to give contiguous space, but | |
2924 | space is available elsewhere. Note that we ignore mmap max count | |
2925 | and threshold limits, since the space will not be used as a | |
2926 | segregated mmap region. | |
2927 | */ | |
2928 | ||
2929 | #if HAVE_MMAP | |
2930 | if (brk == (char*)(MORECORE_FAILURE)) { | |
2931 | ||
2932 | /* Cannot merge with old top, so add its size back in */ | |
2933 | if (contiguous(av)) | |
2934 | size = (size + old_size + pagemask) & ~pagemask; | |
2935 | ||
2936 | /* If we are relying on mmap as backup, then use larger units */ | |
2937 | if ((unsigned long)(size) < (unsigned long)(MMAP_AS_MORECORE_SIZE)) | |
2938 | size = MMAP_AS_MORECORE_SIZE; | |
2939 | ||
2940 | /* Don't try if size wraps around 0 */ | |
2941 | if ((unsigned long)(size) > (unsigned long)(nb)) { | |
2942 | ||
2943 | brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE)); | |
2944 | ||
2945 | if (brk != (char*)(MORECORE_FAILURE)) { | |
2946 | ||
2947 | /* We do not need, and cannot use, another sbrk call to find end */ | |
2948 | snd_brk = brk + size; | |
2949 | ||
2950 | /* | |
2951 | Record that we no longer have a contiguous sbrk region. | |
2952 | After the first time mmap is used as backup, we do not | |
2953 | ever rely on contiguous space since this could incorrectly | |
2954 | bridge regions. | |
2955 | */ | |
2956 | set_noncontiguous(av); | |
2957 | } | |
2958 | } | |
2959 | } | |
2960 | #endif | |
2961 | ||
2962 | if (brk != (char*)(MORECORE_FAILURE)) { | |
2963 | av->sbrked_mem += size; | |
2964 | ||
2965 | /* | |
2966 | If MORECORE extends previous space, we can likewise extend top size. | |
2967 | */ | |
2968 | ||
2969 | if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) { | |
2970 | set_head(old_top, (size + old_size) | PREV_INUSE); | |
2971 | } | |
2972 | ||
2973 | /* | |
2974 | Otherwise, make adjustments: | |
2975 | ||
2976 | * If the first time through or noncontiguous, we need to call sbrk | |
2977 | just to find out where the end of memory lies. | |
2978 | ||
2979 | * We need to ensure that all returned chunks from malloc will meet | |
2980 | MALLOC_ALIGNMENT | |
2981 | ||
2982 | * If there was an intervening foreign sbrk, we need to adjust sbrk | |
2983 | request size to account for fact that we will not be able to | |
2984 | combine new space with existing space in old_top. | |
2985 | ||
2986 | * Almost all systems internally allocate whole pages at a time, in | |
2987 | which case we might as well use the whole last page of request. | |
2988 | So we allocate enough more memory to hit a page boundary now, | |
2989 | which in turn causes future contiguous calls to page-align. | |
2990 | */ | |
2991 | ||
2992 | else { | |
2993 | front_misalign = 0; | |
2994 | end_misalign = 0; | |
2995 | correction = 0; | |
2996 | aligned_brk = brk; | |
2997 | ||
2998 | /* handle contiguous cases */ | |
2999 | if (contiguous(av)) { | |
3000 | ||
3001 | /* Guarantee alignment of first new chunk made from this space */ | |
3002 | ||
3003 | front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK; | |
3004 | if (front_misalign > 0) { | |
3005 | ||
3006 | /* | |
3007 | Skip over some bytes to arrive at an aligned position. | |
3008 | We don't need to specially mark these wasted front bytes. | |
3009 | They will never be accessed anyway because | |
3010 | prev_inuse of av->top (and any chunk created from its start) | |
3011 | is always true after initialization. | |
3012 | */ | |
3013 | ||
3014 | correction = MALLOC_ALIGNMENT - front_misalign; | |
3015 | aligned_brk += correction; | |
3016 | } | |
3017 | ||
3018 | /* | |
3019 | If this isn't adjacent to existing space, then we will not | |
3020 | be able to merge with old_top space, so must add to 2nd request. | |
3021 | */ | |
3022 | ||
3023 | correction += old_size; | |
3024 | ||
3025 | /* Extend the end address to hit a page boundary */ | |
3026 | end_misalign = (INTERNAL_SIZE_T)(brk + size + correction); | |
3027 | correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign; | |
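/*
  Worked example of the correction arithmetic (hypothetical numbers,
  assuming 4096-byte pages, 8-byte alignment, and a page-aligned brk so
  front_misalign == 0): with old_size == 344 and size == 4096,

      correction  = 0 + 344                  can't merge with old_top
      end_misalign = brk + 4096 + 344
      correction += 8192 - 4440  ==  3752    round the end up to a page

  so the second MORECORE call asks for 4096 more bytes, and the new top
  runs from brk to a page boundary (8192 bytes here).
*/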
3028 | ||
3029 | assert(correction >= 0); | |
3030 | snd_brk = (char*)(MORECORE(correction)); | |
3031 | ||
3032 | /* | |
3033 | If can't allocate correction, try to at least find out current | |
3034 | brk. It might be enough to proceed without failing. | |
3035 | ||
3036 | Note that if second sbrk did NOT fail, we assume that space | |
3037 | is contiguous with first sbrk. This is a safe assumption unless | |
3038 | program is multithreaded but doesn't use locks and a foreign sbrk | |
3039 | occurred between our first and second calls. | |
3040 | */ | |
3041 | ||
3042 | if (snd_brk == (char*)(MORECORE_FAILURE)) { | |
3043 | correction = 0; | |
3044 | snd_brk = (char*)(MORECORE(0)); | |
3045 | } | |
3046 | } | |
3047 | ||
3048 | /* handle non-contiguous cases */ | |
3049 | else { | |
3050 | /* MORECORE/mmap must correctly align */ | |
3051 | assert(((unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK) == 0); | |
3052 | ||
3053 | /* Find out current end of memory */ | |
3054 | if (snd_brk == (char*)(MORECORE_FAILURE)) { | |
3055 | snd_brk = (char*)(MORECORE(0)); | |
3056 | } | |
3057 | } | |
3058 | ||
3059 | /* Adjust top based on results of second sbrk */ | |
3060 | if (snd_brk != (char*)(MORECORE_FAILURE)) { | |
3061 | av->top = (mchunkptr)aligned_brk; | |
3062 | set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE); | |
3063 | av->sbrked_mem += correction; | |
3064 | ||
3065 | /* | |
3066 | If not the first time through, we either have a | |
3067 | gap due to foreign sbrk or a non-contiguous region. Insert a | |
3068 | double fencepost at old_top to prevent consolidation with space | |
3069 | we don't own. These fenceposts are artificial chunks that are | |
3070 | marked as inuse and are in any case too small to use. We need | |
3071 | two to make sizes and alignments work out. | |
3072 | */ | |
3073 | ||
3074 | if (old_size != 0) { | |
3075 | /* | |
3076 | Shrink old_top to insert fenceposts, keeping size a | |
3077 | multiple of MALLOC_ALIGNMENT. We know there is at least | |
3078 | enough space in old_top to do this. | |
3079 | */ | |
3080 | old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK; | |
3081 | set_head(old_top, old_size | PREV_INUSE); | |
3082 | ||
3083 | /* | |
3084 | Note that the following assignments completely overwrite | |
3085 | old_top when old_size was previously MINSIZE. This is | |
3086 | intentional. We need the fencepost, even if old_top otherwise gets | |
3087 | lost. | |
3088 | */ | |
3089 | chunk_at_offset(old_top, old_size )->size = | |
3090 | SIZE_SZ|PREV_INUSE; | |
3091 | ||
3092 | chunk_at_offset(old_top, old_size + SIZE_SZ)->size = | |
3093 | SIZE_SZ|PREV_INUSE; | |
3094 | ||
3095 | /* If possible, release the rest. */ | |
3096 | if (old_size >= MINSIZE) { | |
3097 | fREe(chunk2mem(old_top)); | |
3098 | } | |
3099 | ||
3100 | } | |
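/*
  Resulting layout at the old top after the fencepost code above
  (sketch only):

      old_top: [ head: shrunk old_size | PREV_INUSE ] ...freed via fREe...
               [ fencepost 1: size = SIZE_SZ | PREV_INUSE ]
               [ fencepost 2: size = SIZE_SZ | PREV_INUSE ]
      ...gap or foreign space...
      new top begins at aligned_brk

  The two tiny inuse "chunks" can never be coalesced or allocated, so no
  traversal ever walks off the end of the old region into space we don't own.
*/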
3101 | } | |
3102 | } | |
3103 | ||
3104 | /* Update statistics */ | |
3105 | sum = av->sbrked_mem; | |
3106 | if (sum > (unsigned long)(av->max_sbrked_mem)) | |
3107 | av->max_sbrked_mem = sum; | |
3108 | ||
3109 | sum += av->mmapped_mem; | |
3110 | if (sum > (unsigned long)(av->max_total_mem)) | |
3111 | av->max_total_mem = sum; | |
3112 | ||
3113 | check_malloc_state(); | |
3114 | ||
3115 | /* finally, do the allocation */ | |
3116 | p = av->top; | |
3117 | size = chunksize(p); | |
3118 | ||
3119 | /* check that one of the above allocation paths succeeded */ | |
3120 | if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) { | |
3121 | remainder_size = size - nb; | |
3122 | remainder = chunk_at_offset(p, nb); | |
3123 | av->top = remainder; | |
3124 | set_head(p, nb | PREV_INUSE); | |
3125 | set_head(remainder, remainder_size | PREV_INUSE); | |
3126 | check_malloced_chunk(p, nb); | |
3127 | return chunk2mem(p); | |
3128 | } | |
3129 | } | |
3130 | ||
3131 | /* catch all failure paths */ | |
3132 | MALLOC_FAILURE_ACTION; | |
3133 | return 0; | |
3134 | } | |
3135 | ||
3136 | ||
3137 | /* | |
3138 | sYSTRIm is an inverse of sorts to sYSMALLOc. It gives memory back | |
3139 | to the system (via negative arguments to sbrk) if there is unused | |
3140 | memory at the `high' end of the malloc pool. It is called | |
3141 | automatically by free() when top space exceeds the trim | |
3142 | threshold. It is also called by the public malloc_trim routine. It | |
3143 | returns 1 if it actually released any memory, else 0. | |
3144 | */ | |
3145 | ||
3146 | #if __STD_C | |
3147 | static int sYSTRIm(size_t pad, mstate av) | |
3148 | #else | |
3149 | static int sYSTRIm(pad, av) size_t pad; mstate av; | |
3150 | #endif | |
3151 | { | |
3152 | long top_size; /* Amount of top-most memory */ | |
3153 | long extra; /* Amount to release */ | |
3154 | long released; /* Amount actually released */ | |
3155 | char* current_brk; /* address returned by pre-check sbrk call */ | |
3156 | char* new_brk; /* address returned by post-check sbrk call */ | |
3157 | size_t pagesz; | |
3158 | ||
3159 | pagesz = av->pagesize; | |
3160 | top_size = chunksize(av->top); | |
3161 | ||
3162 | /* Release in pagesize units, keeping at least one page */ | |
3163 | extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz; | |
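/*
  Example of the computation above, with pagesz == 4096, MINSIZE == 16,
  pad == 0 and top_size == 102400 (25 pages):

      extra = ((102400 - 0 - 16 + 4095) / 4096 - 1) * 4096
            = (25 - 1) * 4096  ==  98304

  so 24 pages are handed back and one full page remains in top.
*/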
3164 | ||
3165 | if (extra > 0) { | |
3166 | ||
3167 | /* | |
3168 | Only proceed if end of memory is where we last set it. | |
3169 | This avoids problems if there were foreign sbrk calls. | |
3170 | */ | |
3171 | current_brk = (char*)(MORECORE(0)); | |
3172 | if (current_brk == (char*)(av->top) + top_size) { | |
3173 | ||
3174 | /* | |
3175 | Attempt to release memory. We ignore MORECORE return value, | |
3176 | and instead call again to find out where new end of memory is. | |
3177 | This avoids problems if first call releases less than we asked, | |
3178 | or if failure somehow altered the brk value. (We could still | |
3179 | encounter problems if it altered brk in some very bad way, | |
3180 | but the only thing we can do is adjust anyway, which will cause | |
3181 | some downstream failure.) | |
3182 | */ | |
3183 | ||
3184 | MORECORE(-extra); | |
3185 | new_brk = (char*)(MORECORE(0)); | |
3186 | ||
3187 | if (new_brk != (char*)MORECORE_FAILURE) { | |
3188 | released = (long)(current_brk - new_brk); | |
3189 | ||
3190 | if (released != 0) { | |
3191 | /* Success. Adjust top. */ | |
3192 | av->sbrked_mem -= released; | |
3193 | set_head(av->top, (top_size - released) | PREV_INUSE); | |
3194 | check_malloc_state(); | |
3195 | return 1; | |
3196 | } | |
3197 | } | |
3198 | } | |
3199 | } | |
3200 | return 0; | |
3201 | } | |
3202 | ||
3203 | /* | |
3204 | ------------------------------ malloc ------------------------------ | |
3205 | */ | |
3206 | ||
3207 | #if __STD_C | |
3208 | Void_t* mALLOc(size_t bytes) | |
3209 | #else | |
3210 | Void_t* mALLOc(bytes) size_t bytes; | |
3211 | #endif | |
3212 | { | |
3213 | mstate av = get_malloc_state(); | |
3214 | ||
3215 | INTERNAL_SIZE_T nb; /* normalized request size */ | |
3216 | unsigned int idx; /* associated bin index */ | |
3217 | mbinptr bin; /* associated bin */ | |
3218 | mfastbinptr* fb; /* associated fastbin */ | |
3219 | ||
3220 | mchunkptr victim; /* inspected/selected chunk */ | |
3221 | INTERNAL_SIZE_T size; /* its size */ | |
3222 | int victim_index; /* its bin index */ | |
3223 | ||
3224 | mchunkptr remainder; /* remainder from a split */ | |
3225 | unsigned long remainder_size; /* its size */ | |
3226 | ||
3227 | unsigned int block; /* bit map traverser */ | |
3228 | unsigned int bit; /* bit map traverser */ | |
3229 | unsigned int map; /* current word of binmap */ | |
3230 | ||
3231 | mchunkptr fwd; /* misc temp for linking */ | |
3232 | mchunkptr bck; /* misc temp for linking */ | |
3233 | ||
3234 | /* | |
3235 | Convert request size to internal form by adding SIZE_SZ bytes | |
3236 | overhead plus possibly more to obtain necessary alignment and/or | |
3237 | to obtain a size of at least MINSIZE, the smallest allocatable | |
3238 | size. Also, checked_request2size traps (returning 0) request sizes | |
3239 | that are so large that they wrap around zero when padded and | |
3240 | aligned. | |
3241 | */ | |
3242 | ||
3243 | checked_request2size(bytes, nb); | |
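/*
  Normalization examples, assuming the usual 32-bit configuration of
  4-byte SIZE_SZ, 8-byte alignment and MINSIZE == 16:

      bytes == 13  ->  nb == (13 + 4 + 7) & ~7  ==  24
      bytes == 1   ->  padded size 12 is below MINSIZE, so nb == 16

  Requests near the top of the address range would make the padded size
  wrap past zero; checked_request2size flags those instead of computing nb.
*/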
3244 | ||
3245 | /* | |
3246 | If the size qualifies as a fastbin, first check corresponding bin. | |
3247 | This code is safe to execute even if av is not yet initialized, so we | |
3248 | can try it without checking, which saves some time on this fast path. | |
3249 | */ | |
3250 | ||
3251 | if ((unsigned long)(nb) <= (unsigned long)(av->max_fast)) { | |
3252 | fb = &(av->fastbins[(fastbin_index(nb))]); | |
3253 | if ( (victim = *fb) != 0) { | |
3254 | *fb = victim->fd; | |
3255 | check_remalloced_chunk(victim, nb); | |
3256 | return chunk2mem(victim); | |
3257 | } | |
3258 | } | |
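/*
  The pop above is strictly LIFO: if two equal-sized chunks a then b were
  freed into this fastbin, b->fd points at a and b is returned first.
  With the 8-byte-spaced fastbins of this configuration
  (fastbin_index(sz) == (sz >> 3) - 2), nb == 16 maps to bin 0,
  nb == 24 to bin 1, and so on up to max_fast.
*/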
3259 | ||
3260 | /* | |
3261 | If a small request, check regular bin. Since these "smallbins" | |
3262 | hold one size each, no searching within bins is necessary. | |
3263 | (For a large request, we need to wait until unsorted chunks are | |
3264 | processed to find best fit. But for small ones, fits are exact | |
3265 | anyway, so we can check now, which is faster.) | |
3266 | */ | |
3267 | ||
3268 | if (in_smallbin_range(nb)) { | |
3269 | idx = smallbin_index(nb); | |
3270 | bin = bin_at(av,idx); | |
3271 | ||
3272 | if ( (victim = last(bin)) != bin) { | |
3273 | if (victim == 0) /* initialization check */ | |
3274 | malloc_consolidate(av); | |
3275 | else { | |
3276 | bck = victim->bk; | |
3277 | set_inuse_bit_at_offset(victim, nb); | |
3278 | bin->bk = bck; | |
3279 | bck->fd = bin; | |
3280 | ||
3281 | check_malloced_chunk(victim, nb); | |
3282 | return chunk2mem(victim); | |
3283 | } | |
3284 | } | |
3285 | } | |
3286 | ||
3287 | /* | |
3288 | If this is a large request, consolidate fastbins before continuing. | |
3289 | While it might look excessive to kill all fastbins before | |
3290 | even seeing if there is space available, this avoids | |
3291 | fragmentation problems normally associated with fastbins. | |
3292 | Also, in practice, programs tend to have runs of either small or | |
3293 | large requests, but less often mixtures, so consolidation is not | |
3294 | invoked all that often in most programs. And the programs in which | |
3295 | it is called frequently otherwise tend to fragment. | |
3296 | */ | |
3297 | ||
3298 | else { | |
3299 | idx = largebin_index(nb); | |
3300 | if (have_fastchunks(av)) | |
3301 | malloc_consolidate(av); | |
3302 | } | |
3303 | ||
3304 | /* | |
3305 | Process recently freed or remaindered chunks, taking one only if | |
3306 | it is an exact fit, or, if this is a small request, the chunk is the remainder from | |
3307 | the most recent non-exact fit. Place other traversed chunks in | |
3308 | bins. Note that this step is the only place in any routine where | |
3309 | chunks are placed in bins. | |
3310 | ||
3311 | The outer loop here is needed because we might not realize until | |
3312 | near the end of malloc that we should have consolidated, so must | |
3313 | do so and retry. This happens at most once, and only when we would | |
3314 | otherwise need to expand memory to service a "small" request. | |
3315 | */ | |
3316 | ||
3317 | for(;;) { | |
3318 | ||
3319 | while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) { | |
3320 | bck = victim->bk; | |
3321 | size = chunksize(victim); | |
3322 | ||
3323 | /* | |
3324 | If a small request, try to use last remainder if it is the | |
3325 | only chunk in unsorted bin. This helps promote locality for | |
3326 | runs of consecutive small requests. This is the only | |
3327 | exception to best-fit, and applies only when there is | |
3328 | no exact fit for a small chunk. | |
3329 | */ | |
3330 | ||
3331 | if (in_smallbin_range(nb) && | |
3332 | bck == unsorted_chunks(av) && | |
3333 | victim == av->last_remainder && | |
3334 | (unsigned long)(size) > (unsigned long)(nb + MINSIZE)) { | |
3335 | ||
3336 | /* split and reattach remainder */ | |
3337 | remainder_size = size - nb; | |
3338 | remainder = chunk_at_offset(victim, nb); | |
3339 | unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder; | |
3340 | av->last_remainder = remainder; | |
3341 | remainder->bk = remainder->fd = unsorted_chunks(av); | |
3342 | ||
3343 | set_head(victim, nb | PREV_INUSE); | |
3344 | set_head(remainder, remainder_size | PREV_INUSE); | |
3345 | set_foot(remainder, remainder_size); | |
3346 | ||
3347 | check_malloced_chunk(victim, nb); | |
3348 | return chunk2mem(victim); | |
3349 | } | |
3350 | ||
3351 | /* remove from unsorted list */ | |
3352 | unsorted_chunks(av)->bk = bck; | |
3353 | bck->fd = unsorted_chunks(av); | |
3354 | ||
3355 | /* Take now instead of binning if exact fit */ | |
3356 | ||
3357 | if (size == nb) { | |
3358 | set_inuse_bit_at_offset(victim, size); | |
3359 | check_malloced_chunk(victim, nb); | |
3360 | return chunk2mem(victim); | |
3361 | } | |
3362 | ||
3363 | /* place chunk in bin */ | |
3364 | ||
3365 | if (in_smallbin_range(size)) { | |
3366 | victim_index = smallbin_index(size); | |
3367 | bck = bin_at(av, victim_index); | |
3368 | fwd = bck->fd; | |
3369 | } | |
3370 | else { | |
3371 | victim_index = largebin_index(size); | |
3372 | bck = bin_at(av, victim_index); | |
3373 | fwd = bck->fd; | |
3374 | ||
3375 | /* maintain large bins in sorted order */ | |
3376 | if (fwd != bck) { | |
3377 | size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */ | |
3378 | /* if smaller than smallest, bypass loop below */ | |
3379 | if ((unsigned long)(size) <= (unsigned long)(bck->bk->size)) { | |
3380 | fwd = bck; | |
3381 | bck = bck->bk; | |
3382 | } | |
3383 | else { | |
3384 | while ((unsigned long)(size) < (unsigned long)(fwd->size)) | |
3385 | fwd = fwd->fd; | |
3386 | bck = fwd->bk; | |
3387 | } | |
3388 | } | |
3389 | } | |
3390 | ||
3391 | mark_bin(av, victim_index); | |
3392 | victim->bk = bck; | |
3393 | victim->fd = fwd; | |
3394 | fwd->bk = victim; | |
3395 | bck->fd = victim; | |
3396 | } | |
3397 | ||
3398 | /* | |
3399 | If a large request, scan through the chunks of current bin in | |
3400 | sorted order to find smallest that fits. This is the only step | |
3401 | where an unbounded number of chunks might be scanned without doing | |
3402 | anything useful with them. However the lists tend to be short. | |
3403 | */ | |
3404 | ||
3405 | if (!in_smallbin_range(nb)) { | |
3406 | bin = bin_at(av, idx); | |
3407 | ||
3408 | /* skip scan if empty or largest chunk is too small */ | |
3409 | if ((victim = last(bin)) != bin && | |
3410 | (unsigned long)(first(bin)->size) >= (unsigned long)(nb)) { | |
3411 | ||
3412 | while (((unsigned long)(size = chunksize(victim)) < | |
3413 | (unsigned long)(nb))) | |
3414 | victim = victim->bk; | |
3415 | ||
3416 | remainder_size = size - nb; | |
3417 | unlink(victim, bck, fwd); | |
3418 | ||
3419 | /* Exhaust */ | |
3420 | if (remainder_size < MINSIZE) { | |
3421 | set_inuse_bit_at_offset(victim, size); | |
3422 | check_malloced_chunk(victim, nb); | |
3423 | return chunk2mem(victim); | |
3424 | } | |
3425 | /* Split */ | |
3426 | else { | |
3427 | remainder = chunk_at_offset(victim, nb); | |
3428 | unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder; | |
3429 | remainder->bk = remainder->fd = unsorted_chunks(av); | |
3430 | set_head(victim, nb | PREV_INUSE); | |
3431 | set_head(remainder, remainder_size | PREV_INUSE); | |
3432 | set_foot(remainder, remainder_size); | |
3433 | check_malloced_chunk(victim, nb); | |
3434 | return chunk2mem(victim); | |
3435 | } | |
3436 | } | |
3437 | } | |
3438 | ||
3439 | /* | |
3440 | Search for a chunk by scanning bins, starting with next largest | |
3441 | bin. This search is strictly by best-fit; i.e., the smallest | |
3442 | (with ties going to approximately the least recently used) chunk | |
3443 | that fits is selected. | |
3444 | ||
3445 | The bitmap avoids needing to check that most blocks are nonempty. | |
3446 | The particular case of skipping all bins during warm-up phases | |
3447 | when no chunks have been returned yet is faster than it might look. | |
3448 | */ | |
3449 | ||
3450 | ++idx; | |
3451 | bin = bin_at(av,idx); | |
3452 | block = idx2block(idx); | |
3453 | map = av->binmap[block]; | |
3454 | bit = idx2bit(idx); | |
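/*
  Binmap bookkeeping, assuming 32 bins per map word (BINMAPSHIFT == 5):
  idx2block(i) == i >> 5 and idx2bit(i) == 1 << (i & 31). For example,
  idx == 70 lives in block 2 with bit mask 1 << 6 == 0x40, so the single
  word test below can skip 32 empty bins at a time.
*/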
3455 | ||
3456 | for (;;) { | |
3457 | ||
3458 | /* Skip rest of block if there are no more set bits in this block. */ | |
3459 | if (bit > map || bit == 0) { | |
3460 | do { | |
3461 | if (++block >= BINMAPSIZE) /* out of bins */ | |
3462 | goto use_top; | |
3463 | } while ( (map = av->binmap[block]) == 0); | |
3464 | ||
3465 | bin = bin_at(av, (block << BINMAPSHIFT)); | |
3466 | bit = 1; | |
3467 | } | |
3468 | ||
3469 | /* Advance to bin with set bit. There must be one. */ | |
3470 | while ((bit & map) == 0) { | |
3471 | bin = next_bin(bin); | |
3472 | bit <<= 1; | |
3473 | assert(bit != 0); | |
3474 | } | |
3475 | ||
3476 | /* Inspect the bin. It is likely to be non-empty */ | |
3477 | victim = last(bin); | |
3478 | ||
3479 | /* If a false alarm (empty bin), clear the bit. */ | |
3480 | if (victim == bin) { | |
3481 | av->binmap[block] = map &= ~bit; /* Write through */ | |
3482 | bin = next_bin(bin); | |
3483 | bit <<= 1; | |
3484 | } | |
3485 | ||
3486 | else { | |
3487 | size = chunksize(victim); | |
3488 | ||
3489 | /* We know the first chunk in this bin is big enough to use. */ | |
3490 | assert((unsigned long)(size) >= (unsigned long)(nb)); | |
3491 | ||
3492 | remainder_size = size - nb; | |
3493 | ||
3494 | /* unlink */ | |
3495 | bck = victim->bk; | |
3496 | bin->bk = bck; | |
3497 | bck->fd = bin; | |
3498 | ||
3499 | /* Exhaust */ | |
3500 | if (remainder_size < MINSIZE) { | |
3501 | set_inuse_bit_at_offset(victim, size); | |
3502 | check_malloced_chunk(victim, nb); | |
3503 | return chunk2mem(victim); | |
3504 | } | |
3505 | ||
3506 | /* Split */ | |
3507 | else { | |
3508 | remainder = chunk_at_offset(victim, nb); | |
3509 | ||
3510 | unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder; | |
3511 | remainder->bk = remainder->fd = unsorted_chunks(av); | |
3512 | /* advertise as last remainder */ | |
3513 | if (in_smallbin_range(nb)) | |
3514 | av->last_remainder = remainder; | |
3515 | ||
3516 | set_head(victim, nb | PREV_INUSE); | |
3517 | set_head(remainder, remainder_size | PREV_INUSE); | |
3518 | set_foot(remainder, remainder_size); | |
3519 | check_malloced_chunk(victim, nb); | |
3520 | return chunk2mem(victim); | |
3521 | } | |
3522 | } | |
3523 | } | |
3524 | ||
3525 | use_top: | |
3526 | /* | |
3527 | If large enough, split off the chunk bordering the end of memory | |
3528 | (held in av->top). Note that this is in accord with the best-fit | |
3529 | search rule. In effect, av->top is treated as larger (and thus | |
3530 | less well fitting) than any other available chunk since it can | |
3531 | be extended to be as large as necessary (up to system | |
3532 | limitations). | |
3533 | ||
3534 | We require that av->top always exists (i.e., has size >= | |
3535 | MINSIZE) after initialization, so if it would otherwise be | |
3536 | exhausted by the current request, it is replenished. (The main | |
3537 | reason for ensuring it exists is that we may need MINSIZE space | |
3538 | to put in fenceposts in sysmalloc.) | |
3539 | */ | |
3540 | ||
3541 | victim = av->top; | |
3542 | size = chunksize(victim); | |
3543 | ||
3544 | if ((unsigned long)(size) >= (unsigned long)(nb + MINSIZE)) { | |
3545 | remainder_size = size - nb; | |
3546 | remainder = chunk_at_offset(victim, nb); | |
3547 | av->top = remainder; | |
3548 | set_head(victim, nb | PREV_INUSE); | |
3549 | set_head(remainder, remainder_size | PREV_INUSE); | |
3550 | ||
3551 | check_malloced_chunk(victim, nb); | |
3552 | return chunk2mem(victim); | |
3553 | } | |
3554 | ||
3555 | /* | |
3556 | If there is space available in fastbins, consolidate and retry, | |
3557 | to possibly avoid expanding memory. This can occur only if nb is | |
3558 | in smallbin range so we didn't consolidate upon entry. | |
3559 | */ | |
3560 | ||
3561 | else if (have_fastchunks(av)) { | |
3562 | assert(in_smallbin_range(nb)); | |
3563 | malloc_consolidate(av); | |
3564 | idx = smallbin_index(nb); /* restore original bin index */ | |
3565 | } | |
3566 | ||
3567 | /* | |
3568 | Otherwise, relay to handle system-dependent cases | |
3569 | */ | |
3570 | else | |
3571 | return sYSMALLOc(nb, av); | |
3572 | } | |
3573 | } | |
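/*
  Usage sketch (illustrative only, assuming the default public names so
  that mALLOc/fREe are exported as malloc/free):

      void* small = malloc(100);        exact or near fit from bins/top
      void* big   = malloc(1 << 20);    >= mmap_threshold: usually mmapped
      free(small);
      free(big);

  Every successful return comes from one of the paths in the function
  above: fastbin, smallbin, unsorted/best-fit scan, top, or sysmalloc.
*/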
3574 | ||
3575 | /* | |
3576 | ------------------------------ free ------------------------------ | |
3577 | */ | |
3578 | ||
3579 | #if __STD_C | |
3580 | void fREe(Void_t* mem) | |
3581 | #else | |
3582 | void fREe(mem) Void_t* mem; | |
3583 | #endif | |
3584 | { | |
3585 | mstate av = get_malloc_state(); | |
3586 | ||
3587 | mchunkptr p; /* chunk corresponding to mem */ | |
3588 | INTERNAL_SIZE_T size; /* its size */ | |
3589 | mfastbinptr* fb; /* associated fastbin */ | |
3590 | mchunkptr nextchunk; /* next contiguous chunk */ | |
3591 | INTERNAL_SIZE_T nextsize; /* its size */ | |
3592 | int nextinuse; /* true if nextchunk is used */ | |
3593 | INTERNAL_SIZE_T prevsize; /* size of previous contiguous chunk */ | |
3594 | mchunkptr bck; /* misc temp for linking */ | |
3595 | mchunkptr fwd; /* misc temp for linking */ | |
3596 | ||
3597 | ||
3598 | /* free(0) has no effect */ | |
3599 | if (mem != 0) { | |
3600 | p = mem2chunk(mem); | |
3601 | size = chunksize(p); | |
3602 | ||
3603 | check_inuse_chunk(p); | |
3604 | ||
3605 | /* | |
3606 | If eligible, place chunk on a fastbin so it can be found | |
3607 | and used quickly in malloc. | |
3608 | */ | |
3609 | ||
3610 | if ((unsigned long)(size) <= (unsigned long)(av->max_fast) | |
3611 | ||
3612 | #if TRIM_FASTBINS | |
3613 | /* | |
3614 | If TRIM_FASTBINS set, don't place chunks | |
3615 | bordering top into fastbins | |
3616 | */ | |
3617 | && (chunk_at_offset(p, size) != av->top) | |
3618 | #endif | |
3619 | ) { | |
3620 | ||
3621 | set_fastchunks(av); | |
3622 | fb = &(av->fastbins[fastbin_index(size)]); | |
3623 | p->fd = *fb; | |
3624 | *fb = p; | |
3625 | } | |
3626 | ||
3627 | /* | |
3628 | Consolidate other non-mmapped chunks as they arrive. | |
3629 | */ | |
3630 | ||
3631 | else if (!chunk_is_mmapped(p)) { | |
3632 | nextchunk = chunk_at_offset(p, size); | |
3633 | nextsize = chunksize(nextchunk); | |
3634 | ||
3635 | /* consolidate backward */ | |
3636 | if (!prev_inuse(p)) { | |
3637 | prevsize = p->prev_size; | |
3638 | size += prevsize; | |
3639 | p = chunk_at_offset(p, -((long) prevsize)); | |
3640 | unlink(p, bck, fwd); | |
3641 | } | |
3642 | ||
3643 | if (nextchunk != av->top) { | |
3644 | /* get and clear inuse bit */ | |
3645 | nextinuse = inuse_bit_at_offset(nextchunk, nextsize); | |
3646 | set_head(nextchunk, nextsize); | |
3647 | ||
3648 | /* consolidate forward */ | |
3649 | if (!nextinuse) { | |
3650 | unlink(nextchunk, bck, fwd); | |
3651 | size += nextsize; | |
3652 | } | |
3653 | ||
3654 | /* | |
3655 | Place the chunk in unsorted chunk list. Chunks are | |
3656 | not placed into regular bins until after they have | |
3657 | been given one chance to be used in malloc. | |
3658 | */ | |
3659 | ||
3660 | bck = unsorted_chunks(av); | |
3661 | fwd = bck->fd; | |
3662 | p->bk = bck; | |
3663 | p->fd = fwd; | |
3664 | bck->fd = p; | |
3665 | fwd->bk = p; | |
3666 | ||
3667 | set_head(p, size | PREV_INUSE); | |
3668 | set_foot(p, size); | |
3669 | ||
3670 | check_free_chunk(p); | |
3671 | } | |
3672 | ||
3673 | /* | |
3674 | If the chunk borders the current high end of memory, | |
3675 | consolidate into top | |
3676 | */ | |
3677 | ||
3678 | else { | |
3679 | size += nextsize; | |
3680 | set_head(p, size | PREV_INUSE); | |
3681 | av->top = p; | |
3682 | check_chunk(p); | |
3683 | } | |
3684 | ||
3685 | /* | |
3686 | If freeing a large space, consolidate possibly-surrounding | |
3687 | chunks. Then, if the total unused topmost memory exceeds trim | |
3688 | threshold, ask malloc_trim to reduce top. | |
3689 | ||
3690 | Unless max_fast is 0, we don't know if there are fastbins | |
3691 | bordering top, so we cannot tell for sure whether threshold | |
3692 | has been reached unless fastbins are consolidated. But we | |
3693 | don't want to consolidate on each free. As a compromise, | |
3694 | consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD | |
3695 | is reached. | |
3696 | */ | |
3697 | ||
3698 | if ((unsigned long)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) { | |
3699 | if (have_fastchunks(av)) | |
3700 | malloc_consolidate(av); | |
3701 | ||
3702 | #ifndef MORECORE_CANNOT_TRIM | |
3703 | if ((unsigned long)(chunksize(av->top)) >= | |
3704 | (unsigned long)(av->trim_threshold)) | |
3705 | sYSTRIm(av->top_pad, av); | |
3706 | #endif | |
3707 | } | |
3708 | ||
3709 | } | |
3710 | /* | |
3711 | If the chunk was allocated via mmap, release via munmap() | |
3712 | Note that if HAVE_MMAP is false but chunk_is_mmapped is | |
3713 | true, then the user must have overwritten memory. There's nothing | |
3714 | we can do to catch this error unless DEBUG is set, in which case | |
3715 | check_inuse_chunk (above) will have triggered an error. | |
3716 | */ | |
3717 | ||
3718 | else { | |
3719 | #if HAVE_MMAP | |
3720 | int ret; | |
3721 | INTERNAL_SIZE_T offset = p->prev_size; | |
3722 | av->n_mmaps--; | |
3723 | av->mmapped_mem -= (size + offset); | |
3724 | ret = munmap((char*)p - offset, size + offset); | |
3725 | /* munmap returns non-zero on failure */ | |
3726 | assert(ret == 0); | |
3727 | #endif | |
3728 | } | |
3729 | } | |
3730 | } | |
3731 | ||
3732 | /* | |
3733 | ------------------------- malloc_consolidate ------------------------- | |
3734 | ||
3735 | malloc_consolidate is a specialized version of free() that tears | |
3736 | down chunks held in fastbins. Free itself cannot be used for this | |
3737 | purpose since, among other things, it might place chunks back onto | |
3738 | fastbins. So, instead, we need to use a minor variant of the same | |
3739 | code. | |
3740 | ||
3741 | Also, because this routine needs to be called the first time through | |
3742 | malloc anyway, it turns out to be the perfect place to trigger | |
3743 | initialization code. | |
3744 | */ | |
3745 | ||
3746 | #if __STD_C | |
3747 | static void malloc_consolidate(mstate av) | |
3748 | #else | |
3749 | static void malloc_consolidate(av) mstate av; | |
3750 | #endif | |
3751 | { | |
3752 | mfastbinptr* fb; /* current fastbin being consolidated */ | |
3753 | mfastbinptr* maxfb; /* last fastbin (for loop control) */ | |
3754 | mchunkptr p; /* current chunk being consolidated */ | |
3755 | mchunkptr nextp; /* next chunk to consolidate */ | |
3756 | mchunkptr unsorted_bin; /* bin header */ | |
3757 | mchunkptr first_unsorted; /* chunk to link to */ | |
3758 | ||
3759 | /* These have same use as in free() */ | |
3760 | mchunkptr nextchunk; | |
3761 | INTERNAL_SIZE_T size; | |
3762 | INTERNAL_SIZE_T nextsize; | |
3763 | INTERNAL_SIZE_T prevsize; | |
3764 | int nextinuse; | |
3765 | mchunkptr bck; | |
3766 | mchunkptr fwd; | |
3767 | ||
3768 | /* | |
3769 | If max_fast is 0, we know that av hasn't | |
3770 | yet been initialized, in which case do so below | |
3771 | */ | |
3772 | ||
3773 | if (av->max_fast != 0) { | |
3774 | clear_fastchunks(av); | |
3775 | ||
3776 | unsorted_bin = unsorted_chunks(av); | |
3777 | ||
3778 | /* | |
3779 | Remove each chunk from fast bin and consolidate it, placing it | |
3780 | then in unsorted bin. Among other reasons for doing this, | |
3781 | placing in unsorted bin avoids needing to calculate actual bins | |
3782 | until malloc is sure that chunks aren't immediately going to be | |
3783 | reused anyway. | |
3784 | */ | |
3785 | ||
3786 | maxfb = &(av->fastbins[fastbin_index(av->max_fast)]); | |
3787 | fb = &(av->fastbins[0]); | |
3788 | do { | |
3789 | if ( (p = *fb) != 0) { | |
3790 | *fb = 0; | |
3791 | ||
3792 | do { | |
3793 | check_inuse_chunk(p); | |
3794 | nextp = p->fd; | |
3795 | ||
3796 | /* Slightly streamlined version of consolidation code in free() */ | |
3797 | size = p->size & ~PREV_INUSE; | |
3798 | nextchunk = chunk_at_offset(p, size); | |
3799 | nextsize = chunksize(nextchunk); | |
3800 | ||
3801 | if (!prev_inuse(p)) { | |
3802 | prevsize = p->prev_size; | |
3803 | size += prevsize; | |
3804 | p = chunk_at_offset(p, -((long) prevsize)); | |
3805 | unlink(p, bck, fwd); | |
3806 | } | |
3807 | ||
3808 | if (nextchunk != av->top) { | |
3809 | nextinuse = inuse_bit_at_offset(nextchunk, nextsize); | |
3810 | set_head(nextchunk, nextsize); | |
3811 | ||
3812 | if (!nextinuse) { | |
3813 | size += nextsize; | |
3814 | unlink(nextchunk, bck, fwd); | |
3815 | } | |
3816 | ||
3817 | first_unsorted = unsorted_bin->fd; | |
3818 | unsorted_bin->fd = p; | |
3819 | first_unsorted->bk = p; | |
3820 | ||
3821 | set_head(p, size | PREV_INUSE); | |
3822 | p->bk = unsorted_bin; | |
3823 | p->fd = first_unsorted; | |
3824 | set_foot(p, size); | |
3825 | } | |
3826 | ||
3827 | else { | |
3828 | size += nextsize; | |
3829 | set_head(p, size | PREV_INUSE); | |
3830 | av->top = p; | |
3831 | } | |
3832 | ||
3833 | } while ( (p = nextp) != 0); | |
3834 | ||
3835 | } | |
3836 | } while (fb++ != maxfb); | |
3837 | } | |
3838 | else { | |
3839 | malloc_init_state(av); | |
3840 | check_malloc_state(); | |
3841 | } | |
3842 | } | |
3843 | ||
3844 | /* | |
3845 | ------------------------------ realloc ------------------------------ | |
3846 | */ | |
3847 | ||
3848 | ||
3849 | #if __STD_C | |
3850 | Void_t* rEALLOc(Void_t* oldmem, size_t bytes) | |
3851 | #else | |
3852 | Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes; | |
3853 | #endif | |
3854 | { | |
3855 | mstate av = get_malloc_state(); | |
3856 | ||
3857 | INTERNAL_SIZE_T nb; /* padded request size */ | |
3858 | ||
3859 | mchunkptr oldp; /* chunk corresponding to oldmem */ | |
3860 | INTERNAL_SIZE_T oldsize; /* its size */ | |
3861 | ||
3862 | mchunkptr newp; /* chunk to return */ | |
3863 | INTERNAL_SIZE_T newsize; /* its size */ | |
3864 | Void_t* newmem; /* corresponding user mem */ | |
3865 | ||
3866 | mchunkptr next; /* next contiguous chunk after oldp */ | |
3867 | ||
3868 | mchunkptr remainder; /* extra space at end of newp */ | |
3869 | unsigned long remainder_size; /* its size */ | |
3870 | ||
3871 | mchunkptr bck; /* misc temp for linking */ | |
3872 | mchunkptr fwd; /* misc temp for linking */ | |
3873 | ||
3874 | unsigned long copysize; /* bytes to copy */ | |
3875 | unsigned int ncopies; /* INTERNAL_SIZE_T words to copy */ | |
3876 | INTERNAL_SIZE_T* s; /* copy source */ | |
3877 | INTERNAL_SIZE_T* d; /* copy destination */ | |
3878 | ||
3879 | ||
3880 | #ifdef REALLOC_ZERO_BYTES_FREES | |
3881 | if (bytes == 0) { | |
3882 | fREe(oldmem); | |
3883 | return 0; | |
3884 | } | |
3885 | #endif | |
3886 | ||
3887 | /* realloc of null is supposed to be same as malloc */ | |
3888 | if (oldmem == 0) return mALLOc(bytes); | |
3889 | ||
3890 | checked_request2size(bytes, nb); | |
3891 | ||
3892 | oldp = mem2chunk(oldmem); | |
3893 | oldsize = chunksize(oldp); | |
3894 | ||
3895 | check_inuse_chunk(oldp); | |
3896 | ||
3897 | if (!chunk_is_mmapped(oldp)) { | |
3898 | ||
3899 | if ((unsigned long)(oldsize) >= (unsigned long)(nb)) { | |
3900 | /* already big enough; split below */ | |
3901 | newp = oldp; | |
3902 | newsize = oldsize; | |
3903 | } | |
3904 | ||
3905 | else { | |
3906 | next = chunk_at_offset(oldp, oldsize); | |
3907 | ||
3908 | /* Try to expand forward into top */ | |
3909 | if (next == av->top && | |
3910 | (unsigned long)(newsize = oldsize + chunksize(next)) >= | |
3911 | (unsigned long)(nb + MINSIZE)) { | |
3912 | set_head_size(oldp, nb); | |
3913 | av->top = chunk_at_offset(oldp, nb); | |
3914 | set_head(av->top, (newsize - nb) | PREV_INUSE); | |
3915 | return chunk2mem(oldp); | |
3916 | } | |
3917 | ||
3918 | /* Try to expand forward into next chunk; split off remainder below */ | |
3919 | else if (next != av->top && | |
3920 | !inuse(next) && | |
3921 | (unsigned long)(newsize = oldsize + chunksize(next)) >= | |
3922 | (unsigned long)(nb)) { | |
3923 | newp = oldp; | |
3924 | unlink(next, bck, fwd); | |
3925 | } | |
3926 | ||
3927 | /* allocate, copy, free */ | |
3928 | else { | |
3929 | newmem = mALLOc(nb - MALLOC_ALIGN_MASK); | |
3930 | if (newmem == 0) | |
3931 | return 0; /* propagate failure */ | |
3932 | ||
3933 | newp = mem2chunk(newmem); | |
3934 | newsize = chunksize(newp); | |
3935 | ||
3936 | /* | |
3937 | Avoid copy if newp is next chunk after oldp. | |
3938 | */ | |
3939 | if (newp == next) { | |
3940 | newsize += oldsize; | |
3941 | newp = oldp; | |
3942 | } | |
3943 | else { | |
3944 | /* | |
3945 | Unroll copy of <= 36 bytes (72 if 8byte sizes) | |
3946 | We know that contents have an odd number of | |
3947 | INTERNAL_SIZE_T-sized words; minimally 3. | |
3948 | */ | |
3949 | ||
3950 | copysize = oldsize - SIZE_SZ; | |
3951 | s = (INTERNAL_SIZE_T*)(oldmem); | |
3952 | d = (INTERNAL_SIZE_T*)(newmem); | |
3953 | ncopies = copysize / sizeof(INTERNAL_SIZE_T); | |
3954 | assert(ncopies >= 3); | |
3955 | ||
3956 | if (ncopies > 9) | |
3957 | MALLOC_COPY(d, s, copysize); | |
3958 | ||
3959 | else { | |
3960 | *(d+0) = *(s+0); | |
3961 | *(d+1) = *(s+1); | |
3962 | *(d+2) = *(s+2); | |
3963 | if (ncopies > 4) { | |
3964 | *(d+3) = *(s+3); | |
3965 | *(d+4) = *(s+4); | |
3966 | if (ncopies > 6) { | |
3967 | *(d+5) = *(s+5); | |
3968 | *(d+6) = *(s+6); | |
3969 | if (ncopies > 8) { | |
3970 | *(d+7) = *(s+7); | |
3971 | *(d+8) = *(s+8); | |
3972 | } | |
3973 | } | |
3974 | } | |
3975 | } | |
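/*
  Example of the unrolled path above, assuming 4-byte INTERNAL_SIZE_T:
  oldsize == 32 gives copysize == 28 and ncopies == 7, so the chain
  copies words 0..6 inline. Beyond 9 words (36 bytes, 72 with 8-byte
  sizes) a bulk MALLOC_COPY is assumed to be cheaper than more tests.
*/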
3976 | ||
3977 | fREe(oldmem); | |
3978 | check_inuse_chunk(newp); | |
3979 | return chunk2mem(newp); | |
3980 | } | |
3981 | } | |
3982 | } | |
3983 | ||
3984 | /* If possible, free extra space in old or extended chunk */ | |
3985 | ||
3986 | assert((unsigned long)(newsize) >= (unsigned long)(nb)); | |
3987 | ||
3988 | remainder_size = newsize - nb; | |
3989 | ||
3990 | if (remainder_size < MINSIZE) { /* not enough extra to split off */ | |
3991 | set_head_size(newp, newsize); | |
3992 | set_inuse_bit_at_offset(newp, newsize); | |
3993 | } | |
3994 | else { /* split remainder */ | |
3995 | remainder = chunk_at_offset(newp, nb); | |
3996 | set_head_size(newp, nb); | |
3997 | set_head(remainder, remainder_size | PREV_INUSE); | |
3998 | /* Mark remainder as inuse so free() won't complain */ | |
3999 | set_inuse_bit_at_offset(remainder, remainder_size); | |
4000 | fREe(chunk2mem(remainder)); | |
4001 | } | |
4002 | ||
4003 | check_inuse_chunk(newp); | |
4004 | return chunk2mem(newp); | |
4005 | } | |
4006 | ||
4007 | /* | |
4008 | Handle mmap cases | |
4009 | */ | |
4010 | ||
4011 | else { | |
4012 | #if HAVE_MMAP | |
4013 | ||
4014 | #if HAVE_MREMAP | |
4015 | INTERNAL_SIZE_T offset = oldp->prev_size; | |
4016 | size_t pagemask = av->pagesize - 1; | |
4017 | char *cp; | |
4018 | unsigned long sum; | |
4019 | ||
4020 | /* Note the extra SIZE_SZ overhead */ | |
4021 | newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask; | |
4022 | ||
4023 | /* don't need to remap if still within same page */ | |
4024 | if (oldsize == newsize - offset) | |
4025 | return oldmem; | |
4026 | ||
4027 | cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1); | |
4028 | ||
4029 | if (cp != (char*)MORECORE_FAILURE) { | |
4030 | ||
4031 | newp = (mchunkptr)(cp + offset); | |
4032 | set_head(newp, (newsize - offset)|IS_MMAPPED); | |
4033 | ||
4034 | assert(aligned_OK(chunk2mem(newp))); | |
4035 | assert(newp->prev_size == offset); | |
4036 | ||
4037 | /* update statistics */ | |
4038 | sum = av->mmapped_mem += newsize - oldsize; | |
4039 | if (sum > (unsigned long)(av->max_mmapped_mem)) | |
4040 | av->max_mmapped_mem = sum; | |
4041 | sum += av->sbrked_mem; | |
4042 | if (sum > (unsigned long)(av->max_total_mem)) | |
4043 | av->max_total_mem = sum; | |
4044 | ||
4045 | return chunk2mem(newp); | |
4046 | } | |
4047 | #endif | |
4048 | ||
4049 | /* Note the extra SIZE_SZ overhead. */ | |
4050 | if ((unsigned long)(oldsize) >= (unsigned long)(nb + SIZE_SZ)) | |
4051 | newmem = oldmem; /* do nothing */ | |
4052 | else { | |
4053 | /* Must alloc, copy, free. */ | |
4054 | newmem = mALLOc(nb - MALLOC_ALIGN_MASK); | |
4055 | if (newmem != 0) { | |
4056 | MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ); | |
4057 | fREe(oldmem); | |
4058 | } | |
4059 | } | |
4060 | return newmem; | |
4061 | ||
4062 | #else | |
4063 | /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */ | |
4064 | check_malloc_state(); | |
4065 | MALLOC_FAILURE_ACTION; | |
4066 | return 0; | |
4067 | #endif | |
4068 | } | |
4069 | } | |
4070 | ||
4071 | /* | |
4072 | ------------------------------ memalign ------------------------------ | |
4073 | */ | |
4074 | ||
4075 | #if __STD_C | |
4076 | Void_t* mEMALIGn(size_t alignment, size_t bytes) | |
4077 | #else | |
4078 | Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes; | |
4079 | #endif | |
4080 | { | |
4081 | INTERNAL_SIZE_T nb; /* padded request size */ | |
4082 | char* m; /* memory returned by malloc call */ | |
4083 | mchunkptr p; /* corresponding chunk */ | |
4084 | char* brk; /* alignment point within p */ | |
4085 | mchunkptr newp; /* chunk to return */ | |
4086 | INTERNAL_SIZE_T newsize; /* its size */ | |
4087 | INTERNAL_SIZE_T leadsize; /* leading space before alignment point */ | |
4088 | mchunkptr remainder; /* spare room at end to split off */ | |
4089 | unsigned long remainder_size; /* its size */ | |
4090 | INTERNAL_SIZE_T size; | |
4091 | ||
4092 | /* If need less alignment than we give anyway, just relay to malloc */ | |
4093 | ||
4094 | if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes); | |
4095 | ||
4096 | /* Otherwise, ensure that it is at least a minimum chunk size */ | |
4097 | ||
4098 | if (alignment < MINSIZE) alignment = MINSIZE; | |
4099 | ||
4100 | /* Make sure alignment is power of 2 (in case MINSIZE is not). */ | |
4101 | if ((alignment & (alignment - 1)) != 0) { | |
4102 | size_t a = MALLOC_ALIGNMENT * 2; | |
4103 | while ((unsigned long)a < (unsigned long)alignment) a <<= 1; | |
4104 | alignment = a; | |
4105 | } | |
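/*
  Example: memalign with alignment == 24 (not a power of two) is rounded
  up here: a starts at 2 * MALLOC_ALIGNMENT (16, assuming 8-byte
  alignment) and doubles to 32, the first power of two >= 24.
*/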
4106 | ||
4107 | checked_request2size(bytes, nb); | |
4108 | ||
4109 | /* | |
4110 | Strategy: find a spot within that chunk that meets the alignment | |
4111 | request, and then possibly free the leading and trailing space. | |
4112 | */ | |
4113 | ||
4114 | ||
4115 | /* Call malloc with worst case padding to hit alignment. */ | |
4116 | ||
4117 | m = (char*)(mALLOc(nb + alignment + MINSIZE)); | |
4118 | ||
4119 | if (m == 0) return 0; /* propagate failure */ | |
4120 | ||
4121 | p = mem2chunk(m); | |
4122 | ||
4123 | if ((((unsigned long)(m)) % alignment) != 0) { /* misaligned */ | |
4124 | ||
4125 | /* | |
4126 | Find an aligned spot inside chunk. Since we need to give back | |
4127 | leading space in a chunk of at least MINSIZE, if the first | |
4128 | calculation places us at a spot with less than MINSIZE leader, | |
4129 | we can move to the next aligned spot -- we've allocated enough | |
4130 | total room so that this is always possible. | |
4131 | */ | |
4132 | ||
4133 | brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & | |
4134 | -((signed long) alignment)); | |
4135 | if ((unsigned long)(brk - (char*)(p)) < MINSIZE) | |
4136 | brk += alignment; | |
4137 | ||
4138 | newp = (mchunkptr)brk; | |
4139 | leadsize = brk - (char*)(p); | |
4140 | newsize = chunksize(p) - leadsize; | |
4141 | ||
4142 | /* For mmapped chunks, just adjust offset */ | |
4143 | if (chunk_is_mmapped(p)) { | |
4144 | newp->prev_size = p->prev_size + leadsize; | |
4145 | set_head(newp, newsize|IS_MMAPPED); | |
4146 | return chunk2mem(newp); | |
4147 | } | |
4148 | ||
4149 | /* Otherwise, give back leader, use the rest */ | |
4150 | set_head(newp, newsize | PREV_INUSE); | |
4151 | set_inuse_bit_at_offset(newp, newsize); | |
4152 | set_head_size(p, leadsize); | |
4153 | fREe(chunk2mem(p)); | |
4154 | p = newp; | |
4155 | ||
4156 | assert (newsize >= nb && | |
4157 | (((unsigned long)(chunk2mem(p))) % alignment) == 0); | |
4158 | } | |
4159 | ||
4160 | /* Also give back spare room at the end */ | |
4161 | if (!chunk_is_mmapped(p)) { | |
4162 | size = chunksize(p); | |
4163 | if ((unsigned long)(size) > (unsigned long)(nb + MINSIZE)) { | |
4164 | remainder_size = size - nb; | |
4165 | remainder = chunk_at_offset(p, nb); | |
4166 | set_head(remainder, remainder_size | PREV_INUSE); | |
4167 | set_head_size(p, nb); | |
4168 | fREe(chunk2mem(remainder)); | |
4169 | } | |
4170 | } | |
4171 | ||
4172 | check_inuse_chunk(p); | |
4173 | return chunk2mem(p); | |
4174 | } | |
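/*
  Usage sketch (assuming the default public name memalign):

      void* p = memalign(64, 1000);

  returns p with ((unsigned long)p % 64) == 0; both the leading space
  split off above and any trailing remainder were given back via fREe.
*/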
4175 | ||
4176 | /* | |
4177 | ------------------------------ calloc ------------------------------ | |
4178 | */ | |
4179 | ||
4180 | #if __STD_C | |
4181 | Void_t* cALLOc(size_t n_elements, size_t elem_size) | |
4182 | #else | |
4183 | Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size; | |
4184 | #endif | |
4185 | { | |
4186 | mchunkptr p; | |
4187 | unsigned long clearsize; | |
4188 | unsigned long nclears; | |
4189 | INTERNAL_SIZE_T* d; | |
4190 | ||
4191 | Void_t* mem = (elem_size != 0 && n_elements > ((size_t)-1) / elem_size) ? 0 : mALLOc(n_elements * elem_size); /* guard against overflow in n_elements * elem_size */ | |
4192 | ||
4193 | if (mem != 0) { | |
4194 | p = mem2chunk(mem); | |
4195 | ||
4196 | #if MMAP_CLEARS | |
4197 | if (!chunk_is_mmapped(p)) /* don't need to clear mmapped space */ | |
4198 | #endif | |
4199 | { | |
4200 | /* | |
4201 | Unroll clear of <= 36 bytes (72 if 8byte sizes) | |
4202 | We know that contents have an odd number of | |
4203 | INTERNAL_SIZE_T-sized words; minimally 3. | |
4204 | */ | |
4205 | ||
4206 | d = (INTERNAL_SIZE_T*)mem; | |
4207 | clearsize = chunksize(p) - SIZE_SZ; | |
4208 | nclears = clearsize / sizeof(INTERNAL_SIZE_T); | |
4209 | assert(nclears >= 3); | |
4210 | ||
4211 | if (nclears > 9) | |
4212 | MALLOC_ZERO(d, clearsize); | |
4213 | ||
4214 | else { | |
4215 | *(d+0) = 0; | |
4216 | *(d+1) = 0; | |
4217 | *(d+2) = 0; | |
4218 | if (nclears > 4) { | |
4219 | *(d+3) = 0; | |
4220 | *(d+4) = 0; | |
4221 | if (nclears > 6) { | |
4222 | *(d+5) = 0; | |
4223 | *(d+6) = 0; | |
4224 | if (nclears > 8) { | |
4225 | *(d+7) = 0; | |
4226 | *(d+8) = 0; | |
4227 | } | |
4228 | } | |
4229 | } | |
4230 | } | |
4231 | } | |
4232 | } | |
4233 | return mem; | |
4234 | } | |
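/*
  Usage sketch: calloc(10, sizeof(int)) returns 40 zeroed bytes. Note
  the multiplication guard added above: on a 32-bit system a call such
  as calloc(0x20000000, 16) would otherwise wrap to a tiny request and
  return a buffer far smaller than the caller computed.
*/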
4235 | ||
4236 | /* | |
4237 | ------------------------------ cfree ------------------------------ | |
4238 | */ | |
4239 | ||
4240 | #if __STD_C | |
4241 | void cFREe(Void_t *mem) | |
4242 | #else | |
4243 | void cFREe(mem) Void_t *mem; | |
4244 | #endif | |
4245 | { | |
4246 | fREe(mem); | |
4247 | } | |
4248 | ||
4249 | /* | |
4250 | ------------------------- independent_calloc ------------------------- | |
4251 | */ | |
4252 | ||
4253 | #if __STD_C | |
4254 | Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[]) | |
4255 | #else | |
4256 | Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[]; | |
4257 | #endif | |
4258 | { | |
4259 | size_t sz = elem_size; /* serves as 1-element array */ | |
4260 | /* opts arg of 3 means all elements are same size, and should be cleared */ | |
4261 | return iALLOc(n_elements, &sz, 3, chunks); | |
4262 | } | |
4263 | ||
4264 | /* | |
4265 | ------------------------- independent_comalloc ------------------------- | |
4266 | */ | |
4267 | ||
4268 | #if __STD_C | |
4269 | Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[]) | |
4270 | #else | |
4271 | Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[]; | |
4272 | #endif | |
4273 | { | |
4274 | return iALLOc(n_elements, sizes, 0, chunks); | |
4275 | } | |
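/*
  Usage sketch for the two wrappers above (illustrative sizes):

      size_t sizes[3] = { 16, 40, 100 };
      void*  ptrs[3];
      if (independent_comalloc(3, sizes, ptrs) != 0) {
        ... ptrs[0], ptrs[1], ptrs[2] are independently freeable ...
      }

  Passing a non-null chunks[] array instead makes iALLOc fill the
  caller's array and skip allocating one of its own.
*/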
4276 | ||
4277 | ||
4278 | /* | |
4279 | ------------------------------ ialloc ------------------------------ | |
4280 | ialloc provides common support for independent_X routines, handling all of | |
4281 | the combinations that can result. | |
4282 | ||
4283 | The opts arg has: | |
4284 | bit 0 set if all elements are same size (using sizes[0]) | |
4285 | bit 1 set if elements should be zeroed | |
4286 | */ | |
4287 | ||
4288 | ||
4289 | #if __STD_C | |
4290 | static Void_t** iALLOc(size_t n_elements, | |
4291 | size_t* sizes, | |
4292 | int opts, | |
4293 | Void_t* chunks[]) | |
4294 | #else | |
4295 | static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[]; | |
4296 | #endif | |
4297 | { | |
4298 | mstate av = get_malloc_state(); | |
4299 | INTERNAL_SIZE_T element_size; /* chunksize of each element, if all same */ | |
4300 | INTERNAL_SIZE_T contents_size; /* total size of elements */ | |
4301 | INTERNAL_SIZE_T array_size; /* request size of pointer array */ | |
4302 | Void_t* mem; /* malloced aggregate space */ | |
4303 | mchunkptr p; /* corresponding chunk */ | |
4304 | INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */ | |
4305 | Void_t** marray; /* either "chunks" or malloced ptr array */ | |
4306 | mchunkptr array_chunk; /* chunk for malloced ptr array */ | |
4307 | int mmx; /* to disable mmap */ | |
4308 | INTERNAL_SIZE_T size; | |
4309 | size_t i; | |
4310 | ||
4311 | /* Ensure initialization/consolidation */ | |
4312 | if (have_fastchunks(av)) malloc_consolidate(av); | |
4313 | ||
4314 | /* compute array length, if needed */ | |
4315 | if (chunks != 0) { | |
4316 | if (n_elements == 0) | |
4317 | return chunks; /* nothing to do */ | |
4318 | marray = chunks; | |
4319 | array_size = 0; | |
4320 | } | |
4321 | else { | |
4322 | /* if empty req, must still return chunk representing empty array */ | |
4323 | if (n_elements == 0) | |
4324 | return (Void_t**) mALLOc(0); | |
4325 | marray = 0; | |
4326 | array_size = request2size(n_elements * (sizeof(Void_t*))); | |
4327 | } | |
4328 | ||
4329 | /* compute total element size */ | |
4330 | if (opts & 0x1) { /* all-same-size */ | |
4331 | element_size = request2size(*sizes); | |
4332 | contents_size = n_elements * element_size; | |
4333 | } | |
4334 | else { /* add up all the sizes */ | |
4335 | element_size = 0; | |
4336 | contents_size = 0; | |
4337 | for (i = 0; i != n_elements; ++i) | |
4338 | contents_size += request2size(sizes[i]); | |
4339 | } | |
4340 | ||
4341 | /* subtract out alignment bytes from total to minimize overallocation */ | |
4342 | size = contents_size + array_size - MALLOC_ALIGN_MASK; | |
4343 | ||
4344 | /* | |
4345 | Allocate the aggregate chunk. | |
4346 | But first disable mmap so malloc won't use it, since | |
4347 | we would not be able to later free/realloc space internal | |
4348 | to a segregated mmap region. | |
4349 | */ | |
4350 | mmx = av->n_mmaps_max; /* disable mmap */ | |
4351 | av->n_mmaps_max = 0; | |
4352 | mem = mALLOc(size); | |
4353 | av->n_mmaps_max = mmx; /* reset mmap */ | |
4354 | if (mem == 0) | |
4355 | return 0; | |
4356 | ||
4357 | p = mem2chunk(mem); | |
4358 | assert(!chunk_is_mmapped(p)); | |
4359 | remainder_size = chunksize(p); | |
4360 | ||
4361 | if (opts & 0x2) { /* optionally clear the elements */ | |
4362 | MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size); | |
4363 | } | |
4364 | ||
4365 | /* If not provided, allocate the pointer array as final part of chunk */ | |
4366 | if (marray == 0) { | |
4367 | array_chunk = chunk_at_offset(p, contents_size); | |
4368 | marray = (Void_t**) (chunk2mem(array_chunk)); | |
4369 | set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE); | |
4370 | remainder_size = contents_size; | |
4371 | } | |
4372 | ||
4373 | /* split out elements */ | |
4374 | for (i = 0; ; ++i) { | |
4375 | marray[i] = chunk2mem(p); | |
4376 | if (i != n_elements-1) { | |
4377 | if (element_size != 0) | |
4378 | size = element_size; | |
4379 | else | |
4380 | size = request2size(sizes[i]); | |
4381 | remainder_size -= size; | |
4382 | set_head(p, size | PREV_INUSE); | |
4383 | p = chunk_at_offset(p, size); | |
4384 | } | |
4385 | else { /* the final element absorbs any overallocation slop */ | |
4386 | set_head(p, remainder_size | PREV_INUSE); | |
4387 | break; | |
4388 | } | |
4389 | } | |
4390 | ||
4391 | #if DEBUG | |
4392 | if (marray != chunks) { | |
4393 | /* final element must have exactly exhausted chunk */ | |
4394 | if (element_size != 0) | |
4395 | assert(remainder_size == element_size); | |
4396 | else | |
4397 | assert(remainder_size == request2size(sizes[i])); | |
4398 | check_inuse_chunk(mem2chunk(marray)); | |
4399 | } | |
4400 | ||
4401 | for (i = 0; i != n_elements; ++i) | |
4402 | check_inuse_chunk(mem2chunk(marray[i])); | |
4403 | #endif | |
4404 | ||
4405 | return marray; | |
4406 | } | |


/*
  ------------------------------ valloc ------------------------------
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  /* Ensure initialization/consolidation */
  mstate av = get_malloc_state();
  if (have_fastchunks(av)) malloc_consolidate(av);
  return mEMALIGn(av->pagesize, bytes);
}

/*
  ------------------------------ pvalloc ------------------------------
*/


#if __STD_C
Void_t* pVALLOc(size_t bytes)
#else
Void_t* pVALLOc(bytes) size_t bytes;
#endif
{
  mstate av = get_malloc_state();
  size_t pagesz;

  /* Ensure initialization/consolidation */
  if (have_fastchunks(av)) malloc_consolidate(av);
  pagesz = av->pagesize;
  return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
}
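
/*
  Worked example of the rounding above (illustrative): with a 4096-byte
  page, a request of 5000 bytes becomes (5000 + 4095) & ~4095 == 8192,
  so pvalloc always hands back a whole number of pages -- here two.
*/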


/*
  ------------------------------ malloc_trim ------------------------------
*/

#if __STD_C
int mTRIm(size_t pad)
#else
int mTRIm(pad) size_t pad;
#endif
{
  mstate av = get_malloc_state();
  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

#ifndef MORECORE_CANNOT_TRIM
  return sYSTRIm(pad, av);
#else
  return 0;
#endif
}
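
/*
  Usage sketch (illustrative): a long-running program that has just freed
  many chunks can try to give memory back to the system while keeping a
  cushion for near-term reuse. The return value is 1 if any memory was
  actually released, else 0.

    // try to return everything beyond a 64K cushion to the system
    if (malloc_trim(64 * 1024))
      ; // some memory was released back to the OS
*/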


/*
  ------------------------- malloc_usable_size -------------------------
*/

#if __STD_C
size_t mUSABLe(Void_t* mem)
#else
size_t mUSABLe(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem != 0) {
    p = mem2chunk(mem);
    if (chunk_is_mmapped(p))
      return chunksize(p) - 2*SIZE_SZ;
    else if (inuse(p))
      return chunksize(p) - SIZE_SZ;
  }
  return 0;
}
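
/*
  Usage sketch (illustrative): the number of usable bytes may exceed the
  request because of padding and rounding, so code can exploit the slack
  without reallocating.

    char* p = malloc(100);
    if (p != 0) {
      size_t n = malloc_usable_size(p);  // >= 100 for a non-null p
      memset(p, 0, n);                   // the whole span is writable
      free(p);
    }
*/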

/*
  ------------------------------ mallinfo ------------------------------
*/

struct mallinfo mALLINFo()
{
  mstate av = get_malloc_state();
  struct mallinfo mi;
  int i;
  mbinptr b;
  mchunkptr p;
  INTERNAL_SIZE_T avail;
  INTERNAL_SIZE_T fastavail;
  int nblocks;
  int nfastblocks;

  /* Ensure initialization */
  if (av->top == 0)  malloc_consolidate(av);

  check_malloc_state();

  /* Account for top */
  avail = chunksize(av->top);
  nblocks = 1;  /* top always exists */

  /* traverse fastbins */
  nfastblocks = 0;
  fastavail = 0;

  for (i = 0; i < NFASTBINS; ++i) {
    for (p = av->fastbins[i]; p != 0; p = p->fd) {
      ++nfastblocks;
      fastavail += chunksize(p);
    }
  }

  avail += fastavail;

  /* traverse regular bins */
  for (i = 1; i < NBINS; ++i) {
    b = bin_at(av, i);
    for (p = last(b); p != b; p = p->bk) {
      ++nblocks;
      avail += chunksize(p);
    }
  }

  mi.smblks = nfastblocks;
  mi.ordblks = nblocks;
  mi.fordblks = avail;
  mi.uordblks = av->sbrked_mem - avail;
  mi.arena = av->sbrked_mem;
  mi.hblks = av->n_mmaps;
  mi.hblkhd = av->mmapped_mem;
  mi.fsmblks = fastavail;
  mi.keepcost = chunksize(av->top);
  mi.usmblks = av->max_total_mem;
  return mi;
}
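
/*
  Usage sketch (illustrative): reading back a few of the fields filled in
  above, under their conventional mallinfo meanings as mapped by this
  implementation.

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "sbrked arena:   %d bytes\n", mi.arena);
    fprintf(stderr, "free chunks:    %d (%d bytes)\n", mi.ordblks, mi.fordblks);
    fprintf(stderr, "fastbin chunks: %d (%d bytes)\n", mi.smblks, mi.fsmblks);
    fprintf(stderr, "trimmable top:  %d bytes\n", mi.keepcost);
*/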

/*
  ------------------------------ malloc_stats ------------------------------
*/

void mSTATs()
{
  struct mallinfo mi = mALLINFo();

#ifdef WIN32
  {
    unsigned long free, reserved, committed;
    vminfo (&free, &reserved, &committed);
    fprintf(stderr, "free bytes       = %10lu\n",
            free);
    fprintf(stderr, "reserved bytes   = %10lu\n",
            reserved);
    fprintf(stderr, "committed bytes  = %10lu\n",
            committed);
  }
#endif

  fprintf(stderr, "max system bytes = %10lu\n",
          (unsigned long)(mi.usmblks));
  fprintf(stderr, "system bytes     = %10lu\n",
          (unsigned long)(mi.arena + mi.hblkhd));
  fprintf(stderr, "in use bytes     = %10lu\n",
          (unsigned long)(mi.uordblks + mi.hblkhd));

#ifdef WIN32
  {
    unsigned long kernel, user;
    if (cpuinfo (TRUE, &kernel, &user)) {
      fprintf(stderr, "kernel ms        = %10lu\n",
              kernel);
      fprintf(stderr, "user ms          = %10lu\n",
              user);
    }
  }
#endif
}


/*
  ------------------------------ mallopt ------------------------------
*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  mstate av = get_malloc_state();
  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

  switch(param_number) {
  case M_MXFAST:
    if (value >= 0 && value <= MAX_FAST_SIZE) {
      set_max_fast(av, value);
      return 1;
    }
    else
      return 0;

  case M_TRIM_THRESHOLD:
    av->trim_threshold = value;
    return 1;

  case M_TOP_PAD:
    av->top_pad = value;
    return 1;

  case M_MMAP_THRESHOLD:
    av->mmap_threshold = value;
    return 1;

  case M_MMAP_MAX:
#if !HAVE_MMAP
    if (value != 0)
      return 0;
#endif
    av->n_mmaps_max = value;
    return 1;

  default:
    return 0;
  }
}
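
/*
  Usage sketch (illustrative): tuning calls map directly onto the cases
  above. mallopt returns 1 on success and 0 for an unknown parameter or a
  rejected value.

    mallopt(M_MXFAST, 0);                  // disable fastbins entirely
    mallopt(M_TRIM_THRESHOLD, 256*1024);   // trim back to the OS more eagerly
    mallopt(M_MMAP_THRESHOLD, 1024*1024);  // mmap only very large requests
*/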


/*
  -------------------- Alternative MORECORE functions --------------------
*/


/*
  General Requirements for MORECORE.

  The MORECORE function must have the following properties:

  If MORECORE_CONTIGUOUS is false:

    * MORECORE must allocate in multiples of pagesize. It will
      only be called with arguments that are multiples of pagesize.

    * MORECORE(0) must return an address that is at least
      MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)

  else (i.e. if MORECORE_CONTIGUOUS is true):

    * Consecutive calls to MORECORE with positive arguments
      return increasing addresses, indicating that space has been
      contiguously extended.

    * MORECORE need not allocate in multiples of pagesize.
      Calls to MORECORE need not have args of multiples of pagesize.

    * MORECORE need not page-align.

  In either case:

    * MORECORE may allocate more memory than requested. (Or even less,
      but this will generally result in a malloc failure.)

    * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call. This malloc does NOT call MORECORE(0)
      until at least one call with positive arguments is made, so
      the initial value returned is not important.

    * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.

    * MORECORE need not handle negative arguments -- it may instead
      just return MORECORE_FAILURE when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by
      defining MORECORE_CANNOT_TRIM.

  There is some variation across systems about the type of the
  argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
  actually be size_t, because sbrk supports negative args, so it is
  normally the signed type of the same width as size_t (sometimes
  declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much
  matter though. Internally, we use "long" as arguments, which should
  work across all reasonable possibilities.

  Additionally, if MORECORE ever returns failure for a positive
  request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
  system allocator. This is a useful backup strategy for systems with
  holes in address spaces -- in this case sbrk cannot contiguously
  expand the heap, but mmap may be able to map noncontiguous space.

  If you'd like mmap to ALWAYS be used, you can define MORECORE to be
  a function that always returns MORECORE_FAILURE.
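
  For instance (sketch only; the function name is illustrative):

      static void* failing_morecore (long size)
      {
        return (void *) MORECORE_FAILURE;
      }

      #define MORECORE failing_morecore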

  If you are using this malloc with something other than sbrk (or its
  emulation) to supply memory regions, you probably want to set
  MORECORE_CONTIGUOUS as false. As an example, here is a custom
  allocator kindly contributed for pre-OSX macOS. It uses virtually
  but not necessarily physically contiguous non-paged memory (locked
  in, present and won't get swapped out). You can use it by
  uncommenting this section, adding some #includes, and setting up the
  appropriate defines above:

      #define MORECORE osMoreCore
      #define MORECORE_CONTIGUOUS 0

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MORECORE_FAILURE;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((unsigned long) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MORECORE_FAILURE;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/


/*
  --------------------------------------------------------------

  Emulation of sbrk for win32.
  Donated by J. Walter <Walter@GeNeSys-e.de>.
  For additional information about this code, and malloc on Win32, see
     http://www.genesys-e.de/jwalter/
*/


#ifdef WIN32

#ifdef _DEBUG
/* #define TRACE */
#endif

/* Support for USE_MALLOC_LOCK */
#ifdef USE_MALLOC_LOCK

/* Wait for spin lock */
static int slwait (int *sl) {
    while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0)
            Sleep (0);
    return 0;
}

/* Release spin lock */
static int slrelease (int *sl) {
    InterlockedExchange (sl, 0);
    return 0;
}

#ifdef NEEDED
/* Spin lock for emulation code */
static int g_sl;
#endif

#endif /* USE_MALLOC_LOCK */

/* getpagesize for windows */
static long getpagesize (void) {
    static long g_pagesize = 0;
    if (! g_pagesize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_pagesize = system_info.dwPageSize;
    }
    return g_pagesize;
}
static long getregionsize (void) {
    static long g_regionsize = 0;
    if (! g_regionsize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_regionsize = system_info.dwAllocationGranularity;
    }
    return g_regionsize;
}
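
/*
  Note: dwAllocationGranularity (typically 64 KiB on Win32) is coarser
  than dwPageSize (typically 4 KiB), which is why the emulation below
  reserves address space in region-sized steps but commits it page by
  page.
*/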

/* A region list entry */
typedef struct _region_list_entry {
    void *top_allocated;
    void *top_committed;
    void *top_reserved;
    long reserve_size;
    struct _region_list_entry *previous;
} region_list_entry;

/* Allocate and link a region entry in the region list */
static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
    region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
    if (! next)
        return FALSE;
    next->top_allocated = (char *) base_reserved;
    next->top_committed = (char *) base_reserved;
    next->top_reserved = (char *) base_reserved + reserve_size;
    next->reserve_size = reserve_size;
    next->previous = *last;
    *last = next;
    return TRUE;
}
/* Free and unlink the last region entry from the region list */
static int region_list_remove (region_list_entry **last) {
    region_list_entry *previous = (*last)->previous;
    /* the second HeapFree argument is the flags word, not a size */
    if (! HeapFree (GetProcessHeap (), 0, *last))
        return FALSE;
    *last = previous;
    return TRUE;
}

#define CEIL(size,to)   (((size)+(to)-1)&~((to)-1))
#define FLOOR(size,to)  ((size)&~((to)-1))
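
/*
  For example, with to == 4096: CEIL(5000, 4096) == 8192 and
  FLOOR(5000, 4096) == 4096. Both require `to' to be a power of two.
*/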

#define SBRK_SCALE  0
/* #define SBRK_SCALE  1 */
/* #define SBRK_SCALE  2 */
/* #define SBRK_SCALE  4 */

/* sbrk for windows */
static void *sbrk (long size) {
    static long g_pagesize, g_my_pagesize;
    static long g_regionsize, g_my_regionsize;
    static region_list_entry *g_last;
    void *result = (void *) MORECORE_FAILURE;
#ifdef TRACE
    printf ("sbrk %ld\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize) {
        g_pagesize = getpagesize ();
        g_my_pagesize = g_pagesize << SBRK_SCALE;
    }
    if (! g_regionsize) {
        g_regionsize = getregionsize ();
        g_my_regionsize = g_regionsize << SBRK_SCALE;
    }
    if (! g_last) {
        if (! region_list_append (&g_last, 0, 0))
           goto sbrk_exit;
    }
    /* Assert invariants */
    assert (g_last);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
    /* Allocation requested? */
    if (size >= 0) {
        /* Allocation size is the requested size */
        long allocate_size = size;
        /* Compute the size to commit */
        long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
        /* Do we reach the commit limit? */
        if (to_commit > 0) {
            /* Round size to commit */
            long commit_size = CEIL (to_commit, g_my_pagesize);
            /* Compute the size to reserve */
            long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
            /* Do we reach the reserve limit? */
            if (to_reserve > 0) {
                /* Compute the remaining size to commit in the current region */
                long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
                if (remaining_commit_size > 0) {
                    /* Assert preconditions */
                    assert ((unsigned) g_last->top_committed % g_pagesize == 0);
                    assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
                        /* Commit this */
                        void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
                                                             MEM_COMMIT, PAGE_READWRITE);
                        /* Check returned pointer for consistency */
                        if (base_committed != g_last->top_committed)
                            goto sbrk_exit;
                        /* Assert postconditions */
                        assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                        printf ("Commit %p %ld\n", base_committed, remaining_commit_size);
#endif
                        /* Adjust the regions commit top */
                        g_last->top_committed = (char *) base_committed + remaining_commit_size;
                    }
                } {
                    /* Now we are going to search and reserve. */
                    int contiguous = -1;
                    int found = FALSE;
                    MEMORY_BASIC_INFORMATION memory_info;
                    void *base_reserved;
                    long reserve_size;
                    do {
                        /* Assume contiguous memory */
                        contiguous = TRUE;
                        /* Round size to reserve */
                        reserve_size = CEIL (to_reserve, g_my_regionsize);
                        /* Start with the current region's top */
                        memory_info.BaseAddress = g_last->top_reserved;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
                            /* Assert postconditions */
                            assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
#ifdef TRACE
                            printf ("Query %p %ld %s\n", memory_info.BaseAddress, memory_info.RegionSize,
                                    memory_info.State == MEM_FREE ? "FREE":
                                    (memory_info.State == MEM_RESERVE ? "RESERVED":
                                     (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
#endif
                            /* Region is free, well aligned and big enough: we are done */
                            if (memory_info.State == MEM_FREE &&
                                (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
                                memory_info.RegionSize >= (unsigned) reserve_size) {
                                found = TRUE;
                                break;
                            }
                            /* From now on we can't get contiguous memory! */
                            contiguous = FALSE;
                            /* Recompute size to reserve */
                            reserve_size = CEIL (allocate_size, g_my_regionsize);
                            memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
                            /* Assert preconditions */
                            assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                            assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        }
                        /* Search failed? */
                        if (! found)
                            goto sbrk_exit;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        /* Try to reserve this */
                        base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size,
                                                      MEM_RESERVE, PAGE_NOACCESS);
                        if (! base_reserved) {
                            int rc = GetLastError ();
                            if (rc != ERROR_INVALID_ADDRESS)
                                goto sbrk_exit;
                        }
                        /* A null pointer signals (hopefully) a race condition with another thread. */
                        /* In this case, we try again. */
                    } while (! base_reserved);
                    /* Check returned pointer for consistency */
                    if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
                        goto sbrk_exit;
                    /* Assert postconditions */
                    assert ((unsigned) base_reserved % g_regionsize == 0);
#ifdef TRACE
                    printf ("Reserve %p %ld\n", base_reserved, reserve_size);
#endif
                    /* Did we get contiguous memory? */
                    if (contiguous) {
                        long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
                        /* Adjust allocation size */
                        allocate_size -= start_size;
                        /* Adjust the regions allocation top */
                        g_last->top_allocated = g_last->top_committed;
                        /* Recompute the size to commit */
                        to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                        /* Round size to commit */
                        commit_size = CEIL (to_commit, g_my_pagesize);
                    }
                    /* Append the new region to the list */
                    if (! region_list_append (&g_last, base_reserved, reserve_size))
                        goto sbrk_exit;
                    /* Didn't we get contiguous memory? */
                    if (! contiguous) {
                        /* Recompute the size to commit */
                        to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                        /* Round size to commit */
                        commit_size = CEIL (to_commit, g_my_pagesize);
                    }
                }
            }
            /* Assert preconditions */
            assert ((unsigned) g_last->top_committed % g_pagesize == 0);
            assert (0 < commit_size && commit_size % g_pagesize == 0); {
                /* Commit this */
                void *base_committed = VirtualAlloc (g_last->top_committed, commit_size,
                                                     MEM_COMMIT, PAGE_READWRITE);
                /* Check returned pointer for consistency */
                if (base_committed != g_last->top_committed)
                    goto sbrk_exit;
                /* Assert postconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                printf ("Commit %p %ld\n", base_committed, commit_size);
#endif
                /* Adjust the regions commit top */
                g_last->top_committed = (char *) base_committed + commit_size;
            }
        }
        /* Adjust the regions allocation top */
        g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
        result = (char *) g_last->top_allocated - size;
    /* Deallocation requested? */
    } else if (size < 0) {
        long deallocate_size = - size;
        /* As long as we have a region to release */
        while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
            /* Get the size to release */
            long release_size = g_last->reserve_size;
            /* Get the base address */
            void *base_reserved = (char *) g_last->top_reserved - release_size;
            /* Assert preconditions */
            assert ((unsigned) base_reserved % g_regionsize == 0);
            assert (0 < release_size && release_size % g_regionsize == 0); {
                /* Release this */
                int rc = VirtualFree (base_reserved, 0,
                                      MEM_RELEASE);
                /* Check returned code for consistency */
                if (! rc)
                    goto sbrk_exit;
#ifdef TRACE
                printf ("Release %p %ld\n", base_reserved, release_size);
#endif
            }
            /* Adjust deallocation size */
            deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
            /* Remove the old region from the list */
            if (! region_list_remove (&g_last))
                goto sbrk_exit;
        } {
            /* Compute the size to decommit */
            long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
            if (to_decommit >= g_my_pagesize) {
                /* Compute the size to decommit */
                long decommit_size = FLOOR (to_decommit, g_my_pagesize);
                /* Compute the base address */
                void *base_committed = (char *) g_last->top_committed - decommit_size;
                /* Assert preconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
                assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
                    /* Decommit this */
                    int rc = VirtualFree ((char *) base_committed, decommit_size,
                                          MEM_DECOMMIT);
                    /* Check returned code for consistency */
                    if (! rc)
                        goto sbrk_exit;
#ifdef TRACE
                    printf ("Decommit %p %ld\n", base_committed, decommit_size);
#endif
                }
                /* Adjust deallocation size and regions commit and allocate top */
                deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
                g_last->top_committed = base_committed;
                g_last->top_allocated = base_committed;
            }
        }
        /* Adjust regions allocate top */
        g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
        /* Check for underflow */
        if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
            g_last->top_allocated > g_last->top_committed) {
            /* Adjust regions allocate top */
            g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
            goto sbrk_exit;
        }
        result = g_last->top_allocated;
    }
    /* Assert invariants */
    assert (g_last);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);

sbrk_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return result;
}

#if HAVE_MMAP
/* mmap for windows */
static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
    static long g_pagesize;
    static long g_regionsize;
#ifdef TRACE
    printf ("mmap %ld\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Allocate this */
    ptr = VirtualAlloc (ptr, size,
                        MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
    if (! ptr) {
        ptr = (void *) MORECORE_FAILURE;
        goto mmap_exit;
    }
    /* Assert postconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
#ifdef TRACE
    printf ("Commit %p %ld\n", ptr, size);
#endif
mmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return ptr;
}

/* munmap for windows */
static long munmap (void *ptr, long size) {
    static long g_pagesize;
    static long g_regionsize;
    int rc = MUNMAP_FAILURE;
#ifdef TRACE
    printf ("munmap %p %ld\n", ptr, size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Free this */
    if (! VirtualFree (ptr, 0,
                       MEM_RELEASE))
        goto munmap_exit;
    rc = 0;
#ifdef TRACE
    printf ("Release %p %ld\n", ptr, size);
#endif
munmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return rc;
}

#endif

static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed) {
    MEMORY_BASIC_INFORMATION memory_info;
    memory_info.BaseAddress = 0;
    *free = *reserved = *committed = 0;
    while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
        switch (memory_info.State) {
        case MEM_FREE:
            *free += memory_info.RegionSize;
            break;
        case MEM_RESERVE:
            *reserved += memory_info.RegionSize;
            break;
        case MEM_COMMIT:
            *committed += memory_info.RegionSize;
            break;
        }
        memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
    }
}

static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user) {
    if (whole) {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetProcessTimes (GetCurrentProcess (),
                                  (FILETIME *) &creation64,
                                  (FILETIME *) &exit64,
                                  (FILETIME *) &kernel64,
                                  (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        *kernel = (unsigned long) (kernel64 / 10000);
        *user = (unsigned long) (user64 / 10000);
        return TRUE;
    } else {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetThreadTimes (GetCurrentThread (),
                                 (FILETIME *) &creation64,
                                 (FILETIME *) &exit64,
                                 (FILETIME *) &kernel64,
                                 (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        *kernel = (unsigned long) (kernel64 / 10000);
        *user = (unsigned long) (user64 / 10000);
        return TRUE;
    }
}

#endif /* WIN32 */

/* ------------------------------------------------------------
History:

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sYSMALLOc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/