developerWorks Book Channel: C++ Application Performance Optimization, Chapter 6: Memory Pools


This book is devoted to optimizing the performance of C++ programs, presenting optimization methods and examples in depth. It consists of four parts. Part 1 introduces the C++ language and the C++ object model, the foundation for optimizing C++ programs; Part 2 addresses how to optimize a C++ program's memory usage; Part 3 describes how to optimize a program's start-up performance; Part 4 covers three kinds of performance optimization tools, namely memory analysis tools, performance analysis tools, and I/O detection tools, which are used to measure program performance.

This chapter first introduces the theory behind using a custom memory pool for performance optimization, then lists the types of memory pool commonly used in software development, and gives concrete implementation examples.


6.1 Principles of custom memory pool performance optimization

Title: "C++ Application Performance Optimization"
Authors: Ping Honghua, Xu Ying, Cheng Yuan, Wang Lei (eds.)
Publisher: Electronic Industry Press
Publication date: March 2007
ISBN: 978-7-121-03831-0

As mentioned earlier, the reader has learned how the "heap" differs from the "stack". In programming practice it is hard to avoid heavy use of heap memory. For example, when a program maintains a linked list, every addition or deletion of a node requires allocating or releasing memory on the heap; when it maintains a dynamic array whose size no longer meets the program's needs, new memory must likewise be allocated on the heap.

6.1.1 Shortcomings of the default memory management functions

Allocating and releasing heap memory with the default memory management functions (new/delete or malloc/free) carries some extra overhead.

When the system receives an allocation request of a certain size, it first searches its internally maintained free-block list for a free block of suitable size, using some algorithm: first fit (hand the requester the first block found that is no smaller than the requested size), best fit (hand out the block whose size matches most closely), worst fit (hand out the largest free block), and so on. If the chosen free block is much larger than requested, it must also be cut, with the smaller remainder put back as a free block. The system then updates the free-block list and the allocation is complete. Similarly, when memory is freed, the system puts the released block back into the free-block list and, if possible, merges it with adjacent free blocks into a larger one.
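To make that bookkeeping concrete, here is a minimal first-fit allocator over a fixed arena. It is an illustrative sketch, not the system allocator's real code: the names (FreeBlock, ffAlloc, ffFree) are invented for the example, and the merging of adjacent free blocks on release is omitted for brevity.

```cpp
#include <cstddef>

// Illustrative first-fit free-list allocator (names invented for this
// sketch; a real heap also merges adjacent free blocks on release).
struct FreeBlock {
    std::size_t size;   // usable bytes behind this header
    FreeBlock*  next;   // next block in the free list
};

alignas(16) static char arena[4096];
static FreeBlock*       freeList = nullptr;

void initArena() {
    freeList = reinterpret_cast<FreeBlock*>(arena);
    freeList->size = sizeof(arena) - sizeof(FreeBlock);
    freeList->next = nullptr;
}

void* ffAlloc(std::size_t n) {
    n = (n + 15) & ~std::size_t(15);            // keep blocks aligned
    FreeBlock** link = &freeList;
    for (FreeBlock* b = freeList; b; link = &b->next, b = b->next) {
        if (b->size < n) continue;              // too small: first fit keeps looking
        if (b->size >= n + sizeof(FreeBlock) + 16) {
            // Much larger than requested: cut it, returning the remainder
            // to the free list as a smaller free block.
            FreeBlock* rest =
                reinterpret_cast<FreeBlock*>(reinterpret_cast<char*>(b + 1) + n);
            rest->size = b->size - n - sizeof(FreeBlock);
            rest->next = b->next;
            b->size = n;
            *link = rest;
        } else {
            *link = b->next;                    // hand out the whole block
        }
        return b + 1;                           // payload follows the header
    }
    return nullptr;                             // no free block fits
}

void ffFree(void* p) {
    // Push the block back onto the free list (no merging in this sketch).
    FreeBlock* b = static_cast<FreeBlock*>(p) - 1;
    b->next = freeList;
    freeList = b;
}
```

Even this toy version shows where the cost goes: every allocation walks a list and possibly splits a block, and every release rewrites list links.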

The default memory management functions must also allow for multi-threaded applications, locking around every allocation and release, which likewise adds overhead.

Clearly, if an application allocates and frees heap memory frequently, performance suffers. It also litters the system with a large amount of memory fragmentation, reducing memory utilization.

The default allocation and release algorithms naturally take performance into account, but as general-purpose versions they must cope with a wide range of situations and therefore do extra work. For a specific application with a particular pattern of allocations and releases, a custom memory pool model of its own can achieve better performance.

6.1.2 Definition and classification of memory pools

The idea behind a custom memory pool is revealed by the word "pool": the application calls the system's allocation functions in advance to obtain a suitably large chunk of memory as the pool, and thereafter carries out its own allocations and releases against this pool. Only when the pool must grow dynamically does the application call the system's allocation functions again; at all other times, memory operations are entirely under the application's control.

Application-defined memory pools come in different types for different application scenarios.

From the perspective of thread safety, memory pools divide into single-threaded and multi-threaded pools. A single-threaded pool is used by only one thread throughout its lifetime, so mutual exclusion need not be considered; a multi-threaded pool may be shared by several threads and must therefore lock around every allocation and release. By comparison, single-threaded pools perform better, while multi-threaded pools apply more widely.
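The relationship between the two kinds can be sketched directly: a multi-threaded pool is, in essence, a single-threaded pool whose Alloc/Free calls are serialized by a lock. In this hypothetical sketch, SingleThreadPool merely stands in for any single-threaded pool (it forwards to malloc/free for brevity); the class names are invented for the example.

```cpp
#include <cstdlib>
#include <mutex>

// Stand-in for any single-threaded pool; real code would plug in an actual
// pool implementation such as the one developed in Section 6.2.
class SingleThreadPool {
public:
    void* Alloc()       { return std::malloc(64); }
    void  Free(void* p) { std::free(p); }
};

// A multi-threaded pool: the same pool plus a lock around every call.
// The per-call locking is exactly the extra overhead mentioned above.
class MultiThreadPool {
    SingleThreadPool pool;
    std::mutex       lock;
public:
    void* Alloc() {
        std::lock_guard<std::mutex> guard(lock);
        return pool.Alloc();
    }
    void Free(void* p) {
        std::lock_guard<std::mutex> guard(lock);
        pool.Free(p);
    }
};
```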

By the size of the units that can be allocated from the pool, memory pools divide into fixed-size and variable-size pools. In a fixed-size pool, the size of every unit the application obtains from the pool is determined in advance and is fixed; in a variable-size pool, the size of each allocated unit can vary on demand, which applies more widely but performs worse than a fixed-size pool.

6.1.3 How a memory pool works: an example

The following example of a fixed-size memory pool shows how a memory pool works, as illustrated in Figure 6-1.

Figure 6-1 A fixed-size memory pool

A fixed-size memory pool consists of a series of fixed-size memory blocks, and each block in turn contains a fixed number of memory units of equal size.

As Figure 6-1 shows, this pool contains four memory blocks. When the pool is first created, it requests only one block from the system and keeps the returned pointer as the pool's head pointer. As the application's demand for memory continues, the pool decides when it must grow dynamically, requests one new block from the system each time, and links all the blocks together with pointers. To the operating system, it has simply handed the application four equally sized large chunks of memory; because the size is fixed, these allocations are fast. To the application, a pool of some size has been set aside, and there is still free space inside it.

For example, zoom in on the fourth memory block: it contains a block header holding the pool's bookkeeping information, followed by three pool units of equal size. Units 1 and 3 are free; unit 2 has been allocated. When the application needs one pool-unit's worth of memory, it simply loops through the block headers and quickly locates a block that still has a free unit. From that block's header information it obtains the address of a free unit directly, returns that address, and marks the next free unit. When the application frees a pool unit, it simply marks that unit as free again in the corresponding block's header.

Compared with letting the system manage memory, the pool's operations are evidently very fast. Its main advantages for performance optimization are as follows.

(1) It targets a special case, the frequent allocation and release of fixed-size objects, so no complicated allocation algorithm or multi-thread protection is needed, and there is no overhead of maintaining a free-block list, yielding higher performance.

(2) Because a stretch of contiguous memory is set aside as the pool, locality of reference improves to some extent, which also helps application performance.

(3) It is easier to control page-boundary alignment and byte alignment, and there is no memory fragmentation problem.


6.2 An example memory pool implementation

This section presents a memory pool implementation applied in a large real-world application and explains its use and workings in detail. It is a single-threaded pool with a fixed allocation-unit size, typically used to allocate memory for class objects or structs that are created dynamically, frequently, and possibly repeatedly.

The section first gives the declarations and a diagram of the pool's data structures, then describes its principles and behavior, then explains the implementation details one by one, and finally shows how to apply this memory pool in an actual program, comparing the program's memory performance against the same program using the ordinary memory functions.

6.2.1 Internal structure

The memory pool class MemoryPool is declared as follows:

class MemoryPool
{
private:
    MemoryBlock*    pBlock;
    USHORT          nUnitSize;
    USHORT          nInitSize;
    USHORT          nGrowSize;

public:
                    MemoryPool( USHORT nUnitSize,
                                USHORT nInitSize = 1024,
                                USHORT nGrowSize = 256 );

    void*           Alloc();
    void            Free( void* p );
};

MemoryBlock is the header structure placed at the start of each memory block the pool actually requests; it describes how the block it is attached to is being used:

struct MemoryBlock
{
    USHORT          nSize;
    USHORT          nFree;
    USHORT          nFirst;
    USHORT          nDummyAlign1;
    MemoryBlock*    pNext;
    char            aData[1];

    static void* operator new(size_t, USHORT nTypes, USHORT nUnitSize)
    {
        return ::operator new(sizeof(MemoryBlock) + nTypes * nUnitSize);
    }
    static void  operator delete(void *p, size_t)
    {
        ::operator delete (p);
    }

    MemoryBlock (USHORT nTypes = 1, USHORT nUnitSize = 0);
    ~MemoryBlock() {}
};
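Before diving into the implementation, it helps to see how such a pool is typically wired into a program: a frequently created class overloads its own operator new/delete so that its instances are drawn from a static pool. The sketch below uses a trivial stand-in pool (MemoryPoolStandIn, forwarding to malloc/free) purely so that it compiles on its own; in real use it would be the MemoryPool class declared above, and the Node type is hypothetical.

```cpp
#include <cstddef>
#include <cstdlib>

// Stand-in with the same Alloc/Free interface as the chapter's MemoryPool,
// so this usage sketch is self-contained.
class MemoryPoolStandIn {
    std::size_t nUnitSize;
public:
    explicit MemoryPoolStandIn(std::size_t unitSize) : nUnitSize(unitSize) {}
    void* Alloc()       { return std::malloc(nUnitSize); }
    void  Free(void* p) { std::free(p); }
};

// The usage pattern: one static pool per frequently created type, with the
// type's new/delete routed through the pool's Alloc/Free.
struct Node {
    int   key;
    Node* next;

    static MemoryPoolStandIn pool;
    void* operator new(std::size_t) { return pool.Alloc(); }
    void  operator delete(void* p)  { pool.Free(p); }
};
MemoryPoolStandIn Node::pool(sizeof(Node));
```

With this pattern in place, every `new Node` and `delete` in the program transparently goes through the pool instead of the process heap.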

The pool's data structures are shown in Figure 6-2.

Figure 6-2 Data structures of the memory pool

6.2.2 Overall mechanism

The overall mechanism of this memory pool is as follows.

(1) At run time, a MemoryPool may own several memory blocks with which it satisfies allocation requests. Each memory block is a large contiguous region obtained from the process heap, consisting of a MemoryBlock structure followed by a number of allocatable memory units. All memory blocks form a linked list of which MemoryPool's pBlock is the head; each block reaches the block behind it through the pNext member of its MemoryBlock header.

(2) Each memory block thus has two parts: a MemoryBlock structure and a number of allocation units. The allocation units are of fixed size (given by MemoryPool's nUnitSize). The MemoryBlock structure maintains no information about the units that have been allocated; on the contrary, it maintains information only about the units still free, through two important members: nFree records how many free allocation units the block still has, and nFirst records the number of the next unit available for allocation. The first two bytes of each free unit (that is, one USHORT value) store the number of the free unit that follows it; in this way, using just the first two bytes of each free unit, all of a MemoryBlock's free allocation units are chained together.
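This two-byte chaining can be demonstrated in isolation. In the sketch below (illustrative names, not the book's code), a block of four 8-byte units is chained exactly as described: the first two bytes of free unit i store i + 1, units are located by number times unit size, and handing a unit out or taking it back touches only those two bytes.

```cpp
#include <cstring>

// Illustrative: four 8-byte units whose free-list links live in the first
// two bytes of each FREE unit, as described above.
const unsigned short kUnits    = 4;
const unsigned short kUnitSize = 8;

char           data[kUnits * kUnitSize];
unsigned short nFirst = 0;        // number of the first free unit
unsigned short nFree  = kUnits;   // free units remaining

void initChain() {
    // Free unit i records i + 1 as the next free unit.
    for (unsigned short i = 0; i < kUnits; ++i) {
        unsigned short next = i + 1;
        std::memcpy(data + i * kUnitSize, &next, sizeof next);
    }
}

void* popUnit() {                 // allocate one unit
    if (nFree == 0) return nullptr;
    char* unit = data + nFirst * kUnitSize;      // locate by number * size
    std::memcpy(&nFirst, unit, sizeof nFirst);   // its "next" becomes the head
    --nFree;
    return unit;
}

void pushUnit(void* p) {          // free one unit
    char* unit = static_cast<char*>(p);
    std::memcpy(unit, &nFirst, sizeof nFirst);   // old head is its "next"
    nFirst = static_cast<unsigned short>((unit - data) / kUnitSize);
    ++nFree;
}
```

Popping twice returns units 0 and 1; pushing unit 0 back and popping again returns unit 0 first, which already hints at the last-in-first-out behavior of the real pool's reclamation.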

(3) When a new memory request arrives, MemoryPool walks the MemoryBlock list via pBlock until it finds a block that still contains a free allocation unit (by testing whether the block's nFree member is greater than 0). If such a block is found, the pool reads its nFirst value, the number of the block's first free unit. It then locates the start of that free unit from the number: because all units have the same size, each unit's starting position can be found by using its number times the unit size as an offset. This position is the start address that will satisfy the request. Before returning that address, however, the pool must assign the value in its first two bytes (which record the number of the next free unit) to the block's nFirst member, so that the next request will be served from the unit with that number; at the same time, the block's nFree is decremented by 1. Only then is the just-located start address returned to the caller as the result of the request.

(4) If no existing memory block yields a free allocation unit (which happens on the very first request, and whenever every unit of every existing block has already been allocated), MemoryPool requests a new memory block from the process heap. The block comprises a MemoryBlock structure immediately followed by n allocation units, where n is MemoryPool's nInitSize for the first block and nGrowSize thereafter. Once obtained, the block is not used to hand out a unit immediately; it must first be initialized. Initialization sets the MemoryBlock's nSize to the total size of the allocation units (note: not including the size of the MemoryBlock structure itself), nFree to n-1, and nFirst to 1. These are n-1 and 1, not n and 0, because the new block exists precisely to satisfy the pending request and will hand out a unit at once: unit number 0 goes out immediately, so the free count is already n-1 and the next free unit is already number 1, and neither field needs to be decremented or advanced again for this request. More importantly, initialization must also link up all the free allocation units after unit 0.

As stated earlier, the first two bytes of every free unit store the number of the next free unit. Also, since every unit has the same size, a unit can be located by the product of its number and the unit size (MemoryPool's nUnitSize) used as an offset. The only remaining question is where that offset starts from, and the answer is MemoryBlock's aData member. aData is in fact part of the MemoryBlock structure (its last byte), so in essence the structure's final byte doubles as the start of the first allocation unit. And because the whole block consists of a MemoryBlock structure plus an integral number of allocation units, this in turn means that the block's very last byte goes unused; in Figure 6-2 that byte is marked at the end of each block with a thick black background. With the starting position settled, linking the free units is easy: starting from aData, take the first two bytes of every nUnitSize-sized stride and record the number of the free unit behind it. Since initially all units are free, each recorded number is simply the unit's own number plus 1, i.e. the number of the unit immediately following. After initialization, the block's first allocation unit can be returned; its starting address is already known to be aData.
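The chapter declares the MemoryBlock constructor but does not list its body; the initialization just described can be reconstructed as follows. This is a plausible sketch, re-declaring the structure so that the fragment stands alone.

```cpp
#include <cstddef>
#include <new>

typedef unsigned short USHORT;

// Re-declaration of the book's MemoryBlock so this sketch is self-contained.
// The constructor body is a reconstruction of the initialization described
// above, not the book's verbatim code.
struct MemoryBlock {
    USHORT        nSize;        // total size of the units (header excluded)
    USHORT        nFree;        // free units remaining
    USHORT        nFirst;       // number of the next unit to hand out
    USHORT        nDummyAlign1;
    MemoryBlock*  pNext;
    char          aData[1];     // the first allocation unit starts here

    static void* operator new(std::size_t, USHORT nTypes, USHORT nUnitSize) {
        return ::operator new(sizeof(MemoryBlock) + nTypes * nUnitSize);
    }
    static void operator delete(void* p, std::size_t) { ::operator delete(p); }

    MemoryBlock(USHORT nTypes = 1, USHORT nUnitSize = 0)
        : nSize(nTypes * nUnitSize), // units only; header size excluded
          nFree(nTypes - 1),         // unit 0 is handed out immediately...
          nFirst(1),                 // ...so unit 1 is the next free unit
          pNext(0)
    {
        // Chain the free units: the first two bytes of free unit i hold
        // i + 1, the number of the unit immediately behind it.
        char* pData = aData;
        for (USHORT i = 1; i < nTypes; i++) {
            *reinterpret_cast<USHORT*>(pData) = i;
            pData += nUnitSize;
        }
    }
    ~MemoryBlock() {}
};
```

Creating a block with 4 units of 8 bytes yields nFree = 3, nFirst = 1, and a chain in which unit 0 records 1 and unit 1 records 2, exactly as in Figure 6-2.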

(5) When an allocated unit must be reclaimed because of a delete, the unit is not returned to the process heap but handed back to the MemoryPool. On return, MemoryPool knows only the unit's starting address, so it walks the block list it maintains to determine whether that address falls inside some block's address range. If it falls inside none of them, the unit does not belong to this pool and cannot be reclaimed by it; if it falls inside some block, the freshly reclaimed unit is added to the head of the free-unit list maintained by that block's MemoryBlock, and the block's nFree is incremented by 1. After the unit has been taken back, the pool, weighing efficient resource use against the performance of subsequent operations, continues with a check. If all of the block's allocation units are now free, the block is removed from the MemoryPool and returned as a whole to the process heap. If the block still contains allocated units, it cannot be returned; but since a unit was just handed back to it, meaning the block has a free unit available for the next allocation, the block is moved to the head of the pool's block list. Thus when the next request arrives and MemoryPool traverses the list looking for a block with a free unit, it finds this block on the first probe. Since the block genuinely has a free unit, this reduces MemoryPool's traversal count.

In summary: each memory pool (MemoryPool) maintains a list of memory blocks (a singly linked list); each memory block consists of a header structure maintaining the block's usage information (MemoryBlock) plus a number of allocation units; and each block's MemoryBlock in turn maintains a "list" of all the block's free allocation units. This list is linked not by "pointers to the next free unit" but by "the number of the next free unit", a value stored in the first two bytes of each free unit. In addition, the first allocation unit starts not at the first address "behind" the MemoryBlock structure but "inside" it, at its last byte, aData (possibly not literally the last byte, because of alignment issues); in effect the units all sit shifted forward by one byte. And because the space behind the MemoryBlock structure holds exactly an integral number of units, the shift propagates to the end, and the block's last byte is never actually used. One reason for this design is portability, since alignment requirements differ across platforms: a request for a MemoryBlock's worth of memory may return a piece somewhat larger than the sum of its members' sizes. The last few bytes serve as "padding" so that aData can be the starting position of the first allocation unit, letting the scheme work under the various alignments of different platforms.

6.2.3 Analysis of the details

With the overall picture above in mind, this section analyzes the implementation details.

(1) MemoryPool's constructor is as follows:

MemoryPool::MemoryPool( USHORT _nUnitSize,
                        USHORT _nInitSize, USHORT _nGrowSize )
{
    pBlock      = NULL;             ①
    nInitSize   = _nInitSize;       ②
    nGrowSize   = _nGrowSize;       ③

    if ( _nUnitSize > 4 )
        nUnitSize = (_nUnitSize + (MEMPOOL_ALIGNMENT-1)) & ~(MEMPOOL_ALIGNMENT-1); ④
    else if ( _nUnitSize <= 2 )
        nUnitSize = 2;              ⑤
    else
        nUnitSize = 4;
}

As ① shows, when a MemoryPool is created it does not immediately request the memory block that will actually satisfy allocations; that is, the block list starts out empty.

② and ③ set "the number of allocation units in the first memory block created" and "the number of allocation units in each block created thereafter" respectively. Both values are specified through the constructor's parameters when the MemoryPool is created and do not change for the lifetime of the MemoryPool object.

The code that follows sets nUnitSize with reference to the incoming _nUnitSize parameter, taking two factors into account. First, as mentioned earlier, each unit in the free state uses its first two bytes to store "the number of its next free unit", so every allocation unit must be "at least" "two bytes"; that is the reason for the assignment at ⑤. Second, for _nUnitSize larger than 4 bytes, ④ rounds it up to the smallest multiple of MEMPOOL_ALIGNMENT that is not less than _nUnitSize (assuming MEMPOOL_ALIGNMENT is a power of 2). For example, if _nUnitSize is 11, then with MEMPOOL_ALIGNMENT equal to 8, nUnitSize is 16; with MEMPOOL_ALIGNMENT equal to 4, nUnitSize is 12; with MEMPOOL_ALIGNMENT equal to 2, nUnitSize is 12; and so on.
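The rounding expression at ④ can be checked standalone. The helper name roundUp is invented for this sketch; it simply wraps the same expression, and the alignment must be a power of two for the bit trick to work.

```cpp
// Round x up to the nearest multiple of a power-of-two alignment a:
// adding a-1 carries x past the next boundary, and masking with ~(a-1)
// clears the low bits, snapping down onto that boundary.
unsigned roundUp(unsigned x, unsigned a) {
    return (x + (a - 1)) & ~(a - 1);
}
```

With x = 11 this reproduces the chapter's examples: alignment 8 gives 16, alignment 4 gives 12, alignment 2 gives 12.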

(2) When a memory request is made to the MemoryPool:

void* MemoryPool::Alloc()
{
    if ( !pBlock )           ①
    {
        pBlock = new(nInitSize, nUnitSize) MemoryBlock(nInitSize, nUnitSize);
        if ( !pBlock )
            return NULL;
        return (void*)(pBlock->aData);
    }

    MemoryBlock* pMyBlock = pBlock;
    while (pMyBlock && !pMyBlock->nFree )        ②
        pMyBlock = pMyBlock->pNext;

    if ( pMyBlock )          ③
    {
        char* pFree = pMyBlock->aData+(pMyBlock->nFirst*nUnitSize);
        pMyBlock->nFirst = *((USHORT*)pFree);
        pMyBlock->nFree--;
        return (void*)pFree;
    }
    else                     ④
    {
        if ( !nGrowSize )
            return NULL;

        pMyBlock = new(nGrowSize, nUnitSize) MemoryBlock(nGrowSize, nUnitSize);
        if ( !pMyBlock )
            return NULL;

        pMyBlock->pNext = pBlock;
        pBlock = pMyBlock;

        return (void*)(pMyBlock->aData);
    }
}

MemoryPool satisfies a memory request in four main steps.

① first checks whether the pool's block list is empty. If it is, this is the very first memory request, so the pool allocates a block with nInitSize units from the process heap and initializes it (the initialization mainly sets up the MemoryBlock structure's members and builds the initial free-unit list; the code was analyzed in detail above). If the block is obtained and initialized successfully, the first allocation unit is returned to the caller; that first unit begins at the last byte of the block's MemoryBlock structure.

② comes into play when the pool already has memory blocks (the block list is not empty): it traverses the block list, looking for a block that still has a "free allocation unit".

③ If a block with a free unit is found, the pool "locates" the free allocation unit that this block can now hand out. "Locating" steps from the starting position aData, the last byte of the MemoryBlock structure, in strides of MemoryPool's nUnitSize. Once the unit is found, the block's bookkeeping must be updated: its nFree is decremented (one fewer free unit than before), and the block's free-unit list must be adjusted. At this point pMyBlock->nFirst is the head of the block's free-unit list, and the number of the next free unit is stored in the first two bytes of the unit pMyBlock->nFirst designates (the unit just located). Taking the two-byte value at the located position and assigning it to pMyBlock->nFirst establishes the new head of the block's free-unit list, that is, the number of the unit the next allocation will hand out (provided nFree is still greater than zero). After this maintenance, the address of the just-located free unit can be returned to the calling function. Note that because the unit is now allocated, and the block maintains no information about allocated units, the two bytes at the unit's start are no longer useful.

Seen from another angle, this free unit is handed to the calling function, and what the caller does with the memory neither the pool nor the block knows or cares; once handed over, the unit's content is meaningless to them. The caller will almost certainly overwrite the unit's original content, erasing those first two bytes. So each unit introduces no redundant data for link maintenance: it directly reuses its own first two bytes, which the caller may use freely while the unit is allocated and which store maintenance information, the number of the next free unit, while it is free. This is a good example of using memory effectively.

④ is reached when the traversal at ② finds no block with a free allocation unit. Then a new memory block must be requested from the process heap. Because this is not the first block, the new block contains nGrowSize allocation units rather than nInitSize. As at ①, the newly requested block is initialized, then inserted at the head of the pool's block list, and its first allocation unit is returned to the caller. The new block goes to the head of the list because it still has many free units available for allocation (unless nGrowSize equals 1, which should be unlikely, since the whole point of a memory pool is to request a large piece of heap memory once to serve many subsequent requests); placing it at the head lets the next request find it immediately, reducing the traversal time at ②.

The pool of Figure 6-2 can be used to illustrate the MemoryPool::Alloc process. Figure 6-3 shows the internal state of a MemoryPool at some moment.

Figure 6-3 Internal state of a MemoryPool at some moment

Because the MemoryPool's block list is not empty, the list is traversed; and because the first block has a free unit, the allocation is made from the first block. Checking nFirst gives the value m, so pBlock->aData + (pBlock->nFirst * nUnitSize) locates the starting position of free unit number m (call it pFree). Before pFree is returned, the block's bookkeeping must be updated. First nFree is decremented, then the two-byte value at pFree is read (it must be pointed out that this is not a single byte at that position: the byte there and the byte immediately after it together form one USHORT value, which should not be mistaken). The value turns out to be k, so the block's nFirst is changed to k. Then pFree is returned. The MemoryPool's structure at this point is shown in Figure 6-4.

Figure 6-4 MemoryPool structure after the allocation

The unit that was originally first in line for allocation (the one numbered m) is now shown as allocated, and pBlock's nFirst points to the number of the free unit that followed unit m, namely k.

(3) MemoryPool reclaiming memory:

void MemoryPool::Free( void* pFree )
{
    MemoryBlock* pMyBlock = pBlock;

    while ( ((ULONG)pMyBlock->aData > (ULONG)pFree) ||
            ((ULONG)pFree >= ((ULONG)pMyBlock->aData + pMyBlock->nSize)) )  ①
    {
        pMyBlock = pMyBlock->pNext;
    }

    pMyBlock->nFree++;                     ②
    *((USHORT*)pFree) = pMyBlock->nFirst;  ③
    pMyBlock->nFirst = (USHORT)(((ULONG)pFree-(ULONG)(pMyBlock->aData)) / nUnitSize);  ④

    if (pMyBlock->nFree*nUnitSize == pMyBlock->nSize )  ⑤
    {
        // ... return the whole block to the process heap
    }
    else
    {
        // ... ⑥ move this block to the head of the block list
    }
}

As noted earlier, when an allocation unit is reclaimed, its memory block may be returned to the process heap, or may be moved to the head of the pool's block list. Both operations modify the list structure, so the block's predecessor in the block list must be known.

① traverses the pool's block list to determine which block the unit being reclaimed (pFree) falls inside, judging by comparing pointer values.

By the time ② runs, pMyBlock has been found to be the block containing the unit pFree points to (strictly, the code should also handle the case of pMyBlock becoming NULL, meaning pFree does not belong to this pool and thus cannot be returned to it; readers can add that check themselves). ② then increments pMyBlock's nFree by 1, recording that the block has one more free allocation unit.

③ updates the block's free-unit list information: the first two bytes of the unit being reclaimed are set to the number of the unit that was previously the block's first free unit.

④ changes pMyBlock's nFirst to the number of the unit being reclaimed; that number is obtained by computing the difference between the unit's starting position and pMyBlock's aData, then dividing by the stride (nUnitSize).

In effect, steps ③ and ④ are what "really reclaim" the unit. It is worth noticing that they make the reclaimed unit the block's next unit to be handed out, splicing it in at the head of the free-unit list. Its memory address, however, never changes: an allocation unit's address is the same whether it is allocated or free. What changes is only its state (allocated versus free) and, when it is free, its position in the free-unit list.

⑤ checks whether, once the reclamation is complete, all units of the block containing the reclaimed unit are in the free state, and whether the block is at the head of the block list. If so, the whole block is returned to the process heap, and the block list structure is adjusted accordingly.

Note that to determine whether all of a block's units are in the free state, the code does not traverse all its units; it tests whether nFree multiplied by nUnitSize equals nSize. nSize is the combined size of all the block's allocation units, not including the MemoryBlock header. The intent is visible here: to check quickly whether all of a block's units are free, combining just nFree and nUnitSize yields the answer without traversing and counting the units that are in the free state.

Note also that one cannot instead compare nFree against nInitSize or nGrowSize to decide whether all of a block's units are free, because the first block allocated (the one with nInitSize units) may drift toward the back of the list, or may even, at some point when all its units became free, already have been returned to the process heap. That is, when a unit is reclaimed there is no telling whether its block holds nInitSize or nGrowSize units, so comparing nFree against nInitSize or nGrowSize cannot determine whether all of a block's allocation units are in the free state.

Taking the memory pool state after the allocations above as an example, suppose a unit in the last memory block now needs to be recycled (it has been allocated; assume its number is m, and that pFree points to it), as shown in Figure 6-5.

It is not hard to see that nFirst changes from its original value 0 to m. That is, the next unit this block hands out will be unit m, not unit 0: the most recently recycled unit is allocated first. In that respect the scheme works like a stack, last in, first out, except that here "in" means recycling and "out" means allocation. Correspondingly, the first two bytes of unit m are set to 0, i.e. to the unit the block would previously have handed out next, which shows that the most recently recycled unit is linked in at the head of the block's free-unit list. And, of course, nFree is incremented by 1.

Figure 6-5 State of the memory pool after allocation

Before execution reaches point ⑥, the state is as shown in Figure 6-6.

Figure 6-6 State of the memory pool before execution reaches point ⑥

It should be noted that although the unit has been "recycled", pFree still points to unit m. During the recycle the unit's first two bytes were overwritten, but the rest of its contents are unchanged. Moreover, from the point of view of the whole process, unit m's memory is still "valid," because the "recycling" here returns it to the memory pool, not to the process heap, so the program can still access this unit through pFree. But doing so is a very dangerous operation: the unit's first two bytes have already been overwritten during the recycle, and the memory pool may soon hand the unit out again. Accessing the recycled unit through pFree is therefore wrong; a read will return corrupted data, and a write may damage other data in the program, so great care is needed.

Next the code determines how the memory block is used internally, and where it sits in the block list. If the region elided as "..." in the figure still contains units that are allocated, then nFree multiplied by nUnitSize is not equal to nSize. Because this memory block is not at the head of the list, it must be moved to the head, as shown in Figure 6-7.

Figure 6-7 MemoryBlock moved as a result of recycling

If instead the region elided as "..." consists entirely of free allocation units, then nFree multiplied by nUnitSize does equal nSize. Because this memory block is not at the head of the list, in this case the whole block is returned to the process heap, and the structure of the memory pool after the recycle is as shown in Figure 6-8.

Figure 6-8 Structure of the memory pool after recycling

A memory block is initialized when it is created; the main task is to set up the initial free-unit list. The detailed code follows:

MemoryBlock::MemoryBlock (USHORT nTypes, USHORT nUnitSize)
        : nSize  (nTypes * nUnitSize),
          nFree  (nTypes - 1),                       // ④
          nFirst (1),                                // ⑤
          pNext  (0)
{
        char * pData = aData;                        // ①
        for (USHORT i = 1; i < nTypes; i++)          // ②
        {
                *reinterpret_cast<USHORT*>(pData) = i; // ③
                pData += nUnitSize;
        }
}

Here you can see that at ① pData is initialized to aData, i.e. to unit 0. But the loop at ② starts i from 1, and inside the loop ③ sets the first two bytes of the unit pData points to, to i. That is, the first two bytes of unit 0 are set to 1, the first two bytes of unit 1 are set to 2, and so on, until the first two bytes of unit (nTypes-2) are set to (nTypes-1). This means that in the initial memory block, the free-unit list starts at unit 0 and links each unit to the next in turn, with the second-to-last unit pointing to the last one.

Also note that in the initializer list, nFree is initialized to nTypes-1 (rather than nTypes) and nFirst to 1 (rather than 0). This is because the 1st unit, unit number 0, is handed out immediately after construction. Note, too, that the first two bytes of the last unit are given no initial value, because in the initial block the last unit has no next free allocation unit. As the earlier example shows, however, once the last unit has been allocated and then recycled, its first two bytes are set.

Figure 6-9 shows the state of a memory block after initialization.

Figure 6-9 State of a memory block after initialization

When the memory pool itself is destructed, it must return all of the pool's memory blocks to the process heap, walking the block list and releasing each block in turn:

    MemoryPool::~MemoryPool()
    {
        MemoryBlock* pMyBlock = pBlock;
        while ( pMyBlock )
        {
            MemoryBlock* pTemp = pMyBlock;  // step past the block
            pMyBlock = pMyBlock->pNext;     // before releasing it
            delete pTemp;
        }
    }

6.2.4 Usage

Having analyzed the internal workings of the memory pool, this section describes how to use it. As the analysis above shows, the memory pool exposes two main external interfaces, Alloc and Free. Alloc returns an allocation unit (a fixed-size piece of memory) to the caller, and Free recycles the allocation unit represented by the pointer passed in back into the memory pool. The pool's configuration is specified through MemoryPool's constructor: the allocation unit size, the number of allocation units contained in the pool's 1st memory block, and the number of such units contained in each memory block the pool subsequently requests as it grows.

In summary, when an application needs to create and destroy objects of some performance-critical class more efficiently, it can arrange for all objects of that class to obtain their space from a dedicated memory pool, and merely return that space to the pool when an object is destroyed. A natural way to satisfy the requirement that "all objects of the class are allocated from the same memory pool object" is to declare a static memory pool member in the class. To make all objects of the class obtain their memory from this pool instead of from the default process heap, the class must overload operator new; correspondingly, so that recycling goes to the memory pool rather than to the default process heap, it must also overload operator delete. In the operator new function, the pool's Alloc satisfies every memory request for objects of the class, and when an object is destroyed, operator delete completes the recycle by calling the pool's Free.

6.2.5 Performance Comparison

To test the effect of using the memory pool, a small test program was run: with the memory pool mechanism it took 297 ms, while without it the same program took 625 ms, a speedup of 52.48%. The improvement has two main causes. First, apart from the occasional requests to allocate a memory block from, or return one to, the process heap, the vast majority of allocations and deallocations are satisfied by the memory pool out of memory blocks it has already acquired, and never touch the process heap directly; dealing directly with the process heap is a very time-consuming operation. Second, this is a single-threaded memory pool: as can be seen, the pool's Alloc and Free operations add no thread-protection measures. So if class A uses the memory pool, all creation and destruction of class A objects must occur in the same thread. However, if class A uses a memory pool and class B also uses one, the thread using class A and the thread using class B need not be the same thread, since each class has its own pool.

In addition, as already discussed in Chapter 1, memory pool technology places objects of the same type in adjacent areas of memory, and a program often traverses objects of the same type together. The number of page faults occurring while the program runs should therefore be reduced, although for complex applications this can generally only be verified in a real environment.



Requesting and releasing memory has a great impact on an application's overall performance, and in many applications it can even become the bottleneck. The usual way to remove a bottleneck caused by memory allocation and release is to provide a memory pool suited to the application's actual memory usage. Memory pools improve performance chiefly because they can exploit "features" of how the application really uses memory in certain scenarios: for example, some memory allocations and releases always occur within one thread, or objects of certain types are created and destroyed far more frequently than objects of other types, and so on. For such features, a memory pool tailor-made to these particular memory-usage scenes can be provided. Such a pool eliminates the operations that the default system memory mechanism performs unnecessarily for the actual application scenario, thereby raising the application's overall performance.


About the authors


Ping Honghua, Master of Computer Science and Technology, Tsinghua University. Senior software engineer at the IBM China Development Lab, which he joined in December 2003; mainly engaged in IBM product development and performance optimization. His interests include C/C++ application performance tuning, Windows application development, and Web application development.

Xu Ying, Master of Computer Science and Technology, Shandong University. Joined the IBM China Development Lab in April 2003 and is currently a development manager there; has long been engaged in developing IBM software products on multiple operating system platforms. Participated in the performance optimization of IBM products on the Windows and Linux platforms, and has extensive experience with the C/C++ programming language and the cross-platform development of large software systems.

Cheng Yuan, Master of Computer Science and Technology, Peking University. Senior software engineer at the IBM China Development Lab, which he joined in 2003; mainly engaged in IBM Productivity Tools product development and performance optimization. His interests include the C/C++ programming language, software performance engineering, and performance testing and optimization tools on the Windows/Linux platforms.

Wang Lei, Master of Computer Science and Technology, Beijing University of Aeronautics and Astronautics. Currently a senior software engineer at the IBM China Development Center, which he joined in December 2002, working to improve the productivity of application software development. His interests include C/C++ application performance tuning and Java application performance tuning.


