Buffer Pool Extension in SQL Server 2014 part 4: benchmark testing for update operations

One of my previous articles was devoted to a series of tests comparing the speed of data read operations with BPE. But in real life we do not only read data, we also update it, so I decided to check whether BPE affects the performance of update operations. Before running the new tests, several crucial points should be taken into account. First, insert, update and delete operations are logged, which means the transaction log will be under load and much will depend on the disk subsystem. That is why I took a couple of trial measurements and then placed the transaction log on an SSD to minimize its impact on the update operations. Second, before updating data SQL Server reads the page into the Buffer Pool, modifies it and marks it as “dirty”. The page is later written to disk by the Checkpoint or Lazy Writer process, and the first of these may indirectly affect the test results, so I turned it off. Third, I wrote the update query so that the new data is the same size as the old data, which prevents page splits. Fourth, the Buffer Pool Extension can store only “clean” pages, so update operations will “wash out” data from it. I will provide more details on this later; for now, let’s look at the configuration of the test bench.
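As a sketch of how this environment can be prepared (the exact commands are my assumption; the article does not show them), automatic checkpoints can be suppressed with trace flag 3505, and the log file can be relocated to an SSD volume:

```sql
-- Trace flag 3505 disables automatic checkpoints instance-wide
-- (manual CHECKPOINT still works).
dbcc traceon (3505, -1);
go
-- Point the log file at an SSD volume (logical name and path
-- are hypothetical and depend on your setup).
alter database [BufferPoolExtension_test]
    modify file (name = N'BufferPoolExtension_test_log',
                 filename = N'E:\SSD\BufferPoolExtension_test_log.ldf');
go
```

Note that the new log location takes effect only after the database is taken offline, the physical file is moved to the new path, and the database is brought back online.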

For testing I will again use a virtual machine with 4 GB of RAM. I created a separate database with one table. The table has 2 columns, [id] and [n]. The first column has the integer data type and is the primary key with a clustered index on it. The second column simply represents some payload data.


use [master];
go
create database [BufferPoolExtension_test];
go
use [BufferPoolExtension_test];
go
create table [dbo].[TestTable] (
    [id] int identity(1,1) not null,
    [n] [char](1000) not null,
    constraint [PK_TestTable] primary key clustered ([id])
);
go
insert into [dbo].[TestTable] ([n])
values (replicate('A', 1000));
go 1000000

For this test I will execute 10,000 random single-row updates. This type of load hardly ever occurs on its own in real life, where it usually comes together with random reads, but here we need to see how the update operation behaves in isolation. I will run the query several times and take the arithmetic average of the results:

update [dbo].[TestTable]
set [n] = replicate(char(65 + cast(rand(checksum(newid())) * 26 as tinyint)), 1000)
where [id] = cast(rand(checksum(newid())) * 1000000 as int) + 1;
go 10000

The result of the 1st test (all the data is in memory and BPE is off) is 2.7 seconds. This is the so-called ideal situation: there is enough memory and all the data is in the cache.

For the next test I will modify the query by adding a command that removes all clean buffers from the Buffer Pool. This lets us look at the worst case, when there is no data in the cache at all. The result is 2 minutes 32 seconds.

dbcc dropcleanbuffers;
go
update [dbo].[TestTable]
set [n] = replicate(char(65 + cast(rand(checksum(newid())) * 26 as tinyint)), 1000)
where [id] = cast(rand(checksum(newid())) * 1000000 as int) + 1;
go 10000

In the 3rd test I will limit the maximum amount of memory SQL Server can use to 256 MB, which is almost 4 times less than the size of the data, and run the test again. I should note that during the test at most 90 MB of data (only about 8% of the table) is in the cache. The result is 2 min 29 sec.
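The memory limit can be set with sp_configure (a sketch of the commands I used; the option name is the standard one):

```sql
-- 'max server memory (MB)' is an advanced option,
-- so advanced options must be visible first.
exec sp_configure 'show advanced options', 1;
reconfigure;
go
-- Cap the Buffer Pool at 256 MB.
exec sp_configure 'max server memory (MB)', 256;
reconfigure;
go
```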

In the 4th test I will enable a 4 GB Buffer Pool Extension and scan the whole table to make sure all the data is either in memory or in the BPE. It is worth noting that almost all the data ends up in the BPE, which suggests SQL Server tries to push clean pages out to the BPE rather than keep them in memory.

And here is the most interesting part of the story. The execution time varied significantly: first it was 22 seconds, then it suddenly jumped to 40 seconds, then to 55 seconds, and after a couple more runs it settled at about 1 minute. At that point about 75% of the data was in the BPE and 8% in memory. Subsequent runs kept “washing out” the cache, but the execution time did not change much: on average it was 1 minute 15 seconds. After several tests I tried executing Checkpoint manually to see whether the “dirty” pages already written to disk would move back into the BPE. They do, but unfortunately too slowly. However, if you start a load simulating random reads at the same time, the data moves back into the BPE again.
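A sketch of how the BPE can be enabled and how the memory/BPE split can be observed (the file path is hypothetical; the DMV column is the one SQL Server 2014 provides for this purpose):

```sql
-- Enable a 4 GB BPE file on an SSD (path is an assumption).
alter server configuration
set buffer pool extension on
    (filename = N'E:\SSD\BPE.bpe', size = 4 GB);
go
-- Warm the cache with a full clustered-index scan.
select count_big(*) from [dbo].[TestTable] with (index(1));
go
-- See how much of the database sits in memory vs. in the BPE:
-- is_in_bpool_extension = 1 marks pages stored in the extension file.
select
    is_in_bpool_extension,
    count(*) * 8 / 1024 as size_mb
from sys.dm_os_buffer_descriptors
where database_id = db_id('BufferPoolExtension_test')
group by is_in_bpool_extension;
```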

And for the last test I will store the whole database on an SSD, disable the BPE and clear the cache before each run. The result is 7 seconds.
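The preparation for this run can be sketched as follows (the commands mirror the ones used earlier):

```sql
-- Turn the Buffer Pool Extension off.
alter server configuration set buffer pool extension off;
go
-- Clear clean buffers so every run starts with a cold cache.
dbcc dropcleanbuffers;
go
```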

The table below contains all the results:

| № | Description of the test | Result, time |
|---|---|---|
| 1 | All data is stored in memory | 2.7 sec. |
| 2 | All data is stored on disk (cold cache, no data in memory) | 2 min. 32 sec. |
| 3 | The amount of RAM for SQL Server is limited; about 8% of the data is in the cache | 2 min. 29 sec. |
| 4 | RAM is limited as in test №3, but BPE is enabled and almost all the data is in the BPE | 1 min. 15 sec. * |
| 5 | The database is stored on an SSD (cold cache, no data in memory) | 7 sec. |

Based on these tests I can draw the following conclusion: BPE can speed up random data updates, provided there is not enough memory to cache the whole table and the data is already in the BPE. The speed-up is not as large as for read operations. But considering that OLTP systems generally perform far fewer updates than reads, performance under a mixed load (e.g. 80% reads and 20% updates) should increase noticeably.

All the articles about Buffer pool extension:

Buffer Pool Extension in SQL Server 2014
Buffer Pool Extension in SQL Server 2014 part 2: benchmark testing
SQL Server 2014 Buffer Pool Extension part 3: system monitoring
Buffer Pool Extension in SQL Server 2014 part 4: benchmark testing for update operations