Around the Storage Block

HPE Storage Solutions for SAP HANA Native Storage Extension (Article 2 in 3-part series)

In my previous article, HPE Storage Solutions for SAP HANA Data Tiering, we saw how SAP HANA environments benefit immensely from the implementation of data tiering. In this blog we will talk about how HPE Storage enables you to utilize the Native Storage Extension technology.

SAP HANA Native Storage Extension, or NSE, is a tiering technology that helps segregate hot and warm data without the need for an external hardware tier.

Figure 1: SAP HANA Native Storage Extension with HPE Storage


In principle, SAP HANA NSE extends the existing data volume in a SAP HANA system to store warm data. This frees up memory for hot, mission-critical data. As seen in Figure 1, any SAP HANA implementation divides the main memory into two halves: the working area and the HANA hot data area. Once the HANA admin or the SAP application designates data as hot or warm, NSE flushes all the warm data to the underlying extended data volume. In addition, NSE creates a buffer cache in the HANA hot data area to hold warm data temporarily. If the SAP application using HANA queries warm data, that data gets loaded page by page into this buffer cache. If subsequent queries access the same warm data pages, those pages are served from the buffer cache for faster access and better query performance. NSE implementation does not impact log volume sizing.
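As a minimal sketch, the buffer cache limit can be adjusted through the indexserver.ini configuration, and its usage observed through a monitoring view. The section and parameter names (`buffer_cache_cs`, `max_size`) and the view `M_BUFFER_CACHE_STATISTICS` are from recent SAP HANA 2.0 revisions; verify them against your release before use.

```sql
-- Sketch: cap the NSE buffer cache at 200 GB (value in MB).
-- Parameter names assumed from recent HANA 2.0 revisions; verify first.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('buffer_cache_cs', 'max_size') = '204800' WITH RECONFIGURE;

-- Inspect current buffer cache sizing and usage:
SELECT CACHE_NAME, MAX_SIZE, ALLOCATED_SIZE, USED_SIZE
  FROM M_BUFFER_CACHE_STATISTICS;
```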

Column-loadable or page-loadable data?
Based on this load-unit behavior, hot and warm data in the NSE context are called column-loadable and page-loadable data, respectively. Column-loadable data is loaded into memory column by column, while page-loadable data is loaded into memory page by page. This distinction can be implemented at a table, partition, column, or index level. Here are the major benefits of using NSE:

1. Increased HANA database capacity. With NSE, more data can be stored on the same HANA infrastructure. All it requires is extending the storage volume and designating data as column- or page-loadable.
2. Performance and resource segregation. Column-loadable, mission-critical data should be the most performant in the database, and NSE ensures that the main memory is dedicated to achieving this. Memory resources are not wasted on data that is accessed infrequently.
3. Lower TCO. With no extra compute or networking hardware required, NSE enables you to lower your TCO per TB of data.
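Designating data as page-loadable is done with the load unit clause of ALTER TABLE. A minimal sketch follows; the table and column names are hypothetical, and the exact clause forms vary by HANA revision, so verify against the SAP HANA SQL reference for your release.

```sql
-- Make an entire (hypothetical) table page-loadable, i.e. warm, NSE-managed:
ALTER TABLE SALES_HISTORY PAGE LOADABLE CASCADE;

-- Or restrict the change to a single (hypothetical) column:
ALTER TABLE SALES_HISTORY ALTER (COMMENTS ALTER PAGE LOADABLE);

-- Revert to the default fully in-memory behavior:
ALTER TABLE SALES_HISTORY COLUMN LOADABLE CASCADE;
```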

HPE Storage supports NSE with its leading storage products, namely HPE Primera, HPE Nimble, and HPE 3PAR systems. These are SAP HANA TDI certified and can be used as is with NSE. As expected, query performance for column-loadable data will be better than for page-loadable data, because warm data pages must first be loaded from disk. However, once the pages are in the buffer cache, subsequent queries perform the same as against in-memory hot data.

As an example, here is how NSE helps increase the total database size for SAP HANA on a server with 2 TB of memory:

| | DB and storage sizing without NSE | NSE employed, with data divided into page- and column-loadable types |
|---|---|---|
| Total RAM | 2 TB | 2 TB |
| Work area | 1 TB (50% of RAM) | 1 TB (50% of RAM) |
| Buffer cache | - | 200 GB (10% of RAM) |
| HANA hot data | 1 TB (50% of RAM) | 800 GB |
| HANA warm data | - | 200 GB x 8 = 1.6 TB |
| Data volume size | 1.2 x RAM = 2.4 TB | 2.4 TB + 1.6 TB = 4 TB |
| Total DB size | 1 TB | 800 GB + 1.6 TB = 2.4 TB |

(SAP's sizing guidance allows warm data of up to 8x the buffer cache, which itself defaults to 10% of HANA memory; hence 200 GB x 8 = 1.6 TB of warm data.)


Thus NSE increases the total size of the database with the same amount of compute resources and just some additional storage.
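The arithmetic behind this table can be sketched in a few lines. The 50% work-area split, the 10% default buffer cache, the 1.2 x RAM data volume rule, and the 1:8 buffer-cache-to-warm-data ratio are the sizing assumptions used above.

```python
# Sketch of the sizing arithmetic behind the table above (sizes in GB,
# using the blog's round decimal figures: 2 TB = 2000 GB).
RAM = 2000

# Without NSE: half the RAM is work area, the other half holds hot data.
work_area = RAM // 2                 # 1000 GB
hot_no_nse = RAM // 2                # 1000 GB
data_volume_no_nse = 1.2 * RAM       # 2400 GB = 2.4 TB
total_db_no_nse = hot_no_nse         # 1000 GB = 1 TB

# With NSE: a buffer cache (10% of RAM by default) is carved out of the
# hot-data area, and warm data may be sized at up to 8x the buffer cache.
buffer_cache = RAM // 10             # 200 GB
hot_nse = RAM // 2 - buffer_cache    # 800 GB
warm_nse = 8 * buffer_cache          # 1600 GB = 1.6 TB
data_volume_nse = data_volume_no_nse + warm_nse   # 4000 GB = 4 TB
total_db_nse = hot_nse + warm_nse    # 2400 GB = 2.4 TB

print(f"Total DB size grows from {total_db_no_nse} GB to {total_db_nse} GB")
```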

For more details on the implementation, sizing, and examples, please refer to the technical whitepaper I wrote or the BrightTALK session I gave on this topic.

These illustrate NSE implementation using TPC-H, an industry-standard decision support benchmark dataset. To use NSE, we first partition a table into two partitions using date as the criterion. The first partition holds the data from the last two years, which carries the maximum value in most enterprises. The second partition holds all the data prior to the two-year mark. This represents the usual data priority and value criteria that most enterprises have. Once the tables are partitioned, we assign the second partition to be page-loadable, ensuring that the corresponding data is shifted to disk. The first partition, by default, remains column-loadable and in memory. Lastly, we show how SQL queries perform on both of these types of data.
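Using the TPC-H LINEITEM table, these steps might look as follows. This is a sketch: the boundary dates are illustrative, the partition-level load-unit syntax differs across HANA revisions, and the `LOAD_UNIT` column of `M_CS_TABLES` is assumed from recent HANA 2.0 releases, so verify each statement against the SQL reference for your system.

```sql
-- Step 1 (illustrative boundary dates): range-partition on the ship date,
-- recent two years in partition 1, everything older in the rest partition.
ALTER TABLE LINEITEM PARTITION BY RANGE (L_SHIPDATE)
  ((PARTITION '2023-01-01' <= VALUES < '2025-01-01',
    PARTITION OTHERS));

-- Step 2: mark the older partition (here partition 2, the OTHERS range)
-- page-loadable; partition 1 stays column-loadable in memory by default.
ALTER TABLE LINEITEM ALTER PARTITION 2 PAGE LOADABLE;

-- Verify the load unit per partition:
SELECT TABLE_NAME, PART_ID, LOAD_UNIT
  FROM M_CS_TABLES
 WHERE TABLE_NAME = 'LINEITEM';
```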

Please drop in a comment on how this solution fits your or your customer's SAP environment, and let us know if you have any questions.

Until my next article -- Happy tiering!

Anshul Nagori
Hewlett Packard Enterprise


About the Author


Meet HPE Blogger Anshul Nagori, Worldwide SAP Technical Marketing Engineer. Anshul works for the worldwide storage solutions team at HPE, and has more than 10 years of IT experience. He is a regular speaker at SAP and HPE events. His areas of focus include SAP HANA storage, data management, tiering, virtualization and data protection solutions. Connect with Anshul on LinkedIn!