How to improve memory efficiency for SQL Server Analysis Services

While SQL Server itself now runs on Linux, SQL Server Analysis Services (SSAS) is a separate, Windows-only service that ships with Microsoft SQL Server, so tuning its hardware configuration calls for a slightly different approach.

Memory, storage, and processor are the primary components to address when optimizing SSAS, and here we run through some tips for improving memory efficiency. For more advice on SSAS memory, you can check out this guide from SentryOne.

The importance of memory in SSAS

SSAS is a memory-intensive service: it consumes around 50 MB of memory even when idle, and its footprint grows quickly as the load on the instance increases. You can set memory threshold limits for the service, which are configured in the General section of the SSAS instance properties.

  • HardMemoryLimit: Sets the point at which the SSAS instance starts rejecting requests because of memory pressure. The default value is 0, which means the threshold sits midway between TotalMemoryLimit and either the physical memory or the virtual address space, whichever is smaller.
  • LowMemoryLimit: The lower threshold at which SSAS starts releasing memory allocated to infrequently used objects.
  • TotalMemoryLimit: The upper threshold at which SSAS releases memory far more aggressively to make room for requests that are currently executing and for incoming high-priority requests. The limit applies to both multidimensional and tabular instances. The default is 80% of physical memory or the virtual address space, whichever is smaller.
  • VertiPaqMemoryLimit: The lower threshold at which SSAS starts releasing memory allocated to infrequently used VertiPaq objects; it also defines the level of memory consumption at which paging begins, if memory paging to disk is enabled.

In each of these settings you can enter either a percentage or an absolute value: a value up to 100 is interpreted as a percentage, while anything above 100 is interpreted as a number of bytes. Percentages are generally the safer choice.
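To make that rule concrete, here is a minimal Python sketch of how a configured value resolves to an actual limit (the function name and the 64 GB server are purely illustrative; SSAS applies this logic internally, you never write it yourself):

    def resolve_memory_limit(configured_value: float, total_memory_bytes: int) -> int:
        """Effective limit in bytes for an SSAS memory property.

        Values from 0 through 100 are read as a percentage of total memory
        (physical memory or virtual address space, whichever is smaller);
        values above 100 are read as an absolute number of bytes.
        """
        if configured_value <= 100:
            return int(total_memory_bytes * configured_value / 100)
        return int(configured_value)

    ram = 64 * 1024**3                              # 64 GiB server
    print(resolve_memory_limit(80, ram) / 1024**3)  # ~51.2 GiB, i.e. 80% of RAM
    print(resolve_memory_limit(20 * 1024**3, ram))  # above 100, so a literal 20 GiB in bytes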


Another option is to set the VertiPaqPagingPolicy property to 0. This disables memory paging to disk for a tabular instance, which means you will get out-of-memory errors once memory is fully consumed. Setting it to 0 can deliver a performance boost, but you will need to balance that against your memory headroom.
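Before turning paging off, it is worth a quick back-of-the-envelope check that the model genuinely fits in memory. The Python sketch below assumes you already know the model's approximate in-memory size; the 2x processing overhead is a common rule of thumb (the old and new copies of the data coexist during a full process), not an exact SSAS figure:

    def fits_without_paging(model_size_bytes: int,
                            physical_memory_bytes: int,
                            vertipaq_memory_limit_pct: float = 60.0,
                            processing_overhead: float = 2.0) -> bool:
        """Rough check: will the model stay in memory if paging to disk is off?"""
        budget = physical_memory_bytes * vertipaq_memory_limit_pct / 100
        return model_size_bytes * processing_overhead <= budget

    ram = 64 * 1024**3      # 64 GiB server
    model = 18 * 1024**3    # roughly 18 GiB tabular model
    print(fits_without_paging(model, ram))  # True: 36 GiB needed vs. a 38.4 GiB budget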

Configuring your memory limits

SSAS offers configurable memory limits so that it can cope with both high and low amounts of total system memory, but be aware that running SSAS on the same machine as a relational SQL Server instance can cause problems, because the two engines compete for the same memory. You will also want to understand the cleaner process and its impact on performance.

In Tabular mode, the limits to pay most attention to are the low, total, and VertiPaq memory limits. LowMemoryLimit defaults to 65% of the server's physical memory, while TotalMemoryLimit defaults to 80%. When the low limit is exceeded, the SSAS cleaner starts removing old objects from the cache; once the total memory limit is exceeded, the cleaner becomes much more aggressive about it.

VertiPaqMemoryLimit defaults to 60% of total physical memory and works hand in hand with the paging policy: in Tabular mode it defines the memory limit for the in-memory (VertiPaq) storage engine.
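To see what those defaults mean in absolute terms, here is a small Python sketch that works them out for a hypothetical server with 64 GB of RAM (assuming physical memory is smaller than the virtual address space, which is the normal case on 64-bit Windows):

    def default_tabular_thresholds(physical_memory_bytes: int) -> dict:
        """Default SSAS Tabular memory thresholds expressed in bytes.

        LowMemoryLimit 65%, TotalMemoryLimit 80%, VertiPaqMemoryLimit 60%;
        HardMemoryLimit defaults to 0, which SSAS treats as the midpoint
        between TotalMemoryLimit and total physical memory.
        """
        total = physical_memory_bytes * 0.80
        return {
            "LowMemoryLimit": int(physical_memory_bytes * 0.65),
            "TotalMemoryLimit": int(total),
            "HardMemoryLimit": int((total + physical_memory_bytes) / 2),
            "VertiPaqMemoryLimit": int(physical_memory_bytes * 0.60),
        }

    for name, value in default_tabular_thresholds(64 * 1024**3).items():
        print(f"{name}: {value / 1024**3:.1f} GiB")
    # LowMemoryLimit: 41.6, TotalMemoryLimit: 51.2,
    # HardMemoryLimit: 57.6, VertiPaqMemoryLimit: 38.4

On a dedicated 64 GB Tabular server, then, the cleaner starts trimming the cache at roughly 41.6 GB and works hardest above 51.2 GB.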

You can allocate more memory through the three settings above, but bear in mind that any value above 100 is interpreted as a number of bytes rather than a percentage.

Polish your columns

The overall size of the data model has a direct impact on memory efficiency, so a good first step is to limit the columns in the data model. Excluding unnecessary columns from the relational data source saves memory during columnar compression.

Another step is to limit cardinality wherever possible. Columns with a large number of unique values compress poorly and drag down memory efficiency. By modifying or splitting high-cardinality columns, for example stripping an unneeded time portion from a DateTime column, or splitting it into separate date and time columns, you decrease cardinality and improve compression.
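As a rough illustration, the pandas sketch below shapes a made-up fact table before it reaches the model: it drops a column reports never use and splits a timestamp into date and time-of-day columns. In practice you would usually do the same shaping in the source view or query, but the effect on cardinality is the same:

    import pandas as pd

    # Hypothetical fact table with a high-cardinality timestamp column.
    sales = pd.DataFrame({
        "OrderID": [1, 2, 3],
        "CustomerID": [10, 11, 10],
        "OrderTimestamp": pd.to_datetime([
            "2023-05-01 08:13:27", "2023-05-01 09:44:02", "2023-05-02 08:13:27"]),
        "Amount": [19.99, 5.00, 42.50],
        "InternalNotes": ["", "call back", ""],   # never used in reports
    })

    # 1. Drop columns the model never uses, so they are not compressed and stored at all.
    sales = sales.drop(columns=["InternalNotes"])

    # 2. Split the timestamp into separate date and time-of-day columns. Each part has
    #    far fewer distinct values than the combined timestamp, which compresses better.
    sales["OrderDate"] = sales["OrderTimestamp"].dt.date
    sales["OrderTime"] = sales["OrderTimestamp"].dt.floor("min").dt.time
    sales = sales.drop(columns=["OrderTimestamp"])

    print(sales.nunique())  # distinct values per column after the split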

