Top 10 New Features Of SQL Server 2019
It is no secret to anyone that software release cycles are more aggressive than ever. Now that the cloud has become ubiquitous as a strategic component of IT services, we are spoiled by a constant stream of new features and services. It follows that on-premises or “retail-delivered” software is also getting more frequent releases as customer expectations grow. This is how over the last four years we have had three major releases of SQL Server: 2016, 2017 (with the big milestone of SQL on Linux) and now SQL 2019.
SQL 2019 is shaping up to be a powerful evolutionary step for the features introduced in SQL 2017, while also bringing its own set of brand-new capabilities that continue to position the product as a “one-stop” software platform for all your data needs. With this in mind, and considering how much is built into the product nowadays, I have compiled this list of my top favorite features of SQL 2019! Of course, this is my blog and my opinion, and the list is not exhaustive; for the full list, refer to the Microsoft documentation. As you will see, my favorites are mostly technical features that will help developers and DBAs in their daily functions. Also, a quick disclaimer – all the images are from Microsoft’s own SQL Server documentation.
I am sure that your top 10 is likely to be very different than my top 10 so make sure you check it all out in detail! Here is my list:
1. Intelligent Query Processing Enhancements
What they are: This is a set of enhancements that affect the behavior of the Query Optimizer, the component inside SQL Server that generates the execution plans for queries. This includes row mode memory grant feedback for rowstore tables, table variable deferred compilation, batch mode on rowstore and more.
Why this matters: These are all behind-the-scenes improvements on the Query Optimizer that will improve the quality of the plans for all applicable queries. This means better performance overall after doing the upgrade.
Cost of adoption: Test your problem queries on a development instance to verify the improvements. That’s pretty much it. It’s one of those great improvements that works with no big changes required from the customer (I really like those as you will see from this list).
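Most of these enhancements light up automatically once the database runs under compatibility level 150. A minimal sketch (the database name is a placeholder):

```sql
-- Intelligent Query Processing features activate under compatibility level 150.
ALTER DATABASE MyDatabase
SET COMPATIBILITY_LEVEL = 150;

-- If a specific query regresses, an individual feature can be turned off
-- for the database without lowering the compatibility level, for example:
ALTER DATABASE SCOPED CONFIGURATION
SET BATCH_MODE_ON_ROWSTORE = OFF;
```

The database scoped configuration escape hatch is handy during testing: you keep the new optimizer behavior overall while opting out of one feature for a problem workload.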
2. Accelerated Database Recovery (ADR)
What this is: This is a completely new way for SQL Server to perform database recovery in the event of a transaction rollback, an instance restart or an Availability Group failover. Instead of the sometimes unpredictable and often lengthy wait for database recovery to run, the SQL team has redesigned how recovery works and dramatically decreased how long the process takes.
Why this matters: Anyone who has had to wait for a production SQL Server instance to rollback a long transaction or who has had an unfortunate crash during a large data operation knows how painful it is to just wait for recovery to be finished so you can get back in business. ADR will provide benefits for SLA and RTO all around.
Cost of adoption: None. Activate it and enjoy (one of those again!).
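Turning it on really is a one-liner (database name is a placeholder):

```sql
-- ADR is off by default in SQL Server 2019; enable it per database.
ALTER DATABASE MyDatabase
SET ACCELERATED_DATABASE_RECOVERY = ON;
```

Under the hood, ADR maintains a persistent version store inside the user database, which is what lets rollback become near-instant regardless of transaction size.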
3. AlwaysEncrypted With Secure Enclaves
What this is: This is the next version of AlwaysEncrypted, the encryption technology introduced in SQL Server 2016 that allows transparent column encryption without giving administrators access to the decryption keys. One drawback of the first implementation is that due to SQL Server not being able to decrypt the data, the queries on the SQL side couldn’t do any computations or manipulation of the actual column values. Using the new Secure Enclaves technology, SQL Server can now securely encrypt a portion of memory to perform computations on these encrypted columns without the unencrypted values ever being exposed to the rest of the processes (or administrators).
Why this matters: Security matters and performance matters, as well. Database servers are best equipped for processing large amounts of data so being able to have AlwaysEncrypted and also do complex manipulations is the best of both worlds.
Cost of adoption: If you are already using AlwaysEncrypted then no big changes are necessary other than reconfiguring and re-encrypting the columns of interest. If you are not using AlwaysEncrypted, then now is a good time to investigate, test this feature and see if it’s a good fit for your security requirements.
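The first step is enabling the enclave type at the instance level; after that, the columns of interest must use enclave-enabled keys and randomized encryption. A sketch of the instance-level piece (a value of 1 selects the virtualization-based security enclave; a restart is required):

```sql
-- Enable the VBS enclave type for the instance (requires a restart).
EXEC sys.sp_configure 'column encryption enclave type', 1;
RECONFIGURE;

-- After restarting, verify the setting took effect:
SELECT [value], value_in_use
FROM sys.configurations
WHERE [name] = 'column encryption enclave type';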
4. Memory-Optimized Tempdb Metadata
What this is: The SQL team has made optimizations to the tempdb code so that some of the metadata that can be a bottleneck on tempdb heavy systems can rely completely on memory and be optimized for RAM access.
Why this matters: Large volume, large scale environments that use a lot of tempdb run into this type of bottleneck. Usually, alleviating it requires some sort of refactoring to reduce tempdb usage. With this feature in place, the metadata sits in memory and is optimally accessed, removing the bottleneck entirely.
Cost of adoption: Activate the feature and verify there is an improvement, not much more than that (another one!).
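Activation is a server-level configuration change that takes effect after a restart:

```sql
-- Requires an instance restart to take effect.
ALTER SERVER CONFIGURATION
SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- Verify after the restart (1 = enabled):
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');
```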
5. Query Store Custom Capture Policies
What this is: Query Store is a great performance tuning and trending tool that allows for storing, measuring, and fixing plan regressions inside a SQL Server database. One downside of using it though is that sometimes it can store too much information, even for queries that the DBA might not be interested in or for queries that were part of a system utility or monitoring tool. This new capability of custom policies means you can fine-tune exactly which queries should be tracked based on their execution statistics such as how often they run, the CPU they consume, and more.
Why this matters: Query Store is a great feature but it is not so useful if it consumes a lot of resources and if it’s too bloated to be effective for the DBA. This feature will allow fine-tuning so that it is always efficient and lean and easy to use for fixing plan issues.
Cost of adoption: You will need to sit down and see what type of execution conditions you want to use as a filter for your Query Store. Implementing it is just a matter of using new syntax after that.
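Once you have decided on the conditions, the new CUSTOM capture mode takes a policy block. A sketch with placeholder thresholds (a query is captured once it crosses any of them within the evaluation window):

```sql
-- Thresholds below are illustrative; tune them to your workload.
ALTER DATABASE MyDatabase
SET QUERY_STORE = ON
(
    QUERY_CAPTURE_MODE = CUSTOM,
    QUERY_CAPTURE_POLICY =
    (
        STALE_CAPTURE_POLICY_THRESHOLD = 24 HOURS,  -- evaluation window
        EXECUTION_COUNT = 30,                       -- ran at least 30 times
        TOTAL_COMPILE_CPU_TIME_MS = 1000,           -- or cost this much to compile
        TOTAL_EXECUTION_CPU_TIME_MS = 100           -- or this much CPU to run
    )
);
```

One-off ad-hoc queries that never cross a threshold simply never make it into the store, which keeps it lean.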
6. Verbose Truncation Warnings
What this is: Every single T-SQL developer knows the pain and grind of getting a truncation error. Some value somewhere does not fit the target column’s data type, but you don’t get any details at all. Then it’s a matter of trial and error until you finally figure out which value is the offending one. Not the best experience for what should be a straightforward issue to solve!
Why this matters: Because mental sanity matters. These new messages give you all the details of the data truncation issue so you can just fix it and get on with your day!
Cost of adoption: None. It’s the new default (loving all of these)!
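A quick illustration of the difference (table and value are made up; the exact message text may vary slightly by build):

```sql
CREATE TABLE #Demo (Val varchar(5));
INSERT INTO #Demo VALUES ('this string is too long');
-- Old behavior (Msg 8152): "String or binary data would be truncated." - no details.
-- New behavior under compatibility level 150 (Msg 2628) names the table,
-- the column, and the offending value, along the lines of:
-- String or binary data would be truncated in table '...#Demo',
-- column 'Val'. Truncated value: 'this '.
```

If you are still on an older compatibility level, the verbose message can also be opted into via trace flag 460.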
7. Resumable Index Build
What this is: SQL Server now has the capability to pause an online index build (create or rebuild) in progress, keep the work that has been done so far and resume it at some other point in time.
Why this matters: For some folks, index rebuilds are still necessary and they consume so many resources that even with the ONLINE option, they still have to deal with the reality of maintenance windows. The problem, however, is what happens if you run out of time during your maintenance window? Previously, you would need to cancel your rebuild, wait for a potentially long recovery and then start again from scratch. This new feature gets rid of these problems!
Cost of adoption: Change your index scripts to use the new RESUMABLE option. Pretty easy (and another one!).
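The change to your scripts is small (object names below are placeholders; ONLINE is required for RESUMABLE):

```sql
-- Start a resumable build that will pause itself after 60 minutes.
CREATE INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- If the maintenance window closes early, pause and keep the progress:
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders PAUSE;

-- In the next window, pick up right where it left off:
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders RESUME;
```

Progress of any paused operation can be checked in sys.index_resumable_operations.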
8. Data Virtualization With Polybase
What this is: Polybase is SQL Server’s module that allows fast and parallel T-SQL queries that can go out into external storage (usually HDFS on-prem) and bring the results back seamlessly as a T-SQL result set. With SQL 2019, Polybase is being expanded to support Oracle, Teradata, MongoDB and more.
Why this matters: Data integration is always a challenge and with ever-growing data sets, performance can become an issue. Trying to query and move large amounts of data through a linked server has always been painfully slow as it is not really optimized for this job. Polybase allows SQL Server to become the data hub of an organization by leveraging T-SQL skills and keeping performance as a top priority at the same time.
Cost of adoption: This one definitely requires some work as you would want multiple Polybase nodes, set up the connectivity to your other database platforms and then test the performance of those queries.
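To give a feel for the setup, here is a sketch of exposing an Oracle table through Polybase (all names, the address, and the three-part Oracle location are placeholders for illustration):

```sql
-- Credential used to authenticate against the Oracle instance.
CREATE DATABASE SCOPED CREDENTIAL OracleCred
WITH IDENTITY = 'oracle_user', SECRET = 'oracle_password';

-- Point Polybase at the external Oracle server.
CREATE EXTERNAL DATA SOURCE OracleSales
WITH (LOCATION = 'oracle://145.145.145.145:1521', CREDENTIAL = OracleCred);

-- Map a remote table; it then behaves like a local table in T-SQL.
CREATE EXTERNAL TABLE dbo.OracleOrders
(
    OrderId int,
    Amount  decimal(18, 2)
)
WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);

-- Query it, join it with local tables, no linked server involved:
SELECT TOP (10) * FROM dbo.OracleOrders;
```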
9. Last Actual Execution Plan DMF
What this is: This is a new Dynamic Management Function called sys.dm_exec_query_plan_stats that will track the last ACTUAL execution plan for a query if you enable the lightweight query profiling feature (which you probably should do).
Why this matters: Previously, grabbing an actual query plan required either a Profiler trace, an XEvents trace, or a call to an ephemeral DMF that would lose its contents when the query was done executing. Not the easiest or most convenient mechanisms to do what is pretty much a critical step in any sort of production performance problem scenario.
Cost of adoption: Enable the setting, use the DMF (and another one to just use!).
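In practice it looks like this (the database scoped setting enables capture; the DMF then works against plan handles from the cache):

```sql
-- Enable last actual plan capture for the database.
ALTER DATABASE SCOPED CONFIGURATION
SET LAST_QUERY_PLAN_STATS = ON;

-- Pull the last ACTUAL plan for anything still in the plan cache:
SELECT st.[text], qps.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan_stats(cp.plan_handle) AS qps;
```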
10. Multiple Internal Performance Improvements
What this is: There are multiple internal performance improvements done by the SQL team for this release. There is a new index optimization for indexes that have a sequential key, temp table recompilation improvements, improved indirect checkpoint scalability, and more.
Why this matters: These are all performance improvements that come “out-of-the-box”, optimize common SQL Server processes and require no effort from the client to benefit from them.
Cost of adoption: None. They come built into the engine with the upgrade (ending the list on another freebie!).
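One of these improvements does surface as new syntax worth knowing about: the optimization for sequential keys, which reduces last-page insert contention on ever-increasing keys such as IDENTITY columns. A sketch with placeholder names:

```sql
-- OPTIMIZE_FOR_SEQUENTIAL_KEY helps high-concurrency inserts that all
-- land on the last page of an ever-increasing index key.
CREATE TABLE dbo.Events
(
    EventId bigint IDENTITY(1, 1) NOT NULL,
    Payload nvarchar(400) NOT NULL,
    CONSTRAINT PK_Events PRIMARY KEY CLUSTERED (EventId)
        WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)
);
```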