I’m actually having problems identifying the utility of execution plans when working with natively compiled procedures. Or, put another way, why bother? I’ve posted a couple of times on natively compiled procedures and SQL Server execution plans. I’ve found the differences interesting and enlightening, but I’m seriously questioning why I should bother, at least currently. I’m sure there will be many changes to the behaviors of the natively compiled procedures and their relationship with execution plans. But right now, well, let’s look at an example. I have three simple tables stored in-memory. Here’s the definition of one:
CREATE TABLE dbo.Address
(
    AddressID INT IDENTITY(1, 1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 50000),
    AddressLine1 NVARCHAR(60) NOT NULL,
    AddressLine2 NVARCHAR(60) NULL,
    City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL,
    StateProvinceID INT NOT NULL,
    PostalCode NVARCHAR(15) NOT NULL,
    ModifiedDate DATETIME NOT NULL
        CONSTRAINT DF_Address_ModifiedDate DEFAULT (GETDATE())
)
WITH (MEMORY_OPTIMIZED = ON);
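The procedure below also touches dbo.StateProvince and dbo.CountryRegion. I won't reproduce my exact definitions here, but minimal sketches, patterned on the AdventureWorks columns the query uses (the bucket counts are placeholder guesses), would look something like this:

-- Hypothetical sketches: only the columns the procedure references.
CREATE TABLE dbo.StateProvince
(
    StateProvinceID INT IDENTITY(1, 1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Name NVARCHAR(50) NOT NULL,
    CountryRegionCode NVARCHAR(3) COLLATE Latin1_General_100_BIN2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);

CREATE TABLE dbo.CountryRegion
(
    -- String index keys on memory-optimized tables require a BIN2 collation in 2014.
    CountryRegionCode NVARCHAR(3) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Name NVARCHAR(50) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);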
I can then create the following code as a natively compiled procedure:
CREATE PROC [dbo].[AddressDetails] @City NVARCHAR(30)
    WITH NATIVE_COMPILATION,
         SCHEMABINDING,
         EXECUTE AS OWNER
AS
BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    SELECT a.AddressLine1,
           a.City,
           a.PostalCode,
           sp.Name AS StateProvinceName,
           cr.Name AS CountryName
    FROM dbo.Address AS a
    JOIN dbo.StateProvince AS sp
        ON sp.StateProvinceID = a.StateProvinceID
    JOIN dbo.CountryRegion AS cr
        ON cr.CountryRegionCode = sp.CountryRegionCode
    WHERE a.City = @City;
END
GO
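If you'd rather generate the estimated plan from T-SQL than with Ctrl+L in Management Studio, something like this works (the @City value is just an example):

-- Capture the estimated plan XML without actually executing the procedure.
SET SHOWPLAN_XML ON;
GO
EXEC dbo.AddressDetails @City = N'London';
GO
SET SHOWPLAN_XML OFF;
GO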
When I call for an estimated plan (remember, no actual plans), I'll get this:
If you click on it, you'll note that there's an index scan. But the costs are all zero. Everything is FREE! Or not. The execution time is 93ms. If I put an index on the City column (see the sketch below for what that takes on an in-memory table), the execution plan changes to the one I showed previously, an index seek, and the execution time drops to 42ms. Clearly, the scan was costing something. Scans aren't necessarily bad and seeks aren't necessarily good, but it's hard to spot issues in an execution plan with no costing information at all. Which makes me wonder: should we even bother with execution plans for natively compiled procs? I'm honestly unsure.
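For reference, SQL Server 2014 doesn't support ALTER TABLE on memory-optimized tables, so adding that index on City means dropping and recreating the table, and, because of SCHEMABINDING, dropping the procedure first. A sketch, with an index name of my own invention:

-- Drop dependents first: the natively compiled proc is schema-bound to this table.
DROP PROCEDURE dbo.AddressDetails;
DROP TABLE dbo.Address;
GO
CREATE TABLE dbo.Address
(
    AddressID INT IDENTITY(1, 1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 50000),
    AddressLine1 NVARCHAR(60) NOT NULL,
    AddressLine2 NVARCHAR(60) NULL,
    City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL,
    StateProvinceID INT NOT NULL,
    PostalCode NVARCHAR(15) NOT NULL,
    ModifiedDate DATETIME NOT NULL
        CONSTRAINT DF_Address_ModifiedDate DEFAULT (GETDATE()),
    -- Inline range index; the BIN2 collation on City is what makes it legal in 2014.
    INDEX IX_Address_City NONCLUSTERED (City)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

After that, the data has to be reloaded and the procedure recreated.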
For most query tuning, statistics matter a lot, and their absence from these plans is part of what makes them hard to trust.

On the events front: I understand we still have room in Albany on July 25th. You can register here. I'm doing an all-day session at SQL Connections in September in Las Vegas. Go here to register for this great event. And in Belgium in October, I'll be doing an all-day session on execution plans at SQL Server Days. Go here to register for that event. I'd love to talk query tuning with you all day long.
As always: it depends!
If your environment urges you to squeeze out the last ms (millisecond, in this case 😉), the plan may at least show you which objects were actually used and point you at things you didn't expect.
We're talking in-memory here: reportedly (hearsay on my part) scan speeds currently run 20–70 GB/sec.
I would indeed love to see some statistics info in the SQLPlan too, but the dev team may have chosen not to expose it, given how volatile those numbers are expected to be.
Great read!
Funny how this one was running in milliseconds. The last several times I tested it, it was microseconds. But yeah, I see the point. Honestly, I think we're seeing an artifact of their speed to market. It's also reflected in the limited footprint of T-SQL they support. They shipped it as is rather than adding bells & whistles. That's fine, but I'd like a bell or two to get added.
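For anyone who wants to check those timings on their own system, here's a sketch of how I'd capture them. Note that stats collection for natively compiled procedures is off by default and adds overhead, so flip it back off when you're done:

-- Enable execution-stats collection for natively compiled procedures.
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 1;
GO
EXEC dbo.AddressDetails @City = N'London';
GO
-- The times below are reported in microseconds.
SELECT OBJECT_NAME(ps.object_id) AS proc_name,
       ps.execution_count,
       ps.total_elapsed_time,
       ps.total_worker_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.AddressDetails');
GO
-- Turn collection back off.
EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 0;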