Does generating an Estimated Plan cause that plan to be loaded into the plan cache? No. What? Still here? You want more? Proof? Fine. Let's first run this bit of code (but please, not on your production server):

DBCC FREEPROCCACHE();

That will remove all plans from cache. Now, let's take this query and generate an Estimated Plan (Ctrl+L from your keyboard or by clicking the "Display Estimated Execution Plan" button on the toolbar):

SELECT * FROM Production.ProductModel AS pm;

This will generate a trivial plan showing a scan against the Production.ProductModel table. Now, let's run another query:

SELECT deqs.plan_handle
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE dest.text = 'SELECT * FROM Production.ProductModel AS pm;';

That's just an easy way to see if a plan_handle exists.…
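If you want to see the contrast for yourself, here's a minimal sketch of the other half of the experiment, assuming the same AdventureWorks query: execute the query for real, then run the plan_handle check again. This time a handle should come back, because actual execution, unlike generating an Estimated Plan, does put the plan in cache.

-- this time, execute the query rather than just estimating the plan
SELECT * FROM Production.ProductModel AS pm;
GO

-- the same check as before; now a plan_handle should appear
SELECT deqs.plan_handle
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE dest.text = 'SELECT * FROM Production.ProductModel AS pm;';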
Simple parameterization occurs when the optimizer determines that a query would benefit from a reusable plan, so it takes the hard-coded values and converts them to a parameter. Great stuff. But... Let's take this example. Here's a very simple query:

SELECT ct.*
FROM Person.ContactType AS ct
WHERE ct.ContactTypeID = 7;

This query results in simple parameterization and we can see it in the SELECT operator of the execution plan: We can also see the parameter that was defined in use in the predicate of the seek operation: Hang on. Who the heck put the wrong data type in there that's causing an implicit conversion? The query optimizer did it. Yeah. Fun stuff. If I change the predicate value to 7000 or 700000 I'll get two more plans and I…
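One way to see those separate parameterized plans for yourself is to pull the parameterized text straight out of cache. A rough sketch, assuming the AdventureWorks query above (the LIKE patterns are my own guess at the shape of the auto-parameterized text):

-- auto-parameterized statements show up with objtype = 'Prepared'
-- and text that starts with the parameter declaration, e.g. (@1 tinyint)...
SELECT decp.objtype,
       decp.usecounts,
       dest.text
FROM sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
WHERE dest.text LIKE '(@1%ContactType%';

You should see one cached entry per data type the optimizer chose for the literal: tinyint for 7, smallint for 7000, int for 700000, which is exactly where that implicit conversion comes from.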
There's an old joke that goes, "Doctor, doctor, it hurts when I do this," while the person in question swings their arm over their head. The doctor's response is, "Don't do that." Problem solved, right? Well, maybe not. Let's take a quick example from life. I do CrossFit (yeah, I'm one of those; pull up a chair and I'll tell you all about my clean & jerk progress... kidding). I've been experiencing pain in my shoulder. "It hurts when I do this." But I'm not going to stop. I've been working with my coach to identify where the pain is and what stretches and warm-ups I can do to get around it (assuming it's not a real injury, and it isn't). In short, we're identifying the root cause and addressing the…
Statistics are one of the most important driving factors for the behavior of the query optimizer. The cardinality estimates stored within the statistics drive costing, and costing drives the decision making of the optimizer. So, how does this work with the new SQL Server 2014 natively compiled procedures? Differently. In-memory tables do not maintain their statistics automatically. Further, you can't run DBCC SHOW_STATISTICS to get information about those statistics, so you can't tell whether they're out of date or what the distribution of the data within them looks like. So, if I create some memory-optimized tables, skip loading any data into them, and then run this standard query:

SELECT a.AddressLine1,
       a.City,
       a.PostalCode,
       sp.Name AS StateProvinceName,
       cr.Name AS CountryName
FROM dbo.Address AS a
JOIN dbo.StateProvince AS sp
    ON sp.StateProvinceID =…
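Since those statistics won't maintain themselves, here's a sketch of the manual maintenance you're signing up for with memory-optimized tables, assuming SQL Server 2014's restrictions, where a sampled update isn't allowed and NORECOMPUTE is required:

-- statistics on memory-optimized tables must be updated by hand;
-- in SQL Server 2014 that means FULLSCAN (or RESAMPLE) plus NORECOMPUTE
UPDATE STATISTICS dbo.Address
WITH FULLSCAN, NORECOMPUTE;

Until you do this, after loading data, the optimizer is costing your natively compiled procedures against whatever estimates existed when the table was empty.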
I'm actually having problems identifying the utility of execution plans when working with natively compiled procedures. Or, put another way, why bother? I've posted a couple of times on natively compiled procedures and SQL Server execution plans. I've found the differences interesting and enlightening, but I'm seriously questioning why I should bother, at least currently. I'm sure there will be many changes to the behaviors of the natively compiled procedures and their relationship with execution plans. But right now, well, let's look at an example. I have three simple tables stored in-memory. Here's the definition of one:

CREATE TABLE dbo.Address
(
    AddressID INT IDENTITY(1, 1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 50000),
    AddressLine1 NVARCHAR(60) NOT NULL,
    AddressLine2 NVARCHAR(60) NULL,
    City NVARCHAR(30) COLLATE Latin1_General_100_BIN2 NOT NULL,
    StateProvinceID INT…
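For what it's worth, you can still get an estimated plan out of a natively compiled procedure even though an actual plan isn't available. A minimal sketch, assuming the dbo.AddressDetails procedure referenced elsewhere in these posts (the @City value is just a placeholder):

SET SHOWPLAN_XML ON;
GO
-- returns the estimated plan XML without executing the procedure
EXEC dbo.AddressDetails @City = N'London';
GO
SET SHOWPLAN_XML OFF;
GO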
All the wonderful functionality that in-memory tables and natively compiled procedures provide in SQL Server 2014 is pretty cool. But changes to the core of the engine result in changes to things we may have developed a level of comfort with. In my post last week I pointed out that you can't see an actual execution plan for natively compiled procedures. There are more changes than just the type of execution plan available. There are also changes to the information available within the plans themselves. For example, I have a couple of stored procedures, one running in AdventureWorks2012 and one in an in-memory enabled database with a few copies of AdventureWorks tables:

--natively compiled
CREATE PROC dbo.AddressDetails @City NVARCHAR(30)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL…
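The interpreted twin isn't shown in the excerpt above, so here's a minimal sketch of what it might look like; the body is my assumption, mirroring the natively compiled version's parameter and the address columns used elsewhere in these posts, not the actual procedure from the original article:

--interpreted, for comparison (hypothetical body)
CREATE PROC dbo.AddressDetails @City NVARCHAR(30)
AS
SELECT a.AddressLine1,
       a.City,
       a.PostalCode
FROM Person.Address AS a
WHERE a.City = @City;

Comparing the plans from the two versions side by side is where the differences in available plan information show up.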
Ever had that moment where you start getting errors from code that you've tested a million times? I had that one recently. I had this little bit of code for pulling information directly from query plans in cache:

WITH XMLNAMESPACES(DEFAULT N'http://schemas.microsoft.com/sqlserver/2004/07/showplan'),
QueryPlans AS
(
    SELECT RelOp.pln.value(N'@PhysicalOp', N'varchar(50)') AS OperatorName,
           RelOp.pln.value(N'@NodeId', N'integer') AS NodeId,
           RelOp.pln.value(N'@EstimateCPU', N'decimal(10,9)') AS CPUCost,
           RelOp.pln.value(N'@EstimateIO', N'decimal(10,9)') AS IOCost,
           dest.text
    FROM sys.dm_exec_query_stats AS deqs
    CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
    CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
    CROSS APPLY deqp.query_plan.nodes(N'//RelOp') RelOp (pln)
)
SELECT qp.OperatorName,
       qp.NodeId,
       qp.CPUCost,
       qp.IOCost,
       qp.CPUCost + qp.IOCost AS EstimatedCost
FROM QueryPlans AS qp
WHERE qp.text = 'some query or other in cache'
ORDER BY EstimatedCost DESC;

I've probably run this... I don't know how many times. But... I'm suddenly getting an error:

Msg 8114, Level 16, State 5,…
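The excerpt cuts off before the diagnosis, but if you hit the same thing, my bet (an assumption on my part) is that some plan in cache now carries an estimated cost that won't fit in decimal(10,9), say a value in scientific notation or one with more than one digit before the decimal point. One defensive fix is to widen the target type in the two .value() calls:

-- swap the decimal(10,9) casts for float, which handles
-- scientific notation and large cost values alike
RelOp.pln.value(N'@EstimateCPU', N'float') AS CPUCost,
RelOp.pln.value(N'@EstimateIO', N'float') AS IOCost,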
No, I don't mean the use of sp_updatestats is not smart. It's a fine, quick mechanism for getting statistics updated in your system. But the procedure itself is not smart. I keep seeing stuff like "sp_updatestats knows which statistics need to be updated" and similar statements. Nope. Not true. Wanna know how I know? It's tricky. Ready? I looked at the query. It's there, in full, at the bottom of the article (2014 CTP2 version, just in case yours is slightly different, like, for example, no Hekaton logic). Let's focus on just this bit:

if ((@ind_rowmodctr <> 0) or ((@is_ver_current is not null) and (@is_ver_current = 0)))

The most interesting part is right at the front, @ind_rowmodctr <> 0. That value is loaded with the cursor and comes from sys.sysindexes and the rowmodctr column…
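If you'd rather see for yourself which statistics actually have churn, a sketch using sys.dm_db_stats_properties (available from SQL Server 2008 R2 SP2 and 2012 SP1 onward) instead of the old rowmodctr column:

-- show statistics with any modifications since their last update
SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name AS StatsName,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;

Note that this shows the same kind of "any change at all" signal sp_updatestats keys off of; deciding whether a given statistic is stale enough to matter is still on you.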
Earlier this week I introduced the concept of Managed Backups (and caused less turmoil than I thought I would). Now I want to show you how it works. It's really simple and quite well documented. Before you get to the insanely simple task of actually enabling Managed Backup, you will need to go through the prerequisites. First, and this should be obvious, but I'll state it just in case: you need to set up an Azure storage account. That's so straightforward that I'm not going to say more. Then, you have to set up encryption on your system. I used these commands to prep it:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '$qlserver2012queryperformancetuning';

CREATE CERTIFICATE CloudDojoCert
WITH SUBJECT = 'Backup Encryption Certificate';

Again, shouldn't have to…
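For reference, the enabling step itself ends up being a single call. A sketch, assuming the SQL Server 2014 CTP2 smart_admin interface, a SQL credential already created for the storage account, and the certificate above; the database and credential names are placeholders:

EXEC msdb.smart_admin.sp_set_db_backup
    @database_name = N'AdventureWorks2012',
    @retention_days = 30,
    @credential_name = N'MyAzureCredential', -- placeholder credential name
    @encryption_algorithm = 'AES_128',
    @encryptor_type = 'CERTIFICATE',
    @encryptor_name = 'CloudDojoCert',
    @enable_backup = 1;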
It's kind of fun to see Azure development artifacts on display. I've posted about them before, a couple of times. I'm starting to finally get systematized about the whole thing, just so I can see things as they change rather than discover them by accident or get told about them by someone else. Here's a little query I'm running to see when system views were last modified:

SELECT av.name,
       av.create_date,
       av.modify_date
FROM sys.all_views AS av
ORDER BY av.modify_date DESC;

The most recent stack of changes is here: I'll keep an eye on them to see what I can spot about interesting new functionality. I also compared the listing of all views in Azure to those on a SQL Server 2012 instance and came up with a list of differences. These…
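If you want to reproduce that comparison, a rough sketch: capture the view names from each instance into tables on one box and let EXCEPT do the work. The table names here are made up for illustration:

-- after loading each instance's sys.all_views names into two
-- local tables, views present in Azure but not in SQL Server 2012:
SELECT a.name
FROM dbo.AzureViews AS a
EXCEPT
SELECT s.name
FROM dbo.Sql2012Views AS s;

Flip the two SELECTs around to get the list going the other direction.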