What? Execution plans and the GDPR? Is this it? Have I completely lost it? Well, no, not on this topic. Keep reading so I can defend myself.

GDPR and Protected Data

The core of the GDPR is to ensure the privacy and protection of a "natural person's" information. As such, the GDPR defines what personal data is and what processing means (along with a bunch of additional information). It all comes down to personally identifying (PI) data, how you store it, and how you process it. More importantly, it's about the right of the individual, the natural person, to control their information, up to and including the right to be forgotten by your system. OK. Fine. And execution plans?

Execution Plans and PI Data

If you look at an execution…
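To make the connection concrete, here is a minimal sketch (an illustration, not the post's own query) of one place PI data can surface: the XML of a cached plan includes a ParameterList element whose ParameterCompiledValue attributes hold the sniffed values the optimizer compiled against.

-- Illustrative only: pull the first compiled (sniffed) parameter value out of each
-- cached plan; those values may be personally identifying data.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT deqp.query_plan.value('(//ParameterList/ColumnReference/@Column)[1]', 'nvarchar(128)') AS parameter_name,
       deqp.query_plan.value('(//ParameterList/ColumnReference/@ParameterCompiledValue)[1]', 'nvarchar(max)') AS compiled_value
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
WHERE deqp.query_plan.exist('//ParameterList') = 1;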
All the execution plans are estimated plans. All of them. There fundamentally isn't any such thing as an "Actual" plan.

Where Do You Get Execution Plans?

There are a lot of sources for execution plans. You can capture them using Extended Events (or, if you must, Trace). You can capture them through the Management Studio GUI. You can also capture them from the SQL Operations Studio GUI. You can query the cache through the DMVs and pull them in that way. You can look at plans in Query Store. All these resources, yet, for any given query, all the plans will be identical (assuming no recompile at work). Why? Because they're all the same plan. Each and every one of them is an estimated plan. Only an estimated plan. This…
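As a rough sketch (mine, with no particular query in mind), here are two of those sources side by side; barring a recompile, the plan XML you get back from each is the same estimated plan.

-- From the plan cache, through the DMVs.
SELECT deqp.query_plan
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp;

-- From the Query Store (SQL Server 2016 and later), where the plan is stored as text.
SELECT CAST(qsp.query_plan AS XML) AS query_plan
FROM sys.query_store_plan AS qsp
JOIN sys.query_store_query AS qsq
    ON qsq.query_id = qsp.query_id;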
In Azure SQL Database for quite some time, and now available in SQL Server 2017, Microsoft has put a lot of the knowledge they've gleaned from running more databases than any of the rest of us ever will to work in Automatic Tuning.

Automatic Tuning

The core of automatic tuning at this point in time (because I'm sure it's going to evolve) is the ability of the query engine to spot when a query has generated a new plan and that new plan is causing performance to degrade. This is known as a regression in the plan. It comes from bad parameter sniffing, changes in statistics, cumulative updates, or the big notorious one, the cardinality estimator introduced in SQL Server 2014 (it's been almost four years, I'm not calling it…
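Here's a minimal sketch, assuming SQL Server 2017 or Azure SQL Database, of what putting this to work looks like: turn on automatic plan correction for the database, then read the engine's regression findings from the tuning recommendations DMV.

-- Enable automatic plan correction for the current database (2017+ / Azure SQL Database).
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- See what the engine has identified as a plan regression and what it recommends.
SELECT reason,
       score,
       details
FROM sys.dm_db_tuning_recommendations;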
There's a war on in the SQL Server world. On the one side is Profiler (although, really, everyone uses Trace Events). On the other, the "new" (they came out in 2008, with a full GUI in 2012, so...) Extended Events. Lots of people have picked sides on this, including Microsoft.

New Trace Events

There are none. All the new functionality of every sort, from Availability Groups to Query Store to R & Python, has Extended Events created for it. Trace Events, and the technologies supporting them in the form of Profiler, are a dead end. Don't fear. While Trace is on the deprecation list, there doesn't appear to be any danger of that technology being removed completely. At least it won't be removed in the foreseeable future. A future which,…
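For comparison, here's a minimal sketch of the Extended Events equivalent of a basic Profiler-style capture; the session name and target file name are placeholders, not anything from a real system.

-- A bare-bones Extended Events session capturing completed batches and RPC calls.
CREATE EVENT SESSION QueryCapture
ON SERVER
ADD EVENT sqlserver.sql_batch_completed,
ADD EVENT sqlserver.rpc_completed
ADD TARGET package0.event_file (SET filename = N'QueryCapture.xel');

-- Start the session.
ALTER EVENT SESSION QueryCapture ON SERVER STATE = START;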
OK guys. I think it's way past time. A bunch of us have been keeping a secret from the rest of you. We know something that you don't. I don't think we should hide this secret from the world any more. Illuminati? Incompetents. Freemasons? I am one, so I already know all those secrets. Bilderbergers, Cthulhu Cultists, MKUltra, New World Order, Rotarians? All of these are nothing compared to the vast conspiracy that I'm about to reveal. We need to just unveil the magic "Run Really Fast" button. We've been keeping that sucker a secret forever. It's been tough. Every so often some unauthorized person almost finds it, or a "query tuning expert" (as if that were a real thing) tries to reveal it. But we've kept it secret…
One of my favorite things about being a technologist is constantly learning new things, but this can lead us to forget about the fundamentals. More importantly, in our pursuit of the latest and greatest things, it's very easy for those of us who teach to forget to reach back and pull others forward. With this in mind, I'm going to launch a new blog series called Database Fundamentals.

Database Fundamentals

The goal here is simple. I'm going to talk about the basics. Creating a database. Creating tables. Inserts, selects, primary keys, and on and on. I have a bunch of material accumulated around these topics. I may as well share as much of it as I can. I will continue posting information about all the fun cutting-edge stuff I get to…
One question constantly comes up: what should the Cost Threshold for Parallelism be? The default value of 5 is pretty universally denigrated (well, not by Microsoft, but by most everyone else). However, what value should you set yours to?

What Do Your Plans Cost?

I have a question right back at you: what do your plans currently cost? Let's say, for argument's sake, that all your plans have an estimated cost (and all plan costs are estimates, let's please keep that in mind, even on Actual plans) value of 3 or less. Do you need to adjust the cost threshold in this case? Probably not. But the key is, how do you look at the costs for your plans? Unfortunately, there isn't a property in a DMV that shows this value. Instead,…
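The original approach is cut off above, but as a hedged sketch of one way to get at those numbers: shred the cached plan XML and read the StatementSubTreeCost attribute, which is the estimated cost the cost threshold for parallelism is compared against.

-- Rough sketch: estimated cost of the first statement in each cached plan,
-- highest first. A fuller version would shred every statement in the batch.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT deqp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS estimated_cost
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
ORDER BY estimated_cost DESC;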
While presenting recently and talking about dealing with bad parameter sniffing, I got the question: what happens to OPTIMIZE FOR hints when parameter sniffing is disabled? This is my favorite kind of question, because the answer is simple: I don't know.

Parameter Sniffing

For those who don't know, parameter sniffing is when SQL Server uses the precise values passed into a query as parameters (this means stored procedures or prepared statements) to generate an execution plan from the statistics for those specific values. Most of the time, parameter sniffing is either helping you or is not hurting you. Sometimes, parameter sniffing turns bad and hurts you quite severely. Usually, but not always, this is because you either have severely skewed data (some data is very different from the rest, lots…
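For reference, here is a minimal sketch of the two pieces the question puts together: a parameterized query carrying an OPTIMIZE FOR hint, and the database scoped configuration (SQL Server 2016 and later) that disables parameter sniffing. The procedure, table, and value are made up for illustration.

-- Hypothetical procedure with an OPTIMIZE FOR hint pinning the compile-time value.
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer (@CustomerID INT)
AS
SELECT OrderID,
       OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID = 42));
GO

-- The database-level switch that turns parameter sniffing off.
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;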
There are a bunch of ways you could create a database clone. Backup and restore is one method. Export/import is another. There are even third-party tools that will help with that. However, each of these has a problem: it's moving all the data, not just once, but twice. You move the data when you export it, and you move it again when you import it (same thing with backup and restore). That makes these methods slow for larger databases. How can you create a database clone without moving the data multiple times?

Don't Move the Data At All

New with SQL Server 2016, expanded in SP1, and added to SQL Server 2014 SP2 is a command, DBCC CLONEDATABASE. This is like a dream come true. The use is extremely…
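As a quick sketch of the command itself (the database names are made up), the clone ends up with the schema and the statistics, but none of the data, so nothing large gets moved.

-- Create a data-free clone of a database; names are illustrative.
DBCC CLONEDATABASE (AdventureWorks2014, AdventureWorks2014_Clone);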
Monday

I got in on Sunday and chose to have a small dinner with a couple of friends: quiet, preparing. Monday was a less hectic day than the others. The Board had the morning off, although Redgate had me go and give a session at an event. Monday afternoon was one of our three in-person board meetings. The minutes will be published soon. I was responsible for running the meeting. I also presented two topics: first, and most importantly, our current financial status. Then I presented the initial set of thoughts towards some SMART goals for Global Growth, which I will share once they are further developed. Monday evening I had two events I had to attend. First, as part of the Executive Committee, I attended the kick-off dinner…