I was recently asked if we are going to see performance differences if we explicitly drop temporary tables. I couldn’t remember the specifics, but I said it actually didn’t matter. However, that answer has bugged me, so I set up a quick test.
Explicitly Drop Temporary Tables
We could make this a crazy set of tests, but I wanted to keep things relatively simple. I created two procedures that create identical temporary tables. One drops the tables, the other doesn’t:
CREATE PROC dbo.BOMDrop
AS
CREATE TABLE #BOMData (
    BOMLevel SMALLINT,
    PerAssemblyQty DECIMAL(8, 2),
    ComponentName VARCHAR(50),
    ProductAssemblyName VARCHAR(50),
    UnitMeasure VARCHAR(50)
);

INSERT #BOMData (BOMLevel, PerAssemblyQty, ComponentName, ProductAssemblyName, UnitMeasure)
SELECT bom.BOMLevel,
       bom.PerAssemblyQty,
       c.Name,
       pa.Name,
       um.Name
FROM Production.BillOfMaterials AS bom
JOIN Production.Product AS pa
    ON pa.ProductID = bom.ProductAssemblyID
JOIN Production.Product AS c
    ON c.ProductID = bom.ComponentID
JOIN Production.UnitMeasure AS um
    ON um.UnitMeasureCode = bom.UnitMeasureCode;

SELECT COUNT(*)
FROM #BOMData AS bd;

DROP TABLE #BOMData;
GO

CREATE PROC dbo.BOMNoDrop
AS
CREATE TABLE #BOMData (
    BOMLevel SMALLINT,
    PerAssemblyQty DECIMAL(8, 2),
    ComponentName VARCHAR(50),
    ProductAssemblyName VARCHAR(50),
    UnitMeasure VARCHAR(50)
);

INSERT #BOMData (BOMLevel, PerAssemblyQty, ComponentName, ProductAssemblyName, UnitMeasure)
SELECT bom.BOMLevel,
       bom.PerAssemblyQty,
       c.Name,
       pa.Name,
       um.Name
FROM Production.BillOfMaterials AS bom
JOIN Production.Product AS pa
    ON pa.ProductID = bom.ProductAssemblyID
JOIN Production.Product AS c
    ON c.ProductID = bom.ComponentID
JOIN Production.UnitMeasure AS um
    ON um.UnitMeasureCode = bom.UnitMeasureCode;

SELECT COUNT(*)
FROM #BOMData AS bd;
GO
I then set up Extended Events to capture the query metrics and executed each procedure multiple times (for the test, I also discarded the results, because I didn't want that process mucking with my measurements). After executing both procedures 500 times, the results were quite simple. The average execution time with an explicit drop was 8,672 microseconds; the average without the drop was 8,530 microseconds. That's a difference of less than 2% across hundreds of executions. Reads were identical, and so were writes.
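For anyone who wants to reproduce the measurement, a session along these lines will capture the per-call duration, reads, and writes. This is a sketch, not the exact session I used; the session name, database name, and file path are all placeholders you'd adjust for your own server:

```sql
-- Minimal Extended Events session to capture procedure execution metrics.
-- Session name, database filter, and file path are assumptions; change as needed.
CREATE EVENT SESSION TempTableDropTest
ON SERVER
    ADD EVENT sqlserver.rpc_completed
    (WHERE sqlserver.database_name = N'AdventureWorks')
    ADD TARGET package0.event_file
    (SET filename = N'C:\PerfData\TempTableDropTest.xel');
GO

ALTER EVENT SESSION TempTableDropTest ON SERVER STATE = START;
```

The rpc_completed event carries duration, logical_reads, and writes columns, so averaging across the 500 executions is just a matter of querying the event file afterwards.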
In short, the behavior is the same.
What about the impact on the system? Could I see changes in memory or I/O as these different processes ran? Nope. Identical behaviors in all ways.
So what is happening?
I’m going to leave it to Paul White to explain the details, but, essentially, since SQL Server 2005, temporary objects are cached in a way that explicitly dropping (or truncating) one doesn’t immediately remove it from the cache. In short, they get cleaned up much the same way as a temporary table that you don’t explicitly drop.
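One way to see the caching in action (my own illustrative check, not part of the original test): if the temporary table is being cached, repeated executions of the procedure should not keep incrementing the ‘Temp Tables Creation Rate’ counter after the first run.

```sql
-- Snapshot the counter, run the procedure a few times, then snapshot again.
-- If caching is working, the counter barely moves between snapshots.
SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Temp Tables Creation Rate%';

EXEC dbo.BOMNoDrop;
EXEC dbo.BOMNoDrop;
EXEC dbo.BOMNoDrop;

SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Temp Tables Creation Rate%';
```

Note that counter_name values in that DMV are padded with trailing spaces, which is why the LIKE comparison is safer than equality.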
Conclusion
I would not say that this means you shouldn’t drop a temporary table within code if you’re done with it. However, I would say that you do not need an explicit drop for all your temporary tables. I’d go with a simple rule: if you’re done with a temporary table and still have a lot more processing to do, drop it. If you use a temporary table right to the end of a procedure, leave it there. It’ll get cleaned up in due course, and your drop isn’t going to affect things in that case.
I always explicitly drop temp tables in procs. It’s necessary for testing and maintenance, so removing the DROP statements doesn’t seem like a good idea.
It entirely depends on your code and the needs of it. Nothing I wrote above should change that. I just wanted to know how performance was affected.
I learned a long time ago that those cached versions of the temp table will be reused on the next execution of the proc, so it’s advantageous to leave them in place. This may account for the small improvement you noted in your post. Since SQL Server will clean up after itself, and there’s a possibility of better performance by leaving it, I opt for that.
I’m 100% with Allen on this, HOWEVER you also need to create those #temp tables in a way that SQL Server can actually reuse them. In short, no named constraints, and no ALTER TABLE statements afterwards. Bundle everything in the CREATE TABLE, with no naming, and you should set SQL Server up to reuse them.
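To illustrate Kevin’s point (this sketch is mine, not part of his comment), here are the patterns that allow or prevent temp table caching:

```sql
-- Cacheable: everything declared inline in CREATE TABLE, constraints unnamed.
CREATE TABLE #Cacheable (
    ID INT NOT NULL PRIMARY KEY,    -- unnamed constraint: caching still works
    Amount DECIMAL(8, 2) NOT NULL
);

-- Not cacheable: a named constraint ties the object to a specific name.
CREATE TABLE #NotCacheable1 (
    ID INT NOT NULL CONSTRAINT PK_MyTempTable PRIMARY KEY
);

-- Not cacheable: DDL against the table after creation also disqualifies it.
CREATE TABLE #NotCacheable2 (ID INT NOT NULL);
ALTER TABLE #NotCacheable2 ADD Amount DECIMAL(8, 2) NULL;
```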
No arguments from me on this. I generally leave them alone just because I’m lazy. Ha! However, I’m sure there are situations where dropping them explicitly is the correct answer. It just depends on the situation and, on testing to validate which works better in that given situation. As a general rule though, I’m good with leaving them for the cleanup. Especially based on the info that Tom added below.
I love it when folks test assumptions, it gets rid of cargo cult mentalities. Thanks!
You know me, I live to serve.
[…] Grant Fritchey looks at what difference explicitly dropping temporary tables in a procedure makes: […]
In their breakout session at Ignite last month, Pedro and Pam mentioned that you should NOT explicitly drop temp tables in code, as that COULD cause extra metadata contention. The link to the session is here: https://myignite.techcommunity.microsoft.com/sessions/64679, but you need a login to view. The comment is about 1:01:20 in, during the part on tempdb optimization.
That’s EXCELLENT information. Thanks for adding to the discussion.
The video Tom linked is also now available on the MS Ignite YouTube page (with no login required):
https://www.youtube.com/watch?v=vKvnIa6S-nQ
I’m a DBA and I experienced the metadata contention problem in a high transaction environment with developer code that explicitly dropped temp tables. In a low volume environment where executions are not likely to overlap you won’t see it. So this is a very dangerous practice in a high volume environment. Be really careful doing this.
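A quick way to check whether you’re hitting this (my own diagnostic sketch, not something from the session): look for PAGELATCH waits on tempdb pages while the workload runs. Waits that resolve to tempdb system catalog pages point at metadata contention rather than ordinary data-page contention.

```sql
-- Sessions currently waiting on page latches in tempdb (database_id 2).
-- resource_description is db_id:file_id:page_id, e.g. 2:1:118.
SELECT owt.session_id,
       owt.wait_type,
       owt.wait_duration_ms,
       owt.resource_description
FROM sys.dm_os_waiting_tasks AS owt
WHERE owt.wait_type LIKE N'PAGELATCH%'
      AND owt.resource_description LIKE N'2:%';
```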
Just so I’m clear, by “this” you mean an explicit drop of the temporary table?
To me, the best argument for dropping temp tables during the process is that it provides additional clarity to someone else reviewing and working with the code: the table is no longer necessary and need not be considered in subsequent processing.