While I have not yet signed the contract, I have submitted an outline and proposal for a new version of my book on query performance tuning. Most of the information in the existing book is still valid and immediately applicable. However, some of it is out of date. Other pieces can be tweaked to tell a better story. A little bit of it is just wrong or has aged out of applicability. Because of all this, I'm not simply going to update the existing book. Instead, this time, it's a complete, from-scratch rewrite. All the way.
I’m planning to drop the chapters on hardware entirely. I’m doing this for a bunch of reasons. One, hardware has changed radically over the years. Of all the information in the book, these chapters have aged the most. Two, frankly, I don’t do a great job on the hardware information because it’s not my area of expertise. So, instead of giving out information that might be dubious, I’m chucking it. Three, with the insane disparity of approaches to hosting your databases and servers these days, from Linux, to the cloud platforms, to containers, to good old-fashioned big iron, I won’t be able to cram enough information in there to be helpful without making the book about six times its size.
Nope, instead, I’m going to focus down on just queries, query tuning, internals, execution plans, indexes, statistics, code, code, code, code and maybe a little T-SQL code.
Watch these pages for updates as we go along. Also, please, feel free to tell me what’s wrong with the current edition of the book. Share with me your favorite tuning tips and maybe they’ll show up in the book (with appropriate attribution, of course).
There goes any semblance of free time in the evenings. Ha!
This is great news. I value your book and go back to it from time to time.
Thanks! As far as I know, the majority (and I mean 90%+) of the 2017 version of the book is still completely applicable, so it should still be helpful.
There are a few basics I’m starting with nowadays. First one: how much data are you selecting? Three million rows for your report? Good luck; go think of a filter or aggregate before you come back to me. One hundred million records for your ETL? Show me your business case for those numbers.
Second, what are your data types? Trying to get an age into a bigint? Or a three-character type code into an nvarchar(2000)? Good luck; go redesign your ETL/ELT process first.
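A quick sketch of the kind of mismatch I mean (table and column names here are made up for illustration). Storing a tiny code in an oversized Unicode column wastes space, and comparing it to a mismatched type in a query can trigger an implicit conversion that blocks an index seek:

```sql
-- Hypothetical example: oversized and mismatched types.
CREATE TABLE dbo.Shipment
(
    ShipmentID   int IDENTITY(1, 1) PRIMARY KEY,
    CustomerAge  bigint,          -- an age fits comfortably in tinyint
    TypeCode     nvarchar(2000)   -- a three-character code; char(3) would do
);

-- If TypeCode were varchar(3) and the query passed an nvarchar literal,
-- data type precedence forces an implicit convert on the column side,
-- which can turn an index seek into a scan:
SELECT ShipmentID
FROM dbo.Shipment
WHERE TypeCode = N'ABC';  -- match the parameter type to the column type

-- The tightened version of the same table:
-- CustomerAge tinyint, TypeCode char(3)
```

The general rule: size the column to the data, and make sure query parameters use the same type as the column they filter.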
This is of course the blunt, short version that only the most stubborn, hard-of-hearing people get to hear. Most of the time these issues are fixed and I can dive into my own metrics (disk latency, wait stats) and the cool stuff Query Store has on offer.
Yep. The hard part is always the people.
Let us know when hard copies are available!