I'm at the just-barely-scratching-the-surface level of getting started with AWS deployment pipelines. Of course, the first thing I want to do with them is get a database deployed. A couple of web searches and I find this bit of documentation from the AWS team. Perfect. Not only is this using AWS tools all the way from Commit (source control) to Build (automation) to Deploy (pipelines), but it's using Flyway for the magic sauce of the database deployment (database deployments need magic sauce). Because I'm just learning, it actually took me two days to get to the point where this code was working. Or rather, to where it was supposed to work. There's one small bit missing or changed since that article was published. If you're attempting this,…
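To make the Flyway piece concrete, here's a minimal sketch of the kind of versioned migration script a pipeline like this would deploy. The table and file name are hypothetical, but the V&lt;version&gt;__&lt;description&gt;.sql naming convention is how Flyway discovers migrations and orders them:

```sql
-- V1__Create_Person_Table.sql (hypothetical file name; Flyway applies
-- scripts named V<version>__<description>.sql in version order and
-- records each one in its schema history table)
CREATE TABLE Person
(
    PersonID INT NOT NULL PRIMARY KEY,
    FirstName VARCHAR(50) NOT NULL,
    LastName VARCHAR(50) NOT NULL
);
```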
The general idea for this question came from dba.stackexchange.com: could we get row counts after execution and, if we can, how? I was intrigued by the idea, so I ran some tests and did a little digging. I boiled it all down in the answer at the link, but I figured I could share a little here as well. Properly Retrieve Row Counts After Execution The right way to do this is obvious and simple. Before you need it, set up an Extended Events session. Done. The only question is what goes into the session. At first blush, sql_batch_completed and/or rpc_completed. Both will return a rows affected value. Interestingly, the row_count value is documented as rows returned; in practice, it's both. But, if you really want to get picky, batches and…
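As a sketch of what I mean, here's a minimal session definition. I'm assuming a file target; the session and file names are mine, so adjust to taste:

```sql
-- a minimal Extended Events session for capturing row counts after execution;
-- both events carry a row_count column in their payload
CREATE EVENT SESSION RowCounts
ON SERVER
    ADD EVENT sqlserver.rpc_completed,
    ADD EVENT sqlserver.sql_batch_completed
    ADD TARGET package0.event_file
        (SET filename = N'RowCounts')
WITH (STARTUP_STATE = OFF);

-- start it before you need it
ALTER EVENT SESSION RowCounts ON SERVER STATE = START;
```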
I know I'm a weirdo. I've always been a weirdo. When I was a DBA (now I only play one on TV), I was a weirdo too. Case in point: ORM tools. Whether we're talking nHibernate, Linq, or Entity Framework, the degree of loathing for these tools by most DBAs is really hard to measure. Yet, after an initial period of difficulty (here are some ancient blog posts documenting that pain), I've come to believe that code generation tools are a very important part of what we do. Further, that they are not evil, or wrong, or bad. Let's talk about this just a little. A Tale of Two Teams At my previous employer, there was a degree of friction between the developers and the DBAs (shocking, right?). Both sides…
I wrote a short blog post about the misperception that Profiler was easier than Extended Events when it came to the core concept of "click, connect, BOOM, too much data". Go read it if you like, but I don't think it's actually an effective argument for how much easier Extended Events is than Profiler. Here, we're going to drill down on that concept in a real way. Let's start with a little clarification. I'm going to be a little lazy with my language. Trace is a scripted capture of events on a server. Profiler is a GUI for consuming a Trace, either live or from a file, and for creating Trace events. However, almost everyone refers to 'Profiler' when they mean either Trace or Profiler. I may do the same…
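To show the distinction in code, here's a minimal scripted Trace, no Profiler GUI involved. The file path is an assumption (it has to be somewhere the SQL Server service account can write, it must not already exist, and Trace appends the .trc extension itself):

```sql
DECLARE @TraceID int;

-- create the trace definition (0 = no special options, 5 MB max file size)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyTrace', 5;

-- event 12 = SQL:BatchCompleted; columns 1 = TextData, 13 = Duration
EXEC sp_trace_setevent @TraceID, 12, 1, 1;
EXEC sp_trace_setevent @TraceID, 12, 13, 1;

-- status 1 = start the trace
EXEC sp_trace_setstatus @TraceID, 1;
```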
The first time you see a new execution plan that you're examining to fix a performance problem, something broken, whatever, you should always start by looking at the first operator. First Operator The first operator is easily discerned (with an exception). It's the very first thing you see in a graphical execution plan, at the top, on the left. It says SELECT in this case: This is regardless of how you capture the execution plan (with an exception). Whether you're looking at an execution plan from the plan cache, Query Store, or through SSMS, the execution plan, regardless of complexity, has this first operator. In this case, it says UPDATE: If you get an execution plan plus runtime metrics (previously referred to as an "actual" execution plan), you'll still see…
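If you want to see that first operator for yourself, one simple way is to capture the plan plus runtime metrics right in SSMS. The table here is hypothetical; substitute anything you have handy:

```sql
-- returns the execution plan plus runtime metrics as XML
-- alongside the query results; click the XML in SSMS to see
-- the graphical plan, first operator at the top left
SET STATISTICS XML ON;

SELECT p.FirstName, p.LastName
FROM dbo.Person AS p
WHERE p.PersonID = 42;

SET STATISTICS XML OFF;
```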
If you're working with the Microsoft Data Platform, you should be, at the least, exploring Azure Data Studio as a new tool in your toolbox. One of the big reasons for this is the inclusion of Jupyter Notebooks. For those who don't know, Jupyter Notebooks are an open source documentation tool that lets you combine text and pictures with live code. From this we can talk about runbooks that you can share with people, lessons in combination with videos, presentations, interactive software documentation and lots more. I'm myopically focused at the moment on Azure Data Studio, but there are a lot of other places and ways to create or consume notebooks. However, I'm going to keep my focus. The issue I'm running into is distributing the notebooks. Where to go…
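Part of what makes distribution both easy and messy is that a notebook is just a JSON file. Here's a heavily trimmed sketch of the format (several required fields are omitted for brevity, and the kernel metadata is what I'd expect Azure Data Studio to write for a SQL kernel, so treat the details as an assumption):

```json
{
  "metadata": {
    "kernelspec": { "name": "SQL", "display_name": "SQL", "language": "sql" }
  },
  "nbformat": 4,
  "nbformat_minor": 2,
  "cells": [
    { "cell_type": "markdown", "source": ["## Server check"] },
    {
      "cell_type": "code",
      "source": ["SELECT name, state_desc FROM sys.databases;"],
      "outputs": []
    }
  ]
}
```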
For those who don't know, last week was the PASS Summit. It's an amazing event every year, but this last week, I saw a ton of indications that our peers are spotting the changing technology landscape largely defined by three tools: Docker, Git, and dbatools. None of those indications resonated quite as much as this tweet from Kevin Hill: 3 things I can no longer justify ignoring: #dbatools Git and #Docker for my dev SQL work @cl @sqldbawithbeard @Kendra_Little and @unclebiguns @GFritchey, I blame you 🤪😂 There’s more but those are top 3 — SQL Cyclist (@Kevin3NF) November 9, 2019 There are a million things to learn about in our rapidly shifting technological landscape, but I think this assessment, especially the way it was put, "no longer justify ignoring", really nails some of…
Yeah, Redgate is only one year younger than my children. What's really frightening is that I've been using Redgate's products since my kids were a year old. I was a VERY junior DBA twenty years ago, having just made the move from full-time development. I'll tell you, though: I think I had Redgate's SQL Compare open on my desktop non-stop from the moment it was available. I know I personally ensured that four different organizations purchased at least one license. Now here we are, twenty years later. My kids are grown, but I'm still gleefully using Redgate Software. Yeah, I know, now I work for them, but that's just a bonus as far as I'm concerned. I've been praising and promoting Redgate for twenty years and I hope I…
I love going to SQLSaturday events because I'm always asked questions that make me think. I was just at SQLSaturday Indianapolis (a great event, if you weren't there, you missed out). I was giving a session called "Extending DevOps to SQL Server" (which I'm giving this Saturday at SQLSaturday Providence). I was talking about the fact that I've been involved in successful DevOps implementations and I've been involved in failed DevOps implementations. The question that came up was, "What were the key differences between the failed and successful organizations?" Great question. Management Buy-In I've seen attempts to implement DevOps strictly from the IT side of things. A relatively high-functioning team recognizes the benefits of an agile approach that's oriented towards improved collaboration between people and that uses automation in support of…
In just a couple of weeks, I'll be presenting an all-day session on DevOps for databases. It takes place on Friday, August 30th. You can click here now to get signed up. I have a very hard time hiding just how excited I get about DevOps. It's not just that the technology is fun. It is. It's not just that it makes for a happier work environment. It does. It's not because by using DevOps you can deliver more quality functionality, faster, for your organization. You can. No, the reason I love DevOps, as a DBA, is that it creates added protections for my production environment and my production data. You can think of the entire DevOps process as another backup, another consistency check, one more enforced referential constraint.…