The hardest part of implementing DevOps is not the tools you choose, but the processes you use to make DevOps work. That said, you do need to think about the tools you’re going to use to automate those processes. The emphasis frequently falls on third-party tools, but it doesn’t have to. Microsoft’s Visual Studio has a number of tools you can use to automate your DevOps methods.
Visual Studio Team Services
Connecting a project to Team Services opens up the world of DevOps pretty handily. You can host all of this locally by installing it on your own server. With more and more of us working on teams that span continents and oceans, though, it probably makes more sense to use the online version. There’s a lot less to install (agents on some machines to manage processes, but that’s about it). You’ll still have to do all the work of setting up projects, teams, workflows, etc. In short, the hard work is still there; you just avoid having to do software installs & updates. Service-oriented team management and development has a lot to recommend it.
Set up source control easily through GitHub. You can also use the older TFS source control (TFVC). Personally, I’ve joined the cool kids (admittedly, late) and now use Git.
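If you’re just joining the cool kids yourself, the day-to-day Git mechanics are only a handful of commands. Here’s a minimal sketch; the repository lives in a temp directory and the remote URL in the comments is a placeholder, not a real project:

```shell
# Minimal Git workflow sketch. The inline user.name/user.email settings are
# just so the example runs anywhere; normally you configure those once globally.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "-- schema change" > 001_create_table.sql
git add 001_create_table.sql
git -c user.name="demo" -c user.email="demo@example.com" \
    commit -q -m "Add initial schema script"
# Hooking this up to a hosted remote (GitHub or Team Services) would then be:
#   git remote add origin https://github.com/<org>/<repo>.git
#   git push -u origin master
git log --oneline
```

From there, Team Services (or GitHub) just becomes the remote you push to and pull from.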
Team
You manage the team and work assignments through the built-in team management system, including bug tracking and other types of requests. That makes it possible to treat your DevOps system as a full circle: you log issues from production back into the development management system, and those changes are then implemented and deployed using the same process you’d use for new or modified functionality.
Builds
Automate your builds, pulling from the appropriate place in source control. Include unit testing as necessary. Add additional steps to your build process to support things like backups or security management.
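Conceptually, a build with those extra steps is just an ordered pipeline that stops on the first failure. Here’s a rough shell sketch of that shape; every command is a placeholder (`true`), and in a real Visual Studio build you’d swap in your actual tooling, e.g. msbuild and vstest.console.exe:

```shell
# Sketch of a build pipeline with extra steps wrapped around compile and test.
# All commands are stand-ins; substitute your project's real build tooling.
set -e
run_step() {
  name=$1; shift
  echo "[build] $name"
  "$@"                                 # fail the whole build if a step fails
}
run_step "back up previous artifacts" true   # e.g. copy last good build aside
run_step "compile solution"           true   # e.g. msbuild MySolution.sln
run_step "run unit tests"             true   # e.g. vstest.console.exe Tests.dll
run_step "security / static analysis" true   # e.g. a code-scanning pass
echo "[build] succeeded"
```

The VSTS build definition editor gives you the same thing through a GUI: an ordered list of steps, each of which can fail the build.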
Deployment
Deployments can be set up to fire manually or in an automated fashion across multiple environments, and you can customize those environments for your own process. Setting up a continuous integration environment that deploys each time a successful build concludes is easy. All of this is managed through the same set of interfaces (and can be automated through PowerShell). You can also schedule deployments to test systems or whatever else you need, and you get to configure the lot with parameters and as many steps as are necessary to get your code out the door.
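The parameterization is the key idea: one deployment process, different values per environment. A hedged sketch of that pattern, with invented server names purely for illustration:

```shell
# Parameterized deployment sketch: one script, several target environments.
# Server names are invented; a real version would run migration scripts,
# copy artifacts, and restart services in each step.
set -e
deploy() {
  env=$1 server=$2
  echo "deploying to $env on $server"
}
target=${1:-test}                 # default to the test environment
case "$target" in
  test)    deploy test    test-sql01 ;;
  staging) deploy staging stage-sql01 ;;
  prod)    deploy prod    prod-sql01 ;;
  *) echo "unknown environment: $target" >&2; exit 1 ;;
esac
```

In Release Management the environments, variables, and approval gates replace the `case` statement, but the mechanics are the same: identical steps, environment-specific parameters.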
Tests
Finally, you can build out multiple testing schemes so that you run different types of tests in different environments, helping automate the whole process.
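One simple way to think about this is a mapping from environment to test categories. The category names below are assumptions for illustration; you’d map them onto your test runner’s own filters (for example, vstest.console.exe supports a /TestCaseFilter option for selecting tests by category):

```shell
# Sketch: choose which test suites run in which environment.
# Suite names are placeholders for your runner's real categories/filters.
run_suites() {
  env=$1
  case "$env" in
    ci)      suites="unit" ;;
    test)    suites="unit integration" ;;
    staging) suites="integration smoke performance" ;;
    *)       suites="" ;;
  esac
  for s in $suites; do
    echo "running $s tests in $env"   # e.g. invoke the runner with a category filter
  done
}
run_suites ci
run_suites staging
```

Fast unit tests on every CI build, heavier integration and performance runs further down the pipeline.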
And 3rd Party Tools
If all that’s not enough, you can add from an assortment of third-party tools to support your processes. I especially recommend you examine the third-party products from Redgate that support database development: ReadyRoll and SQL Source Control (migrations-based and state-based deployments, respectively). You’ll also find plug-ins for a lot of the standard CI & release management tools, such as TeamCity and Octopus Deploy, so you can integrate your processes in different ways as needed.
I just found out about one more tool that might prove useful here: Papertrail, a log aggregation tool. For a CI & release process where multiple servers and multiple processes are spitting out logs, you may want something that pulls them together for consumption.
Conclusion
Starting down the DevOps path, you should focus first on your processes. However, when you’re ready to look at tools to automate those processes, you can easily stick to Visual Studio to get the job done. Just remember that the most important aspect of DevOps is communication, and you’ll find that Team Services helps facilitate that communication nicely.
We’re just starting down the DevOps path. As production DBA, I’m actually part of our production engineering team (network/server, etc.) and report to that manager, who is our DevOps “evangelist.” Have I missed an article that talked about the role of production engineering and how QA/Development can work with PE and move beyond the traditional antagonist roles?
What I’m finding so far is that melding the two is resulting in big hurdles for me in moving database changes to production. We always had change control, but now I have many more hurdles and many more weeks/months before even needed, performance-related changes can be delivered to production (find which GUI menu option results in the query needing work/index changes, present to Architecture Review and get it passed, build/test, JIRA ticket, add to AccuRev or GitHub, promote in AccuRev to the release stream and hope more testing occurs that is actually related to your DB changes, pray, take a nap, change goes into release to prod or is run separately by the DBA if long-running).
There are a bunch of others posting on this topic today and may have a few answers to your question. You can see most of the posts here: https://www.scarydba.com/2017/06/06/t-sql-tuesday-091-databases-devops/
I’ll be putting together a wrap-up blog post with everything in a single page on Thursday.
I’ve also written a lot about this topic over at Redgate for the book we put together on database lifecycle management. Some of that could help (and it’s free to download): https://www.simple-talk.com/collections/database-lifecycle-management-achieving-continuous-delivery-for-databases/