I took part in the PASS Summit 2014 selection committee this year because I was really curious to see how the sausage gets made. I’ve seen how actual sausage gets made, and I still eat sausage. Despite a few hiccups and communication issues, internal and external, I think the selection process for the Summit went really well this year. But there was still some controversy. Being a naturally pushy person, I got involved in that controversy, for good or ill, and have subsequently had conversations with many people about the selection process (which, reiterating, I think went extremely well overall). But the one thing that kept coming up, over and over, was a simple question:
How come I/PersonX didn’t get picked?
The easy answer is that you/PersonX had a horrible abstract. But you know what? In most cases, that’s probably not true. Good abstracts by good people didn’t get selected, so what the heck? I think the more complex answer does not go back to the selection committee or the selection criteria or the selection process. Do I think some improvements are possible there? Yes, and I’m putting my foot where my mouth is (or something) and joining the committees to try to make some tweaks to the system to make it better (and really, tweaks are all we need; I want to repeat, risking ad nauseam, that the process went well and worked great, I’m happy I took part, and I think the outcome is pretty darned good). No, the real problem lies elsewhere: SQL Saturdays.
I’m not saying SQL Saturdays are themselves a problem. What I’m saying is that PASS took on the whole SQL Saturday concept for several reasons, one of which was for it to act as a farm team for speakers. This will be my 10th Summit. Looking back to 10 years ago, while I really loved the event, oh good god have the speakers improved. I remember sitting in sessions with people who were mumbling through their presentations so much that, even with a microphone, you couldn’t hear half of what they said. Slide decks that consisted of 8-12 pages of text (yes, worse than Paul Randal’s slides, kidding, don’t hit me Paul). Speakers who really, clearly, didn’t have a clue what they were talking about. It was kind of rocky back then. I learned in my second year that you had to talk to people to find out not just which sessions sounded good, but which speakers were going to present those sessions well enough to be worthwhile. Why were there so many weak presenters? Well, because there was almost nothing between speaking at local user groups and speaking at Summit (I made the leap that way). There were a few code camps around, a couple of other major events, various schools and technical courses, and Summit. I don’t know how the old abstract/speaker review process worked (and I apologize to whoever read my first abstract, because I know now just how horrific it was and I’m so sorry I wasted your time), but I’m pretty sure they were desperate to get enough submissions that sounded coherent, with a speaker attached who could probably get the job done. Not anymore.
Now, people are getting lots of opportunities to present at SQL Saturday events all over the world. And it’s working. We’re growing speakers. We’re growing good speakers. Don’t believe me? Then go to two or three events in a month, sit through 8-12 sessions, mostly by newer people (not Brent Ozar, not Denny Cherry, not Kim Tripp), review them, each, individually, and then go back and try to pick the best one. Oh yeah, there are going to be a few dogs in the bunch, but overall you’re going to find a great bunch of presentations by a great bunch of speakers. Our farm system is working, and working well. But there’s a catch.
Because we have upped the bar pretty radically for all the introductory-level speakers (and if you’re thinking about presenting, don’t let that slow you down; everyone starts at zero and goes up), the competition at the top (and yes, I do consider the Summit the top in many ways, though not all; see SQLBits) is becoming more and more fierce. That means my abstracts probably need quite a bit more polish than they’re getting (and so do yours), because there is a whole slew of up-and-coming speakers writing killer abstracts. That means I need to really be concerned about my evaluations (despite the fact that I get dinged because the stage is low, the room is hot/cold, lunch didn’t have good vegetarian choices, England left the Cup early, all outside my control), because there are new speakers knocking it out of the park. In short, you/I/PersonX didn’t get picked because the competition has heated up in a major way.
To put it another way, a sub-section of the community, those who wish to speak, is a victim of the success of the farm team system as represented by SQL Saturday. On the one hand, that sucks, because I now need to work harder than ever on my abstracts; on the other, we’re going to see very few instances of really bad presentations at Summit. We’ve improved the brand and the community. It’s a good thing.
I feel like this was further evidenced by the scores from last year’s PASS Summit: a good 90% of speaker scores for the event were 3.9 or higher, which shows very few dogs (and yes, I know there is a bias that tends to inflate scores).
One thing I was really surprised about was that the SQLSaturday feedback doesn’t persist: I’m given my reviews at the end of the session… and that’s it. While, granted, a lot of them are “55555” with no comments, I have to wonder if that data could somehow be used alongside the abstract to find the up-and-coming speakers: “X has spoken Y times and has Z reviews with an overall score of N.”
Being on the oh-god-I-hope-they-pick-me side of things, maybe that’s not a terribly helpful idea. But it seems odd that we’re willing to throw away useful data in this case. Personally, I’d have already built a script to automatically archive it to tape. : )
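Just to sketch the idea (all table and column names here are made up; no such schema exists in the SQLSaturday tools as far as I know), the roll-up would be a simple aggregate over persisted feedback:

```sql
-- Hypothetical schema: dbo.SessionFeedback(SpeakerId, SessionId, Score)
-- and dbo.Speakers(SpeakerId, SpeakerName).
-- Produces the "spoken Y times, Z reviews, overall N" summary per speaker.
SELECT  s.SpeakerName,
        COUNT(DISTINCT f.SessionId)        AS TimesSpoken,  -- Y
        COUNT(f.Score)                     AS ReviewCount,  -- Z
        AVG(CAST(f.Score AS DECIMAL(4,2))) AS OverallScore  -- N
FROM    dbo.SessionFeedback AS f
        JOIN dbo.Speakers AS s
            ON s.SpeakerId = f.SpeakerId
GROUP BY s.SpeakerName
ORDER BY OverallScore DESC, ReviewCount DESC;
```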
While feedback is absolutely a gift… the gift we get from the currently defined SQL Saturday feedback forms is quite small indeed. One could even say vanishingly small to the point of invisibility. I would hate to see them used as any kind of measure the way they’re currently configured. That said, one would think the farm team would need to keep records in order to differentiate between the good and bad players.
How does experience in the farm teams help a speaker in a blind abstract selection process? Maybe I’m mistaken, but I thought the folks evaluating abstracts didn’t know who submitted them. Next weekend will be my 20th SQL Saturday as a speaker. I’ve grown a lot; I’ve done some awful talks and some great ones. I’m not convinced any of that counts for much when I submit sessions to the Summit. Could my abstracts be better? Yes. But it seems unlikely that every big-name speaker writes flawless abstracts, yet it’s very rare to hear of the big names not getting at least one regular session (sometimes more than one).
Note: In all honesty, I know I could improve as a speaker, so while it does sting a bit to be passed over, I’m cool with it. I worry that the process favors big-name and/or past Summit speakers, and that new speakers have less chance of getting picked each year.
Hey Tim,
God knows, I may not have a clue, but I was thinking that having to write more and more abstracts helps you polish them before submitting to Summit. Also, the speakers go through an evaluation as well as the abstracts (independent of each other), so all that speaking at SQL Saturdays does have an impact (although maybe not as much as people hope for). But this is an opinion, not a statement of absolute fact.