PASS Summit 2007: Day Two Notes


Tom Casey: Keynote

Tom covered basic SQL Server marketing slides about the BI stack. Yawn. Good presenter, but he couldn’t overcome the dry material. Then he handed off to a presenter from Dell to announce that they’ll be selling packaged BI solutions with hardware, software, storage, and so on, all in one bundle. They’re targeting customers in the 1-4 terabyte range with pricing around $100k per terabyte.

I don’t understand the target customer for Dell’s packaged BI systems. I think of the decision tree like this: if a company is large enough to spend $100k per terabyte on a BI solution, aren’t they large enough to already have a preferred x86 hardware vendor, and probably a SAN? Why not just buy hardware from your existing preferred vendors? Surely you wouldn’t choose Dell for their BI services expertise…

Russell Christopher suggests that companies will be attracted because Dell’s answered the question of what hardware configuration is required for a 1/2/4tb BI project. I’m not so sure – I haven’t seen two 1tb BI projects go the same way.

Anyway, regardless of my dim-witted analysis, this package would have been a hard sell coming out of anybody’s mouth, but it was an especially tough sell from this particular presenter. She was a nervous wreck: she kept rehashing the same “I came from the dark side, not SQL Server” joke, and her slides didn’t match her speech. She would advance a slide only to say, “Oh, yeah, I already mentioned that.” Argh. Evidently Dell saves money on their marketing staff and passes that savings on to us.

Keynotes should be reserved for the very best speakers. Tom was fine, but it all went south when she picked up the mike. I bailed just before her presentation finished, and I heard bad things from other attendees as well.

Jerry Foster: Plan Cache Analysis in SQL Server 2005

In contrast to the Dell presenter, Jerry lit up the room as soon as he got started. This guy was born to teach. This was the first SQL seminar I’ve seen where people burst into spontaneous applause in the middle of a demo, before he’d even reached the conclusion.

Jerry and the crew at Plexus Online built a slick system to interpret the dynamic management views for the query plan cache. In a nutshell, his queries make it easy to see where load is coming from down to the batch and statement level, all without running a resource-expensive trace on the production servers.
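
His downloadable samples go much further, but the core technique looks something like this minimal sketch (my own query against the standard DMVs, not Jerry’s code): rank the cached statements by total CPU, no trace required.

```sql
-- Rank cached statements by total CPU from the plan cache DMVs.
-- total_worker_time is in microseconds, so divide by 1000 for ms.
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_logical_reads,
    SUBSTRING(st.text,
        (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```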

About five minutes into the session, I knew I wouldn’t bother taking notes because I’d print out the slides and pore over them anyway. I downloaded his code the minute he put the URL up on the projector, and I’m going to spend a week going through it. He didn’t give out the source code for his front end, and I’m torn between building my own in classic ASP (a miserable language, but I know it well enough to be dangerous) versus SSRS 2008 (which I don’t know at all, but might do well to learn).

I’m not even going to try to touch on everything Jerry discussed. Hard-core database engine DBAs owe it to themselves to go get his samples and pore over them.

I got chuckles out of some of the audience members’ questions, though. One of them started picking out differences between memory allocations on 32-bit versus 64-bit servers, trying to find out how much memory over 4gb his 32-bit servers could use for the plan cache. (For the record: none – AWE memory on 32-bit only extends the buffer pool’s data pages, not the plan cache.) Hey, buddy, if you have to ask that question, then you need to upgrade to 64-bit. And if your database server isn’t capable of upgrading to 64-bit SQL, but you’re sitting in a seminar about caching, then you’ve got your priorities all wrong.

SQL Customer Advisory Team: Building Highly Available SQL Server Implementations

Going in, I thought this would be more technical, but it turned out to be a fairly high-level comparison of the newer HA technologies: database mirroring, peer-to-peer replication, log shipping, and just a tiny bit on clustering. I didn’t learn much because I’d already researched the bejeezus out of these options, but their lessons-learned stuff bears some repeating here for people who haven’t done the homework.

Peer-to-peer replication and database mirroring have one good advantage over log shipping: the backup server (replication partner or mirror) can serve up a read-only copy of the database for query purposes. For mirroring, that works through a database snapshot on the mirror, since the mirror itself stays in a restoring state. The CAT guys didn’t mention that doing this means you have to pay licensing on the backup server; if you use it purely for disaster recovery, you don’t have to license it.
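
A rough sketch of that reporting-snapshot trick, with made-up database and file names (a multi-file database would need every data file listed):

```sql
-- Run on the MIRROR server: expose a point-in-time, read-only view of the
-- mirrored database by creating a snapshot of it. Names/paths are hypothetical.
CREATE DATABASE SalesDB_Reporting
ON ( NAME = SalesDB_Data,   -- logical data file name of the mirrored database
     FILENAME = 'D:\Snapshots\SalesDB_Reporting.ss' )
AS SNAPSHOT OF SalesDB;
```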

Mirroring & log shipping should be done with Active Directory group security instead of SQL logins. Companies that frequently create SQL logins and modify their passwords will run into problems during disaster recovery, because SQL logins aren’t synced between servers by mirroring or log shipping. If you strictly use AD groups for access permissions, the only logins stored in the system databases are the groups themselves, and you won’t have to worry about syncing individual users.
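
If you’re stuck with SQL logins anyway, the classic post-failover symptom is orphaned database users. A quick way to spot them on the new server (database name is hypothetical):

```sql
-- List database users whose SQL logins are missing (or have mismatched SIDs)
-- on this server - the classic orphaned-users check after a failover.
USE SalesDB;
EXEC sp_change_users_login @Action = 'Report';
```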

Syncing SQL agent jobs, alerts, SSIS packages and maintenance plans is also a headache when doing disaster recovery planning, because those aren’t synced automatically either.
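
Nothing in the box syncs those, so at a minimum it’s worth periodically diffing an inventory between the servers. A trivial sketch – run it on both sides and compare:

```sql
-- Quick-and-dirty inventory of SQL Agent jobs for a side-by-side diff:
SELECT name, enabled, date_modified
FROM msdb.dbo.sysjobs
ORDER BY name;
```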

When doing database mirroring, remember that databases fail over individually, not the whole server at once. If your application uses multiple databases, you don’t want the failovers to occur automatically, because a single database might fail over without the others, leaving the application straddling two servers.
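
In practice that means running without a witness (no witness, no automatic failover) and moving all of the application’s databases yourself when the time comes. A hedged sketch, with a made-up database name:

```sql
-- Remove the witness so this database never fails over on its own:
ALTER DATABASE SalesDB SET WITNESS OFF;

-- When you decide to fail over, run this on the principal for each of the
-- application's databases in quick succession:
ALTER DATABASE SalesDB SET PARTNER FAILOVER;
```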

Monitor resources when mirroring more than 10 databases on an instance. That 10 number is flexible, just a rough guesstimate. (That scared me because I mirror more than 10 already.) Due to the way the mirror server handles writes, it may incur significantly higher I/O than the principal server.
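
A quick way to see what’s mirrored on an instance and in what state is the standard catalog view (this is just stock SQL Server, not anything from the session):

```sql
-- One row per database; mirroring_guid is NULL for unmirrored databases.
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;
```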

In the event of a disaster, break mirroring quickly if there’s a chance the log files may fill up before the principal server comes back online.
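
Breaking it is a one-liner, which is part of why they recommend deciding fast (database name hypothetical):

```sql
-- Removing the mirroring partnership lets the surviving server's log
-- truncate again on the next log backup instead of piling up unsent records.
ALTER DATABASE SalesDB SET PARTNER OFF;
```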

When planning database mirroring, carefully analyze the log backup volume over time. Index rebuild maintenance generates a lot of log volume, and you want that to happen during a very low-activity window so that the mirror doesn’t fall too far behind. They’ve seen index rebuilds put asynchronous mirroring over 2gb behind in less than 10 minutes.
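
One way to do that analysis, assuming you keep backup history in msdb: sum log backup sizes by hour and look for the index-maintenance spike. My sketch, with a made-up database name:

```sql
-- Hourly transaction log backup volume in MB, from msdb backup history:
SELECT DATEADD(HOUR, DATEDIFF(HOUR, 0, backup_start_date), 0) AS backup_hour,
       SUM(backup_size) / 1048576.0 AS log_mb
FROM msdb.dbo.backupset
WHERE database_name = 'SalesDB'
  AND type = 'L'               -- log backups only
GROUP BY DATEADD(HOUR, DATEDIFF(HOUR, 0, backup_start_date), 0)
ORDER BY backup_hour;
```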

They talked through a rather esoteric DR setup: two servers in the primary datacenter doing synchronous database mirroring with each other, plus a third server in the disaster recovery datacenter receiving log shipping. That struck me as ridiculous because I’d have three possible database server names, which would be a configuration nightmare on the client side. Anyway, getting that scenario working requires manual setup and scripting, because log shipping has to be set up on both mirrored servers, and that can’t be done with the GUI.

Jason Carlson: Rich Report Design with SSRS 2008

I don’t know Jack about SSRS, but I figured I’d better sit in on this seminar after a midday conference call suggested that we might be doing it in-house.

The SSRS report design process is pretty much all new from the ground up in SSRS 2008, which makes me glad I didn’t put time into learning SSRS 2005. (Yay, procrastination!) The new design tool will be completely integrated into Visual Studio after CTP5, plus there’s a second, standalone designer with a Vista/Office 12 feel inspired by PowerPoint. The standalone designer will support server mode (instead of just working locally), whereas the VS designer will only work when paired with an SSRS server.

Microsoft acquired the rights to a lot of Dundas chart code a few months ago. Dundas circular and linear gauges are coming in CTP6, but maps may not make it to RTM.

The chart setup is much more drag & drop than it’s been in previous releases (they say, and the crowd oohed in approval). Coders can click right in the chart to change the legend, title, locations, etc., much like Excel. As you’re doing chart setup in pop-up dialogs, the charts update in the background instantly. As a user of Excel for over ten years, I wasn’t quite as impressed as the developer members in the audience, but that’s okay – it just means I picked the right time to start poking around in SSRS.

I left about halfway through this presentation because I got some bummer news via email about a project, and wanted to do some damage control.

Tomorrow’s going to be tough – I’m stymied as to which sessions to attend, because some pretty good stuff is scheduled opposite other good stuff. I’m staying overnight on Friday, so I’ll be able to stay through the last session: troubleshooting connectivity errors. Sounds boring, but we’re having those issues at work, so I’ll be tuned in.

On to my Day Three notes.
