I think Tim Ford aka SQLAgentMan was the first with this, but now it’s turned into a monster. On Twitter, we’re trading SQL putdowns. You can find ’em on search.twitter.com by searching for #SQLputdowns.
Update – Jason Massie aka StatisticsIO even did a Captain Varchar(MAX) comic about the #SQLputdowns, and there’s some funnies in the comments.
Update #2 – the day’s winding down but this meme is still going strong, and here’s my personal favorites from the chatter:
- Yo mama so dumb she separates statements with STOP. – JCumberland
- Yo mamma so ugly the server spontaneously runs an ALTER VIEW whenever she logs in. – Flagster
- Yo momma’s so dumb, she thought SET NO COUNT ON would protect her from Dracula. – abnormalus
- Yo momma’s so ugly, ain’t nobody got her connection string. – abnormalus
- Yo mama ain’t gettin an inner join without a tic-tac! – LeeBrandt
- Yo’ mamma’s so dumb she thinks SELECT * (star) is the next big reality game show. – ArcaneCode
- Yo momma’s so ugly even the full text search engine refuses to find her. – ArcaneCode
- Yo momma’s so dumb she thinks that a CHECK CONSTRAINT is why she has to pay cash at the Piggly Wiggly store – Joey96
- Yo momma’s so dumb she bought your dad autogrow instead of viagra – DPenton
- Yo momma’s feet so big, her shoe size is MAX – DamonRipper
- Yo momma’s so ugly, she breaks database mirrors – debettap
- Your momma must be an OLTP system, cause she’s getting inserts all the time! – whatduh
- You momma’s so fat she has her OWN datatype. – SQLCraftsman
- You momma is a few nodes short of a B-Tree – SQLCraftsman
- Yo mamma is so freaking that we had to put her in single user mode. – StatisticsIO
- yo mamma is so dumb, she thinks foriegn keys involve immigration – carpdeus
- Yo momma so fat, we set her clothes to autogrow – KBrianKelley
- Yo mamma wears sp_depends – JoeWebb
All over the country, DBAs are chuckling to themselves….
Inspired by Rhonda Tipton and Dwight Silverman, a couple of the local Houston bloggers I follow, I’ve decided to start posting a recap of the links I’ve found interesting during the past week. Most of ’em will be SQL Server related, but some are just things I appreciated. You’ll find those links to be useless and a drain on your time. I do not apologize for this.
SQL Server Links
Microsoft SQL Services Labs – it’s like Google Labs, only it’s for SQL Server. I wrote up an explanation of how to use the Data Mining in the Cloud with Perfmon data to help pinpoint SQL Server bottlenecks, and I look forward to playing around with more of the lab tools.
SQL Server Cheat Sheets – Donabel Santos links to several cheat sheets to help DBAs with basic syntax. Whenever Quest gives away posters with syntax for stuff like DMVs and system stored procs, DBAs go crazy asking for them, so I bet you guys will go crazy for these too.
Forcing Indexes to Become Fragmented – sounds odd, but it’s helpful if you want to test your defrag code. This came in handy not once, but twice this week when I had Quest customers running into problems with their defrag jobs.
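One common way to force fragmentation (a sketch of my own, not necessarily the linked article's method): insert a pile of rows keyed on random GUIDs. Random keys land in random spots in the clustered index, splitting pages as they go, and fragmentation climbs fast. Table and column names here are purely illustrative.

```sql
-- Hypothetical throwaway test table - random NEWID() keys cause page splits,
-- and the fat Padding column fills pages quickly so splits happen sooner.
CREATE TABLE dbo.FragTest (
    ID      UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
    Padding CHAR(500)        NOT NULL DEFAULT 'x'
);
GO
INSERT dbo.FragTest DEFAULT VALUES;
GO 10000   -- SSMS batch-repeat: run the insert 10,000 times

-- Check the damage before pointing your defrag job at it:
SELECT avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.FragTest'),
                                    NULL, NULL, 'LIMITED');
```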
Understanding HP EVA SAN storage – good SAN consultants cost a ***lot*** of money by the hour. They make me look like a fry guy at McDonald’s. Chris does a good job of breaking down how an EVA allocates storage, and even revisits his blog entry after getting more feedback.
Cloud Computing Links
Microsoft Azure vs Amazon, Google and VMware – fellow Questie Dmitry Sotkinov compares the cloud vendor visions. You, dear reader, probably have a real job in the real world, but those of us who work for ISVs like Quest get to focus on fun stuff like every new cloud vendor that comes out. (Well, yes, it is still real work – we have to decide when & where to deploy products in the cloud – but man, it sure feels like fun.)
Cloud Computing Incidents Database – keeps track of outages at the major cloud vendors. Instead of saying, “I think these guys are pretty reliable,” now you can go find out exactly when they’ve been down for the count.
Windows Live ID becomes an OpenID provider and then so does Google, sorta – remember the Microsoft Passport idea back a few years ago where you’d use one login to access all of your web sites? That idea rocked in theory, but because there were concerns around a single company “owning” your online identity, it never really caught on. OpenID is the new equivalent of Microsoft Passport, only it’s an open standard. You can host your own OpenID – I host mine here at BrentOzar.com. I’m looking forward to the day where I don’t have to have a zillion different logins to sites.
The Junk Drawer
HP Mini 1000 comes out for $400 – I’d love one of these, but I have two problems. One, I gotta stop spending money on gadgets that I only use for two hours per week. Two, it doesn’t have a standard VGA output. (And no, I don’t give Apple a pass on that either – my Macbook Pro has a DVI out and I hate it.) But when I break down and buy it, I’ll pick up one of these cool new netbook sleeves from ThinkGeek.
Penny Arcade brought out two new games – their new game, “Penny Arcade Adventures: On the Rain-Slick Precipice of Darkness, Episode Two” is a followup to the first episode, which I found absolutely hilarious. You can suck at gaming and still enjoy it – I certainly do. They also brought out World of Goo, a highly-anticipated followup to an award-winning demo. Watch the demo. Seriously. Amazing. Gaming is so completely different than when I was growing up. When I was young, the gamers were forced to use their imaginations. Now, the developers are really forced to do it in order to stand out from the crowd, and World of Goo does.
Twitter tools Qwitter and FriendOrFollow – two helpful tools for Twitter users. Qwitter sends you an email when someone stops following you, and FriendOrFollow tells you who’s following you that you’re not following. I don’t give a rip about my Twitter statistics, but I know other people out there do, so here you go.
Satirical post on MS SCVMM’s ISO handling – too many acronyms, I know. Here’s the deal: when you have a virtual server farm with a bunch of hosts and hundreds of guests, you sometimes need to attach ISO files to them to act as virtual CD drives for booting or installing software. VMware can use a shared file repository and attach any ISO file to any guest instantly. Evidently (and I don’t know this firsthand) Microsoft’s System Center Virtual Machine Manager has to copy the ISO file to the same location where the guest stores its hard drives, which means more storage space used and a longer time to go live. Now that you have the background story, you might laugh at Eric Gray’s interpretation of it. I did, but I’m dry.
Porsche – I Can. – every now and then I get the urge to go build a new Porsche 911 Targa on their web site and I run across a new ad. This one’s gorgeous. I also loved the intro video on the Jaguar XF site, but it still doesn’t help the fact that the front end of the $50k Jag looks like a Buick. It looks great from behind, though. It’s like that old butterface joke….
If you want to get these links as I bookmark ’em during the week, you can subscribe to one of these RSS feeds:
You’ve heard me talk about my SSWUG video conference sessions, but you’re not sure whether it’ll work, or whether it’s worth the money? Well, Chris Shaw and the good folks at SSWUG are giving away free previews to show you how good it looks. You can watch the first ten minutes of my SQLIO session for frrrrrrrreeeeeee:
Yes, I really do talk with my hands, and yes, I use an Apple Macbook Pro. The stickers are:
More stickers to come – I’m looking forward to getting some at PASS and plastering them all over the Mac. Erika hates it when I do that because the Mac does indeed look better naked, but I don’t want you guys thinking I’m a graphic designer or something.
Update 10/29 – I just got word that if you use the VIP code “BOZAVIP” when signing up, you get $10 off your registration, bringing it down to $90.
There’s a drawback, though, and I’m going to tell you about it because I believe that honesty is the best policy. If you use that signup code, I get $5. If more than ten of you use the code, my cut goes up to $10 per person. Here’s where the drawback comes in: Erika has already given me permission to spend my SSWUG money as “fun money” on my week-long Caribbean cruise in December. I might come back with alcohol-induced amnesia or a bad tattoo. So maybe it’s better for all of us if you don’t use that code. I’m just putting it out there, your choice.
This month’s Redmond Mag glows about some new features in SQL 2008, and yes, it does have a lot of cool tricks up its sleeve. But before you go upgrading your servers to get those new features, there’s one thing you need to know.
New versions of SQL Server are not always faster for every query.
This may come as a surprise to you, but every version of SQL Server has areas that require manual tweaking in order to be as fast as the previous version. I worked with Jeff Atwood and the guys at StackOverflow this past weekend to move them onto SQL Server 2008, and we had a nasty surprise. Jeff summed up the issues with the SQL 2008 upgrade on his blog, but I’ll cover it here from a DBA perspective.
The app and SQL 2005 were on the same box (which tells you a lot about the performance of his code, because that site is doing pretty well) and they got a new box running SQL 2008. We restored the databases over to the 2008 box for testing, ran queries, and compared performance.
Full text search results were slower, but we didn’t catch just how slow they were because we focused on the queries that were currently running slow on the 2005 box. Some of the queries we didn’t test had been running great on 2005, but were dead slow on 2008.
Why slower? Different execution plans.
We did catch a second issue: on a particularly slow set of queries, SQL 2008 was building a different execution plan than 2005. This execution plan worked – but not for the kind of load StackOverflow has. I narrowed down the differences and was able to overcome the regression with trace flag 2301, which lets SQL spend a little more time building a good execution plan. By spending more time compiling the plan initially, and saving that plan in cache, we got phenomenally better results. Query times went from 190ms on SQL 2005 to 40ms on SQL 2008. Hubba hubba. All systems go.
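If you want to experiment with that flag on a test box, you can flip it on and off without a restart. This is a sketch, not a blanket recommendation – as the rest of this post shows, 2301 can bite you under production load:

```sql
-- Turn trace flag 2301 on globally (-1 = all sessions) so the optimizer
-- spends more time modeling before it settles on a plan:
DBCC TRACEON (2301, -1);

-- Re-run your problem queries, compare the plans and timings, then revert:
DBCC TRACEOFF (2301, -1);

-- To make it survive restarts, you'd add -T2301 to the SQL Server
-- startup parameters instead (that's what we did, and later ripped out).
```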
Denny Cherry, a performance tuning guru with a long history at MySpace.com and Awareness Technologies, asked me why I didn’t manually set up query execution plans for them. If it was my server and I was Jeff’s full-time employee, that’s exactly what I’d do. Problem is, if you don’t have a full-time DBA to watch the server and identify what the right (and wrong) execution plans are, you introduce an element of mystery. I can imagine what would happen three months down the road: performance would go to hell in a handbasket as schemas, queries and indexes changed over time. Jeff wouldn’t know what things were the fault of the engine, versus what things were the fault of the DBA who’d changed these settings a while back. So I had to pick a solution that wouldn’t require StackOverflow to incur a huge new payroll expense.
Went live with 2008, -T2301 killed us.
We went live with SQL 2008, rebuilt the indexes & stats, turned on the site (now hosting IIS on a separate box, mind you) and immediately the server slowed to a crawl. I figured it’d take a few minutes to get a good set of execution plans built, but the server just wasn’t recovering. Doing diagnostics on the server, I discovered that queries using sp_executesql were just knocking the server over. Ahhh-ha! Those were dynamic SQL strings, and those would probably get new execution plans built every time. The trace flag -T2301 failed us there, so we had to rip it back out.
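One quick way to confirm that kind of plan churn yourself (a sketch assuming the SQL 2005/2008 plan-cache DMVs): sp_executesql plans show up in the cache with an objtype of Prepared, and a cache full of Prepared plans with usecounts of 1 means you’re compiling constantly instead of reusing.

```sql
-- Look for single-use prepared plans piling up in the cache -
-- a sea of usecounts = 1 rows is the smoking gun for compile churn.
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.[text]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE cp.objtype = 'Prepared'   -- sp_executesql statements land here
ORDER BY cp.usecounts;
```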
How much would you pay to avoid a scenario like this? $19.95? $29.95? But wait, there’s more!
After ripping out the trace flag, the server stabilized around 20-30% CPU, but those numbers were too high for a weekend night. When they came up to full load during the week, the server fell over, averaging 100% CPU for a few minutes at a time. The problem query was doing a union between three full text searches, but before you scream bloody murder about a union, even running the three searches independently was taking 60-70% of the time they took when unioned together. We were screwed. The guys had to make a change to their application and cache data on the web server’s hard drive in order to sustain their normal load.
Ugh. As a DBA, that’s a failure when the app guys tell me that. This is an application that used to live fine on a single box, and now, even with SQL 2008 on its own hardware, the app guys have to work around a weakness in SQL 2008. Ouch. I take that pretty personally.
The lesson: capture a full trace before you upgrade SQL Server.
The lesson: before you upgrade, capture a full trace of your server’s load and replay it against the new version. Analyze before and after duration times and CPU numbers for both versions, and identify the list of queries that run slower. Examine how often they actually run in production, and think about how that’s going to affect your load. This was my own failure – after working with the guys at StackOverflow and seeing how tight their queries were, it seemed like the slowest queries on SQL 2005 were still in pretty good shape. Unfortunately, hidden below the surface were queries that ran in 50-75ms on SQL 2005 but ballooned to over 1 second on SQL 2008, and went much higher still under load.
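You don’t have to babysit Profiler to capture that baseline – a server-side trace is lighter-weight. Here’s a minimal sketch (the events, columns, and file path are just examples; a real pre-upgrade trace would add more events and filters):

```sql
-- Minimal server-side trace: batch and RPC completions with text + duration.
DECLARE @TraceID INT;
DECLARE @on BIT;
DECLARE @MaxSize BIGINT;
SET @on = 1;
SET @MaxSize = 1024;   -- max file size in MB before rollover

-- Option 2 = TRACE_FILE_ROLLOVER; path is illustrative (no .trc extension)
EXEC sp_trace_create @TraceID OUTPUT, 2, N'X:\traces\pre_upgrade', @MaxSize;

EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- SQL:BatchCompleted, Duration
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;  -- RPC:Completed, TextData
EXEC sp_trace_setevent @TraceID, 10, 13, @on;  -- RPC:Completed, Duration

EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace
```

When you’ve captured a representative window, stop it with sp_trace_setstatus @TraceID, 0 and you’ve got a file you can replay against the new version.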
Furthermore, a simple trace replay still won’t give you the full picture because traces don’t throw the same amount of load at the replay server in the same time. In a web server scenario, you may have a hundred queries come in simultaneously, and you want to see exactly how the new server will be affected by that – but replaying a trace with the Microsoft native tools won’t give you that answer. For that, you need a benchmarking tool like Quest Benchmark Factory or HP Mercury LoadRunner, etc – something that can capture a load and then replay it with the same velocity and bandwidth.
Do I like SQL Server 2008? Yeah. But do I wish we could have avoided what happened this week with StackOverflow? Hell yeah.
Add these two things together:
- SQL Server’s Project Gemini is moving BI into Excel.
- The next version of Office will be available inside the browser.
If that doesn’t scream hosted-BI, I don’t know what does.
This is more fuel for Jason Massie’s belief that cloud services will kill the DBA, and I gotta tell you, it’s starting to look more and more convincing. If I was a DBA who made my living solely on SQL Server Analysis Services, I would start making a Plan B right now. SSAS won’t go away for years – there will be plenty of corporations who won’t want to host their private decision-making data in Cloud v1 – but it isn’t a rosy picture.
Now that we’ve unveiled the new SQLServerPedia.com wiki for SQL Server training, I want to talk about what makes a good wiki article.
A good article doesn’t sell anything except good techniques. When I read a magazine article, watch a TV show or sit through a presentation, I want a clear delineation between education and sales. I don’t like product placements. I can tolerate the fact that Aston Martin is paying to be in the James Bond movies, but I’ll quit watching if he ever says anything like, “Thank goodness I was driving the new Aston Martin DBS, or else they’d have gotten away.” Advertisements don’t belong inside educational content. Yes, Quest Software is paying the bills for SQLServerPedia, but this content is for education. Any sales-y looking content in articles will not be tolerated – whether it’s for Quest or any other vendor.
A good article links to other articles. Instead of just tossing out phrases like “rebuild your indexes” or “don’t put too many indexes on a table”, a good article links to other articles giving in-depth explanations from one topic to another. When we write or edit articles, I want to encourage authors to search the wiki for related articles that already exist and make sure to link to ’em when appropriate.
A good article links outside the site to great resources. Even Wikipedia knows you can’t get everything you know inside Wikipedia: most articles include a lot of links at the bottom of the page pointing to more resources. Likewise, we’ll promote the same style at SQLServerPedia, encouraging authors to link to high-quality resources.
A good article links to knowledgable people. If we’re talking about SQL Server storage, we should link to experts on the concept who frequently write about it. When I talk about storage and virtualization, I link to Scott Lowe’s blog. He’s a great writer, but he’ll probably never write for SQLServerPedia, because he doesn’t focus on SQL Servers. However, that doesn’t mean that our readers wouldn’t gain from reading his blog. More than that, we can link to experts who are on Twitter, which gives our readers a chance to see inside the minds of the people who are developing and advancing the features they use.
A good article doesn’t rehash commonly available content. If people want a walkthrough of Books Online content, there’s plenty of SQL Server sites out there already. A good SQLServerPedia article explains the reasoning behind best practices, talks about when to do something different, and gives real-world expertise in a fun way.
A good article points to code that’s easy to reuse. When we talk about backing up databases or defragmenting indexes, we want to link to the SQLServerPedia Code Library section and put the code there. That code can live and grow under the watchful eye of Jason Massie, the Code Editor.
A good article includes screen shots when they add real value. I’ve seen a few SQL web sites where the author puts up a screen shot after every single instruction. “Right-click on the server name, and your screen will look like this…” That’s great for people who want to learn a system they don’t have access to, but we’re writing for professionals here, not people getting college degrees by mail. We want to convey the most information possible in the least time possible.
A good article includes video and a slide deck. Now I’m just talking crazy pie-in-the-sky, but I’ll put it out there. When I’ve got a presentation that I can donate to the cause, I’ll record a copy and put the video and the slide deck in the article. If you’d like to give that presentation to your local user group, go for it. We’re all about educating the community, not keeping knowledge hidden away under baskets.
The hundreds of articles in SQLServerPedia aren’t all there yet – heck, a lot of them are a page long or less. But holy cow, I gotta tell you, it’s a lot easier to launch a wiki when you’ve got a big start, and it makes the editors’ jobs easier.
And on another note, here’s a couple of reactions to SQLServerPedia from the web:
- Denny Cherry talks about the SQLServerPedia launch
- Marlon Ribunal calls SQLServerPedia “the best community-owned SQL Server training resource.” I love the SSP badge! I’m waiting on our graphics department to produce web badges, but hey, they should piggyback off Marlon’s work.
I work for a company whose IT department has a group policy enforced so that our screens lock after 10 minutes of inactivity. That’s great in theory, but I run my corporate workstation inside VMware, which means I’m back and forth between different windows all the time. When I go away from my Windows VM for 10 minutes and come back, the screen is locked – pain in the rear.
Jason Hall of Quest pointed out the fix: a free screensaver prevention app called Caffeine. On Windows 2000 and Windows XP, it does a left-shift-up event every minute, thereby defeating screen saver lockouts. Add a shortcut to it in your Startup menu, and presto, never get locked out again.
Sometimes you need an offsite database server in case something goes wrong, but you can’t afford a full-blown disaster recovery datacenter. Or maybe you’ve got some ideas that you’d like to try out with a big SQL Server 2005 box, but you don’t have the hardware sitting around idle. Or maybe you’d just like to learn SQL Server 2005 – sure, it’s not the latest and greatest, but it’s still the most popular version out in the wild. Now you’ve got a way to accomplish this for around $1 per hour.
With Amazon EC2, DBAs can “rent” virtual servers running Windows 2003 and SQL Server 2005. In this article, I’ll explain the five big steps required to turn on your own SQL Server in Amazon’s datacenters.
Step 1: Understand what “cheap” SQL Server hosting costs.
As of this writing (10/2008), here’s the smallest SQL Server you can get at Amazon EC2, what’s known as a “Standard Large” configuration:
- 7.5 GB of memory
- 4 EC2 compute units (2 virtual cores with 2 EC2 compute units each)
- 850 GB of storage
- 64-bit Windows 2003
- SQL Server 2005 Standard
- $1.10 per hour, or roughly $800 per month
Yes, that’s a lot of money, and no, it does not include your bandwidth. The additional bandwidth fees will probably come nowhere near $100/mo, but you can use the Amazon EC2 pricing calculator to estimate your numbers.
Let me put it this way: $9 for this much power for one day is a heck of a deal. I can do a lot of learning and experimenting for my $9. Even if I use it as a lab box one day per week, that’s still around $40/mo – not a bad deal, especially if I’m a junior DBA who wants a sandbox to break stuff. At $800/mo, well, we’re in Jaguar XF territory.
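For the curious, here’s the back-of-the-napkin math behind those numbers (the “$9 per day” figure assumes an 8-hour working day, which is my assumption, not Amazon’s):

```sql
-- Standard Large at $1.10/hour, three usage patterns:
SELECT 1.10 * 24 * 30 AS always_on_month,  -- $792.00: the "roughly $800/mo" figure
       1.10 * 8       AS one_workday,      -- $8.80:   the "$9 for one day" lab box
       1.10 * 8 * 4   AS one_day_per_week; -- $35.20:  the "around $40/mo" figure
```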
Step 2: Get an Amazon EC2 hosting account.
Sign up for an account at aws.amazon.com. It’s free to enroll, but you have to link it to a method of payment like your credit card or checking account. Your server usage will be deducted automatically from your account, so don’t blame me if you turn on a whole virtual datacenter and you can’t pay your rent.
After you’ve got the account set up, enable your account for EC2 (the virtual server hosting part) and S3 (where you can store your hard drives):
Step 3: Get the Elasticfox plugin for Firefox to manage your virtual EC2 SQL Servers.
Elasticfox is a browser-based way to manage your virtual army. It’s free, it’s open source, and it’s the easiest way to get started with EC2. There’s a guide on getting started with Elasticfox, but I’ll give you the highlight reel:
- Download the Elasticfox plugin and install it
- Launch Firefox and click Tools, Elasticfox.
- Click the Credentials button and input your Amazon Web Services access key and secret key. Click OK.
- Create a key pair (to encrypt your Windows admin login) by clicking the KeyPairs tab and click the button to create a new keypair. Type in anything for the key name, and save the certificate file.
- Build a set of firewall rules by clicking the Security Groups tab. In theory, you could skip this step and leave all your ports open, but come on. Click the Create Security Group button and create a group named SQL Servers.
- Click the Grant Permission button to set up a firewall rule. For testing purposes, you can leave the CIDR (source) IPs at 0.0.0.0/0, which means the entire internet. For production, you would want to restrict this access to your company’s subnet. In the Protocol Details, set up each of these:
- TCP/IP 3389 – remote desktop
- TCP/IP 1433 – SQL connections (if you want to connect to your cloud-based server using SSMS on your desktop)
Step 4: Start up a virtual EC2 SQL server.
Go back to the AMIs and Instances tab and your Elasticfox screen will look something like the below screenshot (you can click on it to enlarge). I’ve resized my columns to make it easier to see the instances I want:
In the screenshot, there’s an edit box at the right side where I typed in “sql” to help filter down the list of instances. Amazon has a ton of servers available, and you have to pay close attention to get the right one. Here’s a zoomed screenshot, and the highlighted one is the one I’ll be using:
Read the filename carefully: the “Anon” means it’s not using the extra-charge Windows Authentication Services, and the “v1.01” is the latest version available as of this writing (10/2008). Newer versions mean newer patches of Windows and SQL, so the newer the better.
Right-click on the instance you want and click Launch Instance. The next screen is full of pitfalls.
The Instance Type must be m1.large or greater. The default is probably going to be m1.small, but that won’t work. If you try to launch a SQL Server with m1.small, you’ll get this error:
The error says:
“EC2 responded with an error for RunInstances
InvalidParameterValue: The requested instance type’s architecture (i386) does not match the architec…”
The virtual image for SQL Server is a 64-bit machine, and you have to launch it with an InstanceType of m1.large or larger. This catches me all the dang time.
For the KeyPair dropdown, choose the certificate name you generated, and under Security Groups, move SQL Server over into the “Launch In” group.
Click Launch, and if all goes well, your instance will show up in the “Your Instances” list in the bottom of the screen. It takes a minute for the server to boot, but the Elasticfox screen doesn’t update on its own – you have to push the Refresh button manually to see if the server’s available.
Step 5: Connect to your new virtual EC2 SQL Server.
When the server’s State shows “running”, right-click on it and click Get Administrator Password. Elasticfox will ask for the key pair certificate file that we created earlier. I’ve had problems with it not always recognizing the file, so just try again and it’ll probably work. The administrator password will be saved to your clipboard. Windows doesn’t always allow pasting into the password field, so you may need to bring up Notepad, paste the password in there, and then look at that Notepad screen while you’re logging in.
Click on the server and click the Connect button in Elasticfox. Elasticfox starts the Remote Desktop client and directs it to the server’s public DNS name, which is going to be something completely forgettable. Don’t worry – if you’re planning to use this server for disaster recovery, you can assign it a permanent IP address and a better DNS name, and there’s plenty of instructions for that in the Amazon documentation.
When you start SQL Server Management Studio, you’ll either have to put in (local) for the server name to connect to, or start the SQL Server Browser service.
Before you create databases, go into Windows Explorer and take a look at your hard drive configuration:
In this screenshot, I’ve got two local drives, D and E, each with 420 GB. Cha-ching!
From here, the world is your oyster. You could set up database mirroring, and use this as a disaster recovery server. Be aware that SQL 2005’s database mirroring is not compressed, so your bandwidth charges may be higher. Instead, I’d suggest doing log shipping. The advantage to using log shipping is that you can compress it with Quest LiteSpeed, plus you don’t necessarily have to be running the SQL Server at all times. You can copy the files to a cheap non-SQL box at Amazon, and only start up the SQL Server once per day (or per week!) to apply the log files. (I see a blog post coming on that after PASS when things die down.)
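A hand-rolled version of that log shipping approach might look like this (database names, file names, and schedules here are all made up for illustration; LiteSpeed would substitute its own compressed backup commands):

```sql
-- On the production box, on a schedule (say every 15 minutes):
BACKUP LOG SalesDB TO DISK = N'\\fileshare\logs\SalesDB_20081024_1015.trn';

-- Copy the .trn files up to cheap Amazon storage. Then on the EC2 SQL Server,
-- whenever you spin it up (once a day, or once a week), apply them in order:
RESTORE LOG SalesDB FROM DISK = N'D:\logs\SalesDB_20081024_1015.trn'
    WITH NORECOVERY;   -- stay in NORECOVERY so later log backups still apply
```

In a real disaster you’d finish with RESTORE DATABASE SalesDB WITH RECOVERY to bring it online.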
Another great use: testing software. The whole reason I wrote this article today was that I had to test a new beta of Quest Change Director, and I needed a quick new environment to test it in.
I’ve got an upcoming project where I’m working with a European client on a SharePoint whitepaper, and both of us work for secure public companies with paranoid IT departments. Neither company wants to give VPN access to the others’ staff, so instead we can just build our lab in Amazon EC2 from scratch and both access it from anywhere on the planet. Everybody wins.
Just don’t forget to shut the server down when you’re done with it, or else you’ll keep paying by the hour!
I’ve mentioned Jeff Atwood of CodingHorror.com a few times over the years here, and it bears repeating: he writes a great blog for developers, and DBAs need to read it too. I also follow him on Twitter, and a couple of weeks ago he mentioned he was having problems deciphering execution plans:
Holy cow – I could actually help the world’s most dangerous programming blogger with something! I fired off a Tweet and started helping him read execution plans. One thing led to another, and next thing I know, I’m tuning StackOverflow‘s SQL Server for him.
Now, I’ve done a fair bit of performance tuning, but I should have known that tuning is a little different in Jeff’s world. The easiest way to explain it is by relaying the first thing out of somebody’s mouth when we start performance tuning:
- Data warehouse manager – “The nightly loads are taking 6-7 hours a night, and we need to get that number down.”
- Web site manager – “Our queries are timing out after 30-60 seconds.”
- Jeff Atwood – “This query is taking 150ms, and I want it faster.”
And I should mention that Jeff’s written blog entries like:
- Who Needs Stored Procedures, Anyway? (and I would be remiss if I didn’t point out that he uses Quest Toad in that blog entry, baby!)
- My Database Is A Web Service (which contains this priceless quote: “Database performance is almost always the bottleneck anyway.”)
Nooo pressure. No pressure.
But wait, there’s more. Usually, when I go into a shop that’s never had a DBA, the server is a mess. Tables, views, field naming conventions and formats all over the place, no consistency, nobody knows if anything’s actually in use, etc. Not here. The schema on this thing was tighter than the Pope’s poop chute, as they say. (Really, they do. “They” being my parents.)
There is almost no low-hanging fruit here. Well, I mean, there’s a little, but we’re not talking big fruit. Berries. And they’re eight feet up. I got all the way down to comparing specific query plans on 2005 vs 2008 to find out why exactly one table was in the wrong join order on the execution plan to save 80ms on a query. I fixed it with SQL Server’s trace flag 2301 to get it to spend more time building the execution plan, but when we went live, the server couldn’t handle the load on queries using sp_executesql, so I had to rip that back out. Dammit, I want my 80ms back, and I gotta figure out a way to get it.
Update 10/30 – it was in fact slower, but it wasn’t me. I blogged about the problems we’re having with SQL Server 2008 full text search performance.
Last Monday, I talked about the problems with SQL Server training as it existed last week.
Starting today, things are changing.
Today, you can get a sneak peek at the new SQLServerPedia, but I’m not going to give you the link until the end of the post because I want you to read what – HEY – WAIT – QUIT SCROLLING DOWN! Finish reading this, you cheating cheater.
Okay, where was I? Oh yeah, I want you to read what’s different about this site.
Real experts “own” each section and put their name on it.
Each topic area of the site, like Performance Tuning or Architecture & Configuration, has a name and a face. That person is responsible for the accuracy and quality of the content in that section. These people aren’t sitting around copy/pasting content from Books Online like some other SQL blogs I’ve seen out there – these are real SQL Server experts with real jobs and real experience solving really big problems. This isn’t “BigDaddy31” or “TriggerzRool” on Experts Exchange – every one of our experts has at least 10 years’ experience with SQL Server.
If you’ve got something to say about a page, you can say it.
Every page in the wiki has a “Discussion” tab where you can give your feedback straight to the editors – and to other readers. You can talk about your experiences with a particular topic, suggest things the editor should do to enhance the page, or trash talk, I suppose. But if I was you and I had something to say, I’d read the next feature…
If you want to enhance a page, you can do that too.
If you want to add content to a page, just create an account on SQLServerPedia.com and click the Edit button on a page. Add your information. Hit Save. It’s just that easy. An editor will moderate your changes, make any improvements they want, and then approve your content to go live. That’s what makes us different – real people are double-checking the answers before they go live, because we don’t want a bad solution misleading a lot of DBAs. When you see something in the SQLServerPedia wiki, you’re going to know that it’s passed the BS test: all of the editors have been working with SQL Server for at least a decade, and they know what’s the right thing to do and what to avoid.
We’re hosting video and audio podcasts of SQL Server training presentations.
You can subscribe to our SQL Server podcasts using your iPod or Zune, or you can just watch streaming versions in your web browser. We’ve got video and audio copies of lots of technical presentations that we’re starting to push through our podcast feed, with new ones coming out twice a week. If you’ve got a presentation that you’d like to record, let me know and I’d love to help you record it and host it. There’s so many DBAs that are desperate for training and knowledge, and this is our chance to make a difference.
Everything is free – no registration required.
That’s right, free SQL Server training. Why should you have to pay to access the work of the community? If you want to join in and help the wiki, then you’ll need to register, but as long as you’re reading, you can keep your tin foil hat on your head and your credit card in your pocket.
Here’s to the Editors, the people making a difference.
These are the people who are pushing the buttons to make it happen: the editors at the new SQLServerPedia.com. These are seriously experienced database experts who are taking the time out of their lives to help build community knowledge. None of these guys are getting paid for this – they all volunteered because they believe in giving back to the community. I can’t emphasize enough how thankful I personally am to these guys, and how thankful our readers should be. They’re helping us all become better DBAs, and nobody’s charging admission.
And here’s to the people who helped get it off the ground.
We couldn’t do this without the dedicated contributions of Andy Grant and Christian Hasker of the Quest Software SQL Server team. Andy and Christian pulled strings to get the entire SQL Server training content from our former product KnowledgeXpert ported over into the wiki. If you wanted this content before, you had to pay to get it, and now Quest is giving it away for free in the wiki. That’s the kind of commitment Quest has to getting DBAs trained, and that’s why I came to work for them. The SQL Server business unit is about one thing: making DBAs look good, and that’s where the new SQLServerPedia comes in.
Okay, okay, here’s the link.
To get a sneak peek, go to http://SQLServerPedia.com/wiki. We’re doing a soft launch right now, and a full launch at PASS – at that point, the main pages on SQLServerPedia.com will have the same top navigation bar as the wiki does right now.
One thing you need to know: during the soft launch, the “Search” box at the top right only shows results from the blog, not the wiki. That’s on purpose – we don’t want to be in Google’s indexes just yet. We’re giving the editors time to poke around in their content, and we’re working with a search engine team to optimize our HTML.
Every day this week, I’ll blog about a particular area of the site that stands out for me – something we’re doing that takes training to the next level. Tomorrow, I’ll be talking about our vision of what a good wiki article means.
Go check out the site and let me know what you think!