When I work with clients, the thing I hear the most is:
“WOW! That’s really cool – where’d you get that?”
While sometimes they’re talking about my 128GB USB3 thumb drive or my PlugBug Mac power cord, most of the time they’re talking about SQL Server goodies like completely free books, T-SQL cleanup and tuning tools, or ways to decrypt encrypted stored procedures.
The SQL Server community has an overwhelming amount of free stuff, but…well, let’s be honest. A lot of it isn’t very good, and even some of the good stuff has been abandoned by the author or has huge hidden drawbacks.
For the newest entry in our SQL Server training videos, I decided to highlight the best freebies in much more detail. I don’t just tell you where to download them – I show you how to use them the way I use them every day to manage database servers.
Buy The Best Free SQL Server Downloads and use coupon code BLOGREADER by Friday, August 2nd, and you’ll get 1/3 off – bringing it to under $19. Enjoy!
Recently, Jes asked the team an index tuning question: “If a query has an index hint in it, will the optimizer ever suggest a missing index for that query?”
I immediately loved the question because I’d never really thought about it before. I typically think of index hints as a very risky game and avoid them whenever I can – after all, if someone drops the index you’ve hinted, every query hinting the now-nonexistent index will start to fail. (That’s a really bad day!)
Even so, some people love index hints. And some folks inherit code that’s already littered with index hints, and they may not be able to change it. In those cases, are the missing index requests in SQL Server DMVs going to be skewed by all those hints?
Testing missing index requests
There’s a great Books Online page documenting limitations of the missing index feature, but I’m a huge fan of getting my hands dirty and testing these things myself. I powered up a virtual machine with SQL Server 2012 and the AdventureWorks2012 sample database to take a look.
First, I ran a simple query that I knew would generate a missing index request:
SELECT FirstName
FROM Person.Person
WHERE FirstName LIKE 'S%';
GO
Sure enough, I got a missing index request. SQL Server scanned an existing nonclustered index to satisfy my query, but pointed out that a new nonclustered index just on the FirstName column would be just perfect for that query.
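You can also see these requests server-wide instead of plan by plan by querying the missing index DMVs. Here’s a minimal sketch – the three DMVs are documented, but the column choices and ordering are just my own preferences:

```sql
-- Missing index requests recorded since the last SQL Server restart.
SELECT  d.statement AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON g.index_group_handle = s.group_handle
ORDER BY s.user_seeks DESC;
```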
I wanted to preserve the exact plan it had selected, so I reran the query with an index hint – however, I hinted the exact nonclustered index that it had decided to scan anyway:
SELECT FirstName
FROM Person.Person WITH (INDEX([IX_Person_LastName_FirstName_MiddleName]))
WHERE FirstName LIKE 'S%';
GO
And look at that! No missing index request on that second query. The index hint “broke” the missing index feature. SQL Server didn’t point out that an index just on FirstName would help the query out, even though it was using the exact same execution plan it used before where it DID point out that the plan wasn’t perfect.
Not so fast, index tuner
At this point, I was pretty excited. What an interesting point! I’ve never read about this before. What a delicious geeky snack this is! Then I came back to the real world and thought, “Well, it’s kind of odd that I haven’t read about this. I should look at those published limitations again.”
The missing index feature in SQL Server is pretty complicated. There are a few things about it that have burned me a couple of times, so I’ve learned to use it carefully. As soon as I looked at the list of limitations, I had a pretty good idea I might have just been fooled.
Let’s go back and look at the queries we ran before in a different way. If we take a closer look at the first query’s execution plan (this was the query without an index hint that generated a missing index request), we can see that it received “FULL” optimization:
If we do the same thing for the second query (it had an index hint and did NOT generate a missing index warning), we see something different:
When we added the index hint, we did something subtle: we simplified the process of optimization for SQL Server. We gave it fewer choices– in fact, given the query there was only ONE choice for optimization. Since it only had one choice, it did a “TRIVIAL” optimization, which is more lightweight.
One side effect of trivially optimized plans is that they don’t generate missing index requests. This is one of those documented limitations we talked about earlier.
So in this case, using an index hint didn’t generate a missing index request – but that was a side effect of the change the hint made to the optimization process.
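You don’t have to open each plan by hand to check this: the optimization level is recorded in the plan XML as the StatementOptmLevel attribute. A sketch like this pulls it out of the plan cache (so it only covers whatever happens to be cached):

```sql
-- Pull the optimization level (TRIVIAL or FULL) out of cached plans.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 20
    st.text AS query_text,
    qp.query_plan.value('(//StmtSimple/@StatementOptmLevel)[1]', 'varchar(25)')
        AS optimization_level
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.query_plan IS NOT NULL;
```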
The second test: A more complex query
To find out if this holds true for queries using full optimization with index hints, I just changed my query up a little bit.
Here’s a little trick: It’s not hard to write a difficult-to-optimize query in AdventureWorks. All you have to do is use one of the built-in views. (Sorry, views, you just often make things messy. It’s just the truth.)
Our new query for testing is:
SELECT FirstName
FROM [HumanResources].[vEmployee] WITH (INDEX([IX_Person_LastName_FirstName_MiddleName]))
WHERE FirstName LIKE 'S%';
GO
This query gets “FULL” optimization. And it generates the missing index request for the Person.Person table, even though we’ve used an index hint on that table.
Missing indexes are complicated, index hints are risky
To sum up: using an index hint won’t necessarily prevent you from getting a missing index request. In some cases it may change a query from “FULL” optimization to “TRIVIAL” optimization, in which case the missing index request won’t be generated – but that’s true of any query that gets “TRIVIAL” optimization, hint or no hint. (Trivially optimized plans have some other quirks, by the way. One notable one: they won’t consider indexed views for execution, either.)
The bigger point is that the missing index request feature in SQL Server is very helpful, but it’s not perfect. It gives you some of the “big picture”, broad stroke requests, but it really just gives you a first direction to look in.
In order to tune a workload, you also need to know a lot about the queries that actually run. You may have very frequently run queries that get “TRIVIAL” optimization because they’re simple – whether or not they have an index hint. If those queries run frequently enough, it can absolutely be beneficial to tune your indexes for them, even though they never make an appearance in the missing index DMVs.
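One way to find those frequently run queries is to sort the plan cache by execution count. A rough sketch – the plan cache only covers what’s currently cached, so treat the numbers as a sample, not a census:

```sql
-- Most frequently executed statements currently in the plan cache.
SELECT TOP 20
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC;
```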
I really enjoyed looking into this question because it reminded me that every change can have unexpected side effects. Maybe a hint makes a query faster today, but who’s to say it’s the best optimization decision for all time – and what else might you impact along the way?
Every release lately, Microsoft has been turning the screws on Standard Edition users. We get less CPU power, less memory, and few (if any) new features.
According to Microsoft, if you want to use more than $500 worth of memory in your server, you have to step up to Enterprise Edition. Seriously? Standard Edition licensing costs about $2,000 per CPU core, but it can only access 64GB of memory? That’s ridiculous.
SQL Server New Features that Aren’t In Standard Edition
Take just a quick glance at the SQL Server 2014 edition feature grid and you might be shocked at what Standard Edition doesn’t allow:
- Database snapshots (a huge lifesaver when doing deployments)
- Online reindexing, parallel index operations (wouldn’t you like to use more than one core?)
- Transparent database encryption (because only enterprises store personally identifiable data or sell stuff online, right?)
- Auditing (guess only enterprises need compliance)
- Tons of BI features (because hey, your small business doesn’t have intelligence)
- Any non-deprecated high availability feature (no AlwaysOn Availability Groups – you get database mirroring, but that’s marked for death)
Will We Get New Pricing and Licensing by the Release Date?
Every now and then, I hear managers and DBAs react with shock about how limited Standard is, and how much Enterprise Edition costs – $7,000 per CPU core.
Sometimes they even say, “That’s ludicrous! If I was Microsoft, there’s no way I would do it that way. And we’ve got really savvy developers – I bet we could even write a database engine that could do most of what we need.”
Okay, big shot. Time to put your money where your mouth is.
The world is full of open source databases that are really good. You’re not the only ones frustrated with what Microsoft’s done to SQL Server licensing, and there are vibrant developer communities hard at work building and improving database servers.
What’s that, you say? You’re too busy? You’d rather keep paying support on your current SQL Server, and keep working on incremental performance improvements to your code and indexes?
Yep, that’s what I thought.
Microsoft won’t change its tack on SQL Server licensing until you start leaving. Therefore, I need you to stop using SQL Server so they’ll start making it better. You know, for me.
Download the SQL Server 2014 Trial for Free
If you’d like to play with Hekaton, clustered column store indexes, or the other new features in 2014, now’s your chance. You can download the trial edition for free, but just keep in mind that we have absolutely no idea what features will be included in each edition when the release date comes.
sp_BlitzIndex™ just can’t keep a secret. That’s a feature, not a bug. I wrote sp_BlitzIndex™ specifically so that it would rat out some things that SQL Server is doing behind the scenes to secretly help you out – because it’s really better if you know about it.
What are secret columns?
SQL Server needs a unique way to navigate to every row in an index. It also needs to maintain a relationship between every nonclustered index and its base table. In order to accomplish these two tasks, SQL Server always makes sure to add one or more columns to a nonclustered index. If you don’t add the columns yourself, it goes ahead and does it secretly. (“Secret columns” is a term I made up, because a term didn’t exist yet.)
In most cases, people create nonclustered indexes on tables that have clustered indexes. When this happens, SQL Server will make sure that all of the columns from the clustering key of the table are ALSO in the nonclustered index. Exactly how it happens is a little complicated. Here’s a quick summary of the rules involved:
- If the nonclustered index is NOT unique, SQL Server makes sure that the columns from the clustering key make it unique behind the scenes
- This is done by promoting the clustered key columns to the key of the nonclustered index (otherwise they can be “included” columns)
- If the clustered index columns are NOT unique, an additional secret column called a “uniquifier” will have already been added to the clustered index – and that sneaks into the nonclustered index, too
This made my head spin around the first time I tried to sort it all out. Please don’t try to memorize that!
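If it helps, here’s a tiny sketch of the rules in action. The table and index names are made up purely for illustration:

```sql
-- Hypothetical table. A non-unique clustered index would need a
-- uniquifier; this one has a unique clustered PK, so it doesn't.
CREATE TABLE dbo.OrdersDemo (
    OrderID    int  NOT NULL,
    CustomerID int  NOT NULL,
    OrderDate  date NOT NULL,
    CONSTRAINT PK_OrdersDemo PRIMARY KEY CLUSTERED (OrderID)
);

-- A non-unique nonclustered index on CustomerID.
CREATE INDEX IX_OrdersDemo_CustomerID
    ON dbo.OrdersDemo (CustomerID);

-- Effective structure of IX_OrdersDemo_CustomerID:
--   key = (CustomerID, OrderID)
-- OrderID (the clustering key) is secretly promoted into the key
-- to make each nonclustered index row unique.
```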
sp_BlitzIndex™ helps make this easier
I never expect anyone to remember this stuff. I built sp_BlitzIndex so that it would give you the information you need about what’s going on in a specific index without having to memorize a whole lot of rules.
Let’s take a look at a specific example. The AdventureWorks sample database has a table named Production.WorkOrderRouting. We can take a look at the indexes, missing indexes, columns, and foreign keys on this table in sp_BlitzIndex™ by running the command:
exec sp_BlitzIndex @database_name='AdventureWorks', @schema_name='Production', @table_name='WorkOrderRouting'
This shows us that the table has two indexes. One of these is the clustered index:
The definition indicates this with the code “[CX]”. It’s also a primary key, indicated by “[PK]”. Primary keys are always unique in SQL Server, so this is a unique clustered PK. Because this is the clustered index, we know it contains all of the table’s data, sorted by three key columns: WorkOrderID, ProductID, OperationSequence.
Where are the secret columns?
In this case, the secret columns are all in the nonclustered index. The nonclustered index looks very narrow and simple at first. It has only one key column, ProductID:
Scrolling over, sp_BlitzIndex™ claims that things aren’t so simple. There are three secret columns in this nonclustered index, and they have been put in its key:
Those are the exact three columns from our clustered index! (WorkOrderID, ProductID, OperationSequence)
Exactly how many columns are in this nonclustered index?
You may have noticed that the ProductID column appears twice in sp_BlitzIndex™ for this nonclustered index. It was defined as a key column, but now we’re saying that it’s a secret column as well.
Question: Does that mean the column is stored twice? No, the column is NOT stored twice. The effective definition of this nonclustered index is that it has THREE columns in it (not just one, as its creator might have thought), in the following order: ProductID, WorkOrderID, OperationSequence.
SQL Server will only ever add columns from the clustering key to an index if you have not specified them in the index. It will never have to store a column twice. I repeat, there is no penalty if you list a column from the clustered index in your nonclustered index. (Yay!)
To determine the columns that will be in an index, look first at the index definition. Then add in the “secret” columns which haven’t already been included.
Why doesn’t sp_BlitzIndex™ remove “duplicate” secret columns?
Well, it’s complicated. I made this design choice for a few reasons:
- I wanted to preserve the definition as the index was created
- The definition may state a column as an include that is made into a key column by SQL Server secretly. The easiest way to show this is to show both
- I wanted to show the secret columns on their own so it would be more clear that they will stay in the nonclustered index even if you remove them from the stated definition
A more complex example
Maybe you’re not sure if I’m telling the truth about these secret columns. The good news is, I can prove it. First, let’s look at a more complex example.
We’re sticking with the Production.WorkOrderRouting table. What happens if we add a more complex nonclustered index? This new nonclustered index has two different columns from the clustering key — one as a key column, and one as an included column. I create the new nonclustered index with this command:
CREATE INDEX ixtest ON [Production].[WorkOrderRouting]
    (PlannedCost, ProductID, ActualCost)
    INCLUDE (ActualEndDate, OperationSequence);
GO
sp_BlitzIndex™ shows that the new index is assigned IndexID=4 for the table.
Definition: [3 KEYS] PlannedCost, ProductID, ActualCost
[2 INCLUDES] ActualEndDate, OperationSequence
Secret Columns: [3 KEYS] WorkOrderID, ProductID, OperationSequence
There’s some overlap here. To interpret this, we know:
- The key columns in the user definition will be present in the order specified
- Secret columns are being added to the key of the nonclustered index in this case. So any secret column that was listed as an “included” column of the user definition will be promoted by SQL Server.
Based on the rules, the functional definition of this index is:
5 Keys = PlannedCost, ProductID, ActualCost, WorkOrderID, OperationSequence
1 Include = ActualEndDate
We effectively had WorkOrderID added to the key (it wasn’t listed), and OperationSequence promoted to the key from the include.
Let’s prove it!
To prove it, we’re going to use a handy new DMV in SQL Server 2012 and the built-in DBCC PAGE command.
These are both undocumented tools. Please do not run this in production. Seriously, I’m not kidding. There are bugs with this stuff sometimes, OK? Why risk it? This is suitable for AdventureWorks on a test instance far from production, your boss, and your resume.
First, I’m going to find a single page from the leaf level of this new nonclustered index. I love the new sys.dm_db_database_page_allocations DMV because it makes this super easy, as long as I know that my new index has ID=4 (I got that from sp_BlitzIndex™ earlier):
SELECT TOP 1
    allocated_page_page_id,
    allocated_page_file_id,
    page_type_desc,
    page_level,
    next_page_page_id,
    previous_page_page_id,
    is_allocated,
    is_mixed_page_allocation
FROM sys.dm_db_database_page_allocations (
    DB_ID('AdventureWorks'),                  /* database_id */
    OBJECT_ID('Production.WorkOrderRouting'), /* object_id */
    4,                                        /* index_id */
    NULL,                                     /* partition_id */
    'detailed'                                /* mode */
)
WHERE page_type = 2     /* index page */
    AND page_level = 0  /* leaf of the index */
ORDER BY page_level DESC, next_page_page_id DESC;
GO
Next, I have SQL Server dump the contents of that index page to my screen, using these (not officially documented) commands:
--This trace flag prints DBCC PAGE data to our screen.
--Otherwise it'll just go to the error log.
DBCC TRACEON (3604);
GO

--Plug in the page:
--I'm looking in file #1
--Page 24882
--Dump style = 3
DBCC PAGE('AdventureWorks', 1, 24882, 3);
GO
And VOILA! I get back a dump of all sorts of information about what’s on that single page of data – including exactly what’s being used as keys and includes for that index!
Were we right?
Sure enough we can validate our theory. The “secret columns” put a new column in our key and promoted one column from the include.
The good news is the order of the keys we specified was preserved, and no column had to be stored twice. That’s awesome!
The bad news is that SQL Server didn’t tell us about this on its own. We had to use a special tool to figure it out. And while this makes for something of a party trick, it’s probably not going to fascinate the crowd at your next family reunion.
Frequently asked questions
I’m overwhelmed! What if I don’t get this? Breathe deep, I promise it’s OK. If you’re just getting started, that’s understandable. Just remember that the columns in “secret columns” will be added to the nonclustered index if they aren’t already there. That’s all you really need to get going.
Is it good to have all those columns in the clustering key? Not necessarily. You want the clustering key to be narrow – having a lot of columns, or even a few large columns, can bloat every index on the table and cause lots of problems. That said, it’s not always bad to have more than one narrow column in a clustering key, particularly if together they create a unique, static key that is useful for range queries and has a healthy write pattern.
Isn’t it actually more complicated than this?
Isn’t this complicated enough? Sure, “row identifiers” (“RIDs”) come into play for heaps, and there are those pesky “uniquifiers”. That’s a bit much to explain in one post. (If they impact you, sp_BlitzIndex™ will show you those, too.)
How do I get sp_BlitzIndex™? Download sp_BlitzIndex™ here.
I recently gave two webcasts on database mirroring – Mirror Mirror on the Server, Who is the Principal of Us All? and Mirror Mirror on the Server, Who is the Mirror of Us All? - which generated a lot of great discussions!
There are a few topics that have come up over and over again, so here are my answers to your database mirroring frequently asked questions!
- What versions of SQL Server is database mirroring available in?
- SQL Server 2005, 2008, 2008R2, and 2012
- What editions of SQL Server is high safety (synchronous) available in?
- Standard and Enterprise Edition.
- What editions of SQL Server is high performance (asynchronous) available in?
- Enterprise Edition only.
- Do the principal and the mirror need to be on the same version?
- You can have the principal be one version – say 2008R2 – and the mirror another – say 2012. This is how an upgrade with minimal downtime can be accomplished using mirroring! However, once you fail over to the mirror, you can no longer fail back.
- Do the principal and the mirror need to be on the same edition?
- To be fully supported by Microsoft, yes.
- Can multiple databases on the same instance be in mirroring sessions?
- Yes! Each database gets its own mirroring session, configured independently.
- How do I mirror the system databases (master, model, msdb, tempdb)?
- You can’t! This isn’t supported.
- If I set up mirroring, do I still have to take full, differential, or transaction log backups?
- Yes! Mirroring increases the availability of your databases. It is not a substitute for regular backups, however.
- How does index maintenance (rebuilds) affect the mirror?
- Transactions that fill up the log can affect the performance of mirroring: as the amount of log generated increases, so does the amount of data that must be sent to and committed on the mirror. If performance is crucial, you may want to do index maintenance more frequently, so each run generates less log. If large transactions such as batch inserts are affecting performance, break them into smaller transactions.
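One way to keep rebuild-generated log traffic down is to only touch indexes that actually need maintenance. A sketch using the documented fragmentation DMF – the thresholds here are common rules of thumb, not gospel:

```sql
-- Find rebuild candidates instead of rebuilding everything.
SELECT  OBJECT_NAME(ps.object_id) AS table_name,
        i.name AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON ps.object_id = i.object_id
   AND ps.index_id = i.index_id
WHERE ps.avg_fragmentation_in_percent > 30  -- heavily fragmented
  AND ps.page_count > 1000                  -- skip tiny indexes
ORDER BY ps.avg_fragmentation_in_percent DESC;
```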
What other questions do you have?
As businesses grow, many teams start considering scale-out database solutions, but they’re cautious – they want to know how to keep data up to date in multiple places. In theory, you can use log shipping, mirroring, or AlwaysOn Availability Groups. While those answer the literal question, what people really mean is usually very different. The people asking aren’t typically concerned with creating a read-only replica of Site A over at Site B – they’re interested in being able to write data anywhere and read it back from anywhere. They want a database that lets them read or write data on any server transparently.
SQL Server Bi-Directional Transactional Replication
First, this has more syllables than any feature has a right to have. Second, this feature is limited to two servers. It’s relatively simple and seems like a good enough option, especially since it’s really just two transactional replicas that point at each other. The downsides of SQL Server Bidirectional Transactional Replication are that:
- The feature uses too many syllables.
- Only two servers can be involved.
- Schema changes are necessary – IDENTITY columns won’t work like you think.
- Did I mention that only two servers can be involved?
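To see why IDENTITY columns need attention, consider the classic workaround: stagger the seed and increment so the two servers can never generate the same value. A hypothetical sketch (table and column names are made up):

```sql
-- Server A: generates odd values (1, 3, 5, ...)
CREATE TABLE dbo.Widgets (
    WidgetID int IDENTITY(1, 2) NOT NULL PRIMARY KEY,
    Name     nvarchar(50) NOT NULL
);

-- Server B: generates even values (2, 4, 6, ...)
CREATE TABLE dbo.Widgets (
    WidgetID int IDENTITY(2, 2) NOT NULL PRIMARY KEY,
    Name     nvarchar(50) NOT NULL
);

-- Both servers can now accept inserts without colliding, but
-- every table with an IDENTITY column needs this treatment.
```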
SQL Server Merge Replication
This is largely used in server-to-client environments: you have a SQL Server Express database on field laptops, and those laptops are synchronized infrequently. Merge replication has the advantage of smarter conflict resolution than other SQL Server replication options (which typically explode and stop working). Merge replication topology assumes there’s still only one master server, even though writes can be synchronized in both directions. So, yes, in theory this could work, but data modifications won’t be replicated immediately, and a large set of changes could make a merge take a long time. In summation, merge replication probably won’t meet your needs because:
- Relies on a single master server.
- Replication is typically batched, not continuous.
- Master failover happens outside of replication.
Yes, I understand that merge replication can be run continuously. Search StackOverflow for merge replication questions and you’ll see what I mean.
SQL Server Peer to Peer Replication
In the SQL Server world, we only have one option for having multiple masters: SQL Server peer to peer replication (okay, there’s a second if you count merge replication). We’ve talked about peer to peer replication before in terms of conflict resolution, managing availability, and as a scale out solution. If you’re interested in how SQL Server peer to peer replication might work for you, feel free to check out those articles. If you’re a bit lazy, here’s the 30,000 foot overview:
- It’s complicated.
- It’s expensive (Enterprise Edition and schema changes are required).
- It probably won’t solve your problem.
- Every node has to be able to handle the write load of all nodes.
What Other Options Are There?
While it’s possible to use SQL Server, it’s always good to consider other options – especially since you need to change your data model to support peer to peer replication. If not SQL Server, what else is there?
Although implementing multi-datacenter writes seems like a difficult task, it’s already possible – as long as you’re willing to make some changes. If you’re willing to consider changing your schema, what about the possibility of changing your underlying data storage system?
Before closing this browser window, hear me out.
The features of a software product are what drive the technologies used for implementation. When you need to implement a highly available SQL Server, you choose between a few different options based on a set of requirements. Why shouldn’t that decision extend to the database itself?
Stretch that Database
Different databases have different capabilities. Databases like Cassandra, HBase, and Riak offer different levels of multi-server and multi-site durability.
HBase offers strong consistency – data is written to multiple servers synchronously, just like SQL Server AlwaysOn Availability Groups using synchronous replicas. Using HBase gives you great single site durability – every write sent to the database server is committed to disk before the client receives a write acknowledgment. There’s no eventual consistency to worry about; HBase is strongly consistent within one datacenter.
If you want disaster recovery for an HBase cluster, you need to deploy HBase replication. HBase replication works by pushing entire log edits through a replication queue – this maintains atomicity for all HBase transactions. Since HBase’s replication operates at a column family level (think of a column family as roughly analogous to an RDBMS table), replication can be configured on a column family by column family basis. HBase replication can’t be used for a multi-master scenario across multiple datacenters, but it can be used to provide an asynchronous disaster recovery scenario.
Cassandra does things a little bit differently. Both HBase and Cassandra derive their data model from Google’s BigTable, but Cassandra differs in how data is moved around. Cassandra is loosely based on Amazon Dynamo – this lets the person using the database decide on the consistency of a write. This choice is one of the primary reasons for teams to choose Cassandra; different features of applications can read and write data at different consistency levels depending on feature requirements instead of platform choice.
Cassandra’s eventual consistency and data replication implementation make Cassandra an excellent choice for designing multi-datacenter applications. Within a Cassandra cluster, it’s possible to configure data replication to be rack and datacenter aware – the database will handle sending writes to other datacenters. Writes can even happen in other data centers and, because of Cassandra’s masterless architecture, those writes will eventually be replicated throughout the database cluster. The gotcha is that the last writer will win, so it’s important that applications be designed to take this into account.
Riak brings something a bit different to the table. Riak is a key/value database that shares a bit in common with Cassandra – they’re both based on Amazon Dynamo. Just like Cassandra, Riak offers tunable consistency levels for reads and writes that give developers the ability to change data access based on application functionality. Riak Enterprise is an add-on for Riak that provides datacenter awareness and multiple options for data replication.
Riak has a robust conflict resolution system – although Riak can be configured for last write wins, Riak’s strength comes from the option to create a sibling in the event of a write conflict. A sibling is just a conflicting copy of the data stored in the database. Siblings become valuable when you need to write data to multiple datacenters without routing writes; instead of routing writes for data ‘A’ to a specific datacenter, users can be directed to the closest datacenter. When the data is read, conflicts can be resolved and the correct version saved back to the database.
It’s difficult to come up with a definitive verdict. Each solution has merit. If writes need to happen anywhere and eventually synchronize across all databases in all datacenters, Riak’s support for siblings and sibling resolution is compelling. Cassandra has compelling developer tooling and a rich suite of drivers. HBase has strong consistency within the same datacenter.
Feature requirements mean that sometimes you need to be able to write your data to one of multiple servers in multiple datacenters. Getting the job done with SQL Server is going to be expensive, difficult, and time consuming. Ultimately, it’s worth looking at other database options to determine if you need to stay with SQL Server or if your data storage can be moved to another platform. Remember – it’s the application requirements that dictate the database platform, not the database platform that dictates application features.
You’ve been working with SQL Server for a couple of years now, and you’ve managed to find your way to some good blogs. You usually Google your way out of problems as they arise, and you know there’s a ton of really cool free DBA learning resources out on the web. You just don’t know where to begin.
There’s so many blog posts, webcasts, podcasts, and e-books, but they’re not in any kind of order, and they’re not curated. It takes a long time to stumble through all the bad stuff just to find one vaguely interesting one that’s also somewhat accurate.
I’ve got the solution: a series of weekly emails organized into an Accidental DBA Training Plan.
We use Ozar’s Hierarchy of Database Needs, starting at the bottom with backups, and work our way up to future-proofing. After six months of lessons using some of the best resources around the web, you’ll feel confident using the term “DBA” in your job title.
Classes start every Wednesday. Subscribe now.
SQL Server 2012 introduced AlwaysOn Availability Groups, a feature intended to replace clustering, database mirroring, log shipping, replication, and other technologies by giving us one easy-to-manage feature for high availability, disaster recovery, and scale-out.
One year in, how’s it working out? I’ve done a lot of AG deployments, and in this 30-minute video, I explain some of the gotchas:
You can learn more at BrentOzar.com/go/alwayson.
We’re happy to announce that we’ve released CorrugatedIron 1.4.0. This release contains a large number of bug fixes, new features, and enhancements. That’s right, brand new Riak and .NET straight to you! Get it while it’s hot or keep reading for more details.
Riak 1.4 Support
CorrugatedIron 1.4.0 brings Riak 1.4.0 feature support including:
- Index pagination & continuations
- Returning index terms with keys
- Better streaming support (now supported by Riak!)
- Query timeout support
- Resetting bucket properties
These new features make it even easier for developers to create rich, full featured applications with Riak and .NET.
CorrugatedIron maintains backwards compatibility with Riak 1.3. Although a large amount of functionality has changed with Riak 1.4, we still allow clients to communicate with the previous version of Riak. In most cases, no code will need to change. Wherever possible, we’ve made sure to let you know in comments or via warnings that code is Riak 1.4 only. Admittedly, you should be able to figure that out – the API is new and you aren’t using it yet.
While we were building support for new Riak 1.4 features, we found a number of places to smooth out the API. We’ve added fluent setters for a number of properties, removed methods marked for deletion, stopped you from storing nulls in indices (thanks to evanmcc for helping with that one), and fixed a host of other syntactic consistency issues.
CorrugatedIron should be much more consistent across the entire API. I’m sure there are more places for us to iron out issues, but we’re aiming to keep improving the API. Speaking of which, there’s now XML documentation for the vast majority of public methods and objects. Within the next few releases, we’ll continue to add additional documentation to the public facing classes so that your IntelliSense is even more useful.
Oh, and all of our unit and integration tests pass.
Get it now!
This also marks our first attempt at locking major and minor version with Riak – throughout the development of Riak 1.4, we’ll keep our major and minor version pegged alongside Riak’s. As Riak development shifts to the next release, so will we. This should make it easier to track which Riak features and functionality are supported by CorrugatedIron.
You like sp_Blitz™, right? That’s probably why you’re here. That, or the jazz hands.
Let’s talk about sp_Blitz™. Let’s also talk about being lazy. sp_Blitz™ produces some great output: a nice, information-dense summary of potential issues with a single SQL Server. The only problem is that it’s not an easily digestible report. (The other problem is that sp_Blitz™ can’t do jazz hands.)
A few weeks ago, we released a Windows application of sp_Blitz™ – you can download sp_Blitz™ as a ClickOnce application and get started right away. If you’re curious about what it can do, watch this delightful video I made about the sp_Blitz™ Windows application’s features, which include:
- PDF export
- Switching servers
- Auto updating
- Jazz hands*
* jazz hands not included