Watch This Week’s Webcast Today (and Win a Prize Tomorrow)


We always like to innovate — not just with the solutions we design for our consulting customers and in how we teach, but in our free videos, too.

Our YouTube channel has become super popular. Lots of folks watch the recordings of our live webcasts. We stopped recently and asked, “How can we make this even better for the folks who attend our live event?” And we realized: we can give you more time to ask questions about that week’s training topic!

Here’s your mission:

  1. Watch the video below today. We won’t be presenting it live this week or re-covering the material from the video; instead, we’ll do more advanced Q&A for the folks who’ve already watched it.
  2. Note down questions or comments you have on this post. (This is totally optional, but it means you won’t forget your question and it’s more likely we have time to talk about it with you.)
  3. Attend the live webcast on Tuesday at the normal time (11:30 am Central). Register here.
  4. During the first 10 minutes of the webcast, we’ll give away a prize, but you must be present to win!

The live discussion of the video and Q&A won’t be recorded and published, and you also need to be present to win the prize. See you on Tuesday!


12 Comments

  • I am wondering: once data for an application is present in multiple places through any of the methods above, is Service Broker a good choice for distributing the incoming queries? I’m mainly thinking of a data warehouse scenario with ad-hoc queries against the database, instead of using a report or cube, when you have to.

    Reply
    • Kendra Little
      August 18, 2014 9:46 am

      Oh, great question! And you’re the first one to post one here, too, so we’ll be sure to cover this. Love it.

      Reply
  • I have recently implemented Distributed Partitioned Views (Updatable) in a VLDB (>3TB) that is a hybrid Batch/OLTP/Reporting system.
    The views are primarily used in the highly transactional (> 1M Rows/Day), semi-raw data pages. The approach makes it very easy to drop older data (generally > 90 days) without the overhead of running Delete jobs.
    Now I am trying to come up with a way to “scale-out” from these tables to put data into a “reporting database” on a separate server. Main objective is to reduce disk I/O caused by the myriad reports.
    I have been considering Replication but now …..
    Standard Edition, SQL 2008R2
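
    For readers unfamiliar with the technique the commenter describes, a partitioned view over date-ranged member tables might look like the following sketch. All table, column, and constraint names here are hypothetical, invented for illustration:

    ```sql
    -- One member table per month, with a CHECK constraint on the date range
    -- so the optimizer can eliminate partitions.
    CREATE TABLE dbo.Orders_2014_07 (
        OrderID   BIGINT NOT NULL,
        OrderDate DATE   NOT NULL
            CONSTRAINT CK_Orders_2014_07
            CHECK (OrderDate >= '2014-07-01' AND OrderDate < '2014-08-01'),
        CONSTRAINT PK_Orders_2014_07 PRIMARY KEY (OrderID, OrderDate)
    );
    -- ...additional monthly tables, then a UNION ALL view over them:
    CREATE VIEW dbo.Orders
    AS
    SELECT OrderID, OrderDate FROM dbo.Orders_2014_07
    UNION ALL
    SELECT OrderID, OrderDate FROM dbo.Orders_2014_08;
    ```

    Aging out data older than 90 days then becomes a metadata operation: ALTER the view to exclude the oldest member table, then TRUNCATE or DROP that table, rather than running a long DELETE job. In the distributed variant, the member tables live on different linked servers.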

    Reply
    • Kendra Little
      August 19, 2014 12:28 pm

      My first question would be: what’s the freshness requirement for the reporting workload? Does it have to be up to the second? Up to the hour? Since last night? One problem with replication is that it always starts out fast, so you get an effective SLA of “immediately” — and then over time that SLA gets harder and harder to meet.

      Reply
  • I agree with you about storing blobs in the database. What are your thoughts on Filestream? We have documents tied to links on the website, which are in turn tied to data in the database. We are looking for transactional consistency between the files on the file tier and the data in the database.

    Reply
    • Kendra Little
      August 19, 2014 12:26 pm

      My main hesitations with filestream are just that it’s expensive and doesn’t fit every scenario. It’s not great for all file sizes in terms of performance. It also locks you into Enterprise licensing, and you’re sharing memory on your SQL Server with the memory for accessing those files, which can be tricky. I find that it’s a rare case where everything comes together such that it’s truly the very best solution in terms of cost, tradeoffs, and performance requirements.
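
      For context, the table side of a FILESTREAM setup is fairly small. This is a hypothetical sketch (invented names), and it assumes FILESTREAM has already been enabled on the instance and a FILESTREAM filegroup has been added to the database:

      ```sql
      -- FILESTREAM requires a ROWGUIDCOL uniqueidentifier column with a
      -- unique constraint; blob data lives on the file system but stays
      -- transactionally consistent with the row.
      CREATE TABLE dbo.Documents (
          DocumentID UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
                     DEFAULT NEWID(),
          DocName    NVARCHAR(260)  NOT NULL,
          DocContent VARBINARY(MAX) FILESTREAM NULL
      );
      ```

      The appeal for the scenario described above is exactly that transactional consistency: inserting the row and the file content is one atomic operation, which a separate file tier can’t give you.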

      Reply
  • I’ve had decent (in italics!) luck with linked servers, because I can say things to devs like “hey, why don’t you try selecting just the few columns you need, with a filter, into a temp table on the local server?”, or by using the REMOTE join hint in stored procedures when possible.

    Not a lot of great options when you walk into server sprawl that supports connected legacy applications across multiple servers and instances already.
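
    The two patterns the commenter mentions might look like this sketch. The linked server name (REMOTESRV) and all table and column names are invented for illustration:

    ```sql
    -- Pattern 1: pull only the needed columns and rows across the linked
    -- server into a local temp table, then join locally.
    SELECT OrderID, CustomerID, OrderDate
    INTO #RecentOrders
    FROM REMOTESRV.SalesDB.dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -7, GETDATE());

    SELECT c.CustomerName, r.OrderDate
    FROM dbo.Customers AS c
    JOIN #RecentOrders AS r ON r.CustomerID = c.CustomerID;

    -- Pattern 2: the REMOTE join hint asks SQL Server to perform the join
    -- on the remote (right-hand) table's server instead of dragging the
    -- whole remote table across the wire.
    SELECT c.CustomerName, o.OrderDate
    FROM dbo.Customers AS c
    INNER REMOTE JOIN REMOTESRV.SalesDB.dbo.Orders AS o
        ON o.CustomerID = c.CustomerID;
    ```

    Both patterns attack the same problem: without them, a distributed query can end up shipping an entire remote table across the network before filtering.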

    Reply
  • I’ve recently started at a new company (as their first and only DBA) and I’ve inherited a replication environment that was set up not just for minor scale-out purposes, but for reading and writing to the same tables in separate secured environments. Some databases have multiple merge publications, some even have multiple merge subscriptions, and there are several merge publications within one secure environment that republish to other databases in the same secure environment. There are also a few transactional publications thrown in there for good measure.

    I’ve been reading as much as I can about merge replication, all my prior experience was strictly with transactional, and I’m still figuring out all the nuances of this particular replication setup here. That said, can you think of any alternatives to replication for sharing read/write data between secured environments? Or are there any tips or best-practices regarding merge replication that would help me make it a simpler, more manageable system?

    Reply
    • Kendra Little
      August 19, 2014 12:43 pm

      Oh, wow. I feel your pain. Merge replication can be a really troublesome beast to performance tune, and there’s not much current information out there: a lot of the articles are for SQL Server 2000. The good (?) news is that not TOO much has changed in merge since then, so I suppose the articles are better than nothing. Merge is really a developer tool, and its problems and bottlenecks vary widely between implementations. Identifying the top queries and sizing the SQL Server appropriately will be important.

      For read/write scaling out, options are a little scarce in SQL Server itself. There’s peer-to-peer replication, which requires enterprise licenses and might end up a frying pan/fire situation. Depending on the situation, the pain points, and the business requirements, some people who have problems with merge replication just code their own replication entirely in the application, too.

      Jeremiah’s article here may be of some help: https://www.brentozar.com/archive/2011/06/resolving-conflicts-database/

      Reply
  • Great video, as usual! It’s a shame I couldn’t log in for the live webcast. We are using RabbitMQ, and it has been a lifesaver for logging. We use replication and linked servers too… shame on us 😉 Thinking of jumping to SQL 2014 and AlwaysOn Availability Groups by the end of the year.

    By the way, on last slide “Scale up alternatives”, should say “partitioned views” 😉

    Reply

Leave a Reply
