(Almost) Everything You Need to Know About SQL Server 2014

Just when you thought SQL Server couldn’t get better, Microsoft is announcing the features for SQL Server 2014. They haven’t announced the licensing/pricing, but I’ll tell you what I do know so far.

First, open this in another tab and hit play so you’ve got some background music while you read. Done with the commercial? Okay, let’s get to it:

Cache frequently used data on SSDs. You can specify an SSD (or an SSD array) to be used to extend memory. SQL Server 2014 will automatically cache data there with zero risk of data loss. (Only clean pages, not dirty pages, are stored there.) The best use case is for read-heavy OLTP workloads. This works with local SSDs in clusters, too – each node can have its own local SSDs (just like you would with TempDB) and preserve the SAN throughput for the data and log files. SSDs are cheap, and they’re only getting cheaper and faster. Here are the questions you’ll want to ask before you use this feature:

  • Is your total actively queried data set bigger than you can fit in memory? Note that I didn’t say all data: you might have archive or history or audit tables in your databases that are never queried, and there’s no sense in caching those.
  • Have you already maxed out the memory on the server? If not, start there first – memory can be used for more than just caching clean pages.
  • Do business requirements force you to use shared storage or magnetic local storage? If not, consider moving the data to local SSD entirely.
  • Does your server have room for locally attached PCI Express or SAS/SATA solid state drives?

If the answer to all of those questions is yes, an SSD buffer pool extension may be for you. Honestly, Microsoft could stop there and I’d probably still recommend the new version for most of my clients, because that’s a killer performance benefit.
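
Microsoft hasn’t published the final syntax yet, but based on what’s been shown so far, turning the extension on should look something like this (the file path and size here are hypothetical, and the details could change before release):

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'S:\SSDCache\ExtensionFile.BPE', SIZE = 64 GB);

-- And to turn it back off:
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;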

More online maintenance operations. Got big data in a partitioned table? Is nobody giving you any time to do maintenance? Just no time to stop and get away cause you work so hard to make it every day? Well, with SQL 14, you can rebuild a single partition’s index online, and you can switch partitions in/out using DBA-specified lock priorities. For 24/7 workloads, this gives the DBA the ability to do maintenance with lower locking, CPU, and memory overhead. There’s also new Extended Events stuff you can use to monitor who’s getting blocked and killed. Here’s how the syntax works:

ALTER INDEX MyIndex ON MyTable
REBUILD PARTITION = 3
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5, ABORT_AFTER_WAIT = BLOCKERS)))

The new parameters involved are:

  • PARTITION = 3 – you can pick the specific partition you want to rebuild, and you can do it online.
  • WAIT_AT_LOW_PRIORITY – just hang out while waiting for the schema modification lock.
  • MAX_DURATION = 5 – wait for up to 5 minutes.
  • ABORT_AFTER_WAIT = BLOCKERS – can take a few different values. If BLOCKERS, then SQL Server will abort (kill) the queries blocking your index rebuild. If SELF, your index rebuild will give up and let user queries keep going. If NONE, everybody just keeps waiting, doin’ the neutron dance. NONE is the current behavior in SQL Server 2012, and it’ll be the default.
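
The same low-priority lock options apply to partition switching, too. A minimal sketch (the table names here are made up):

ALTER TABLE dbo.Sales
SWITCH PARTITION 3 TO dbo.SalesArchive PARTITION 3
WITH (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF));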

AlwaysOn Availability Groups get more secondaries. If you really need to scale out your reads, SQL 14 gives you up to 8 secondaries (up from 4). Of course, you’ll be paying for Enterprise Edition licensing on these, but if you were already going to replicate data out to various reporting or BI servers, now your life is easier.

AlwaysOn AG readable secondaries will be more reliable. In SQL 2012, if your primary drops offline or the cluster loses quorum, the readable replica databases drop offline. (This is hilarious because it’s right when you really want to be able to query the secondaries.) There’s no way to control it – it’s totally automatic. In SQL 14, the secondaries remain online and readable when the primaries aren’t available. However, keep in mind that typical AlwaysOn AG connections go through the AG listener name and then fetch the list of readable replicas from the primary. This means that in order to keep your report queries online, you can’t use the AG listener – you have to connect directly to the replica’s server name. I like using a separate set of DNS records for readable replicas, like readonly.mydomainname.com, and having my report servers point at those.
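
For example, instead of pointing reports at the listener, a report server’s connection string might target that alias directly – a sketch, with hypothetical server and database names:

Server=readonly.mydomainname.com;Database=SalesReports;Integrated Security=SSPI;ApplicationIntent=ReadOnly;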

Use Azure VMs as AlwaysOn AG replicas. Nobody wants to pay for expensive offsite datacenter space with machines that sit idle all the time. Now, in the AlwaysOn Add Replica wizard, there’s an “Add Azure Replica” button that integrates with your Azure subscription logins. The wizard lets you pick the VM image, VM size (cores & memory), cloud replica name, admin password, etc. Lots of gotchas here though:

  • Initializing the replica means a full database backup/restore from on-premise up to the Azure VM, so this isn’t a great solution for big databases with limited bandwidth.
  • Connectivity from on-premise to the Azure VM requires a VPN appliance from your datacenter to the Azure datacenter, and today that means a hardware appliance, so there are still some expenditures required. Still way cheaper than buying hardware for a colo, though, and much more flexible.
  • If you’re really going to use it for disaster recovery, you need a Windows Domain Controller up in Azure as well. Without that, when your primary site dies, none of your Windows machines will be able to log on, which wouldn’t be very useful. SSMS doesn’t automate the deployment of a DC (nor does it alert you if you didn’t think this through).

Failover Cluster Support for Cluster Shared Volumes. With regular volumes, only one node can own the volume at any given time. It owns the entire volume, and no other node can see/read/write files on that volume. However, Windows Server clusters have a type of drive volume called Cluster Shared Volumes with much more flexibility. Multiple cluster nodes can be connected to the same volume at the same time, but each node can access different files on the drive independently. Windows and Hyper-V have supported this for a while (see the BOL section on the benefits), and now SQL Server supports it too. The big benefit here is that if one node of the SQL Server cluster loses connectivity to the storage, it can still read and write data over the network through a different node’s SAN connection.

Smart Backup to Azure. SQL Server 2012 SP1 CU2 already lets you back up databases to Azure storage. I hear a lot of people ask me, “Brent, how can I make my backups slower and less predictable?” Those folks loved backing up from on-premise databases over their Internet connections, but that wasn’t enough. They wanted even less predictability, so now they get Smart Backups. With this feature, SQL Server figures out whether it should do a full or differential backup, how often it should do a transaction log backup, and more. Humor aside, this makes sense for people who host their servers at VM providers with very fast Internet connections and who don’t pay for bandwidth – specifically, people hosting SQL Server in Windows Azure VMs. Both of those guys are going to be thrilled.
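
In the CTP bits, the knobs for this live in stored procedures under msdb’s smart_admin schema – something along these lines (the database and credential names are made up, and the parameters could change before release):

EXEC msdb.smart_admin.sp_set_db_backup
    @database_name = 'SalesDB',
    @enable_backup = 1,
    @retention_days = 30,
    @credential_name = 'MyAzureCredential';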

On-premise SQL Server with data/log files in Azure storage. Great news for those of you who really like juggling chainsaws! Now you get the best of everything:

  • Expensive on-premise licensing
  • Expensive bandwidth costs to the cloud
  • Paying for data storage at Microsoft
  • Slow backups (because your data has to come down from Azure storage to local on-premise memory then back out to wherever you want it stored, and heaven forbid you be dumb enough to send it back up to Azure storage and pay TWICE for bandwidth in and out)

Here’s the syntax:

CREATE DATABASE foo
ON (NAME = foo_dat, FILENAME = 'https://internetstorage.windows.net/data/foo.mdf')
LOG ON (NAME = foo_log, FILENAME = 'https://internetstorage.windows.net/data/foolog.ldf');

I’ll just leave that there.
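
Okay, one practical note: before that CREATE DATABASE will work, SQL Server needs a stored credential for the storage container – roughly like this, reusing the made-up URL above and a placeholder secret:

CREATE CREDENTIAL [https://internetstorage.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your shared access signature token>';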

Hekaton: specialized in-memory OLTP tables. If your application is facing serious concurrency issues with thousands of simultaneous connections trying to lock data, Hekaton offers an intriguing solution. I’m not even going to try to type out an explanation here, but I’ll point out a few challenges with it:

  • You’ll probably need to change your data model. For example, identity fields aren’t supported – you’ll need to use a GUID as a primary clustered key.
  • You’ll also probably need to change your code. Hekaton works best with stored procedures, and specifically stored procs that it can compile into native code.
  • It’s memory-only. If you experience sudden data growth in your Hekaton tables, that means you can cache less of your other tables. If you run out of memory – well, let’s just say you’d better pray you don’t run out of memory, because hello, downtime.
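
To give you a taste anyway, here’s roughly what a memory-optimized table looks like in the CTP (the table is hypothetical, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before this will run):

CREATE TABLE dbo.ShoppingCart
(
    CartID UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserID INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);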

You’re so excited, I can feel you getting hotter. But wait, there’s more.

Other cool improvements:

  • Updatable clustered column store indexes
  • Query performance improvements due to better cardinality estimator
  • Resource Governor for IO
  • Sysprep enhancements
  • Wizard to deploy database to Azure VM
  • Separation of duties enhancements to support DBAs who aren’t allowed to read the data, or auditors who are allowed to read the data but not manage the server
  • Windows Server 2012 R2 cooperation improvements – ReFS support, online resize VHDX, storage tiering, SMB improvements
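
Two of those bullets deserve a quick sketch: the updatable clustered columnstore index and the IO caps in Resource Governor both look like one-liners (the object names here are made up):

CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

CREATE RESOURCE POOL ReportPool
WITH (MAX_IOPS_PER_VOLUME = 500);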

My Analysis

The Pointer Sisters music probably gave away my true feelings here, but really, folks, I’m so excited. There are very real improvements in here for everybody. If you’re a DBA on a multi-terabyte database, you’re going to love the SSD buffer pool extensions and the granular index rebuilds. If you’re BI-curious, you’re going to be experimenting with the clustered column store indexes. If you’re a software-as-a-service vendor with lots of clients, you’re going to love failover cluster support for CSVs and the query performance improvements. And if you’re a developer who works with a SQL Server back end, you’ve got all kinds of new tricks to scale.

I know some DBAs were worried that Microsoft was “all in” with the cloud, and that they’d stop improving the box product. SQL 2014 shows that Microsoft is still bringing the awesome.

Now, can somebody just get us a release date and some pricing? I was talking to a PR guy who almost spilled the beans, but he’s so shy.


Comments

  • All kinds of awesome stuff to play with there.
    Any idea at this point if backups will read data from the new cache or from the original IO subsystem?

    Rich

    • Richard – great question. I don’t know that one offhand.

        • If they’ve done this right, it could also be used to do automatic page repair without having to implement Mirroring or AlwaysOn. Limited, admittedly, but better than a kick in, well, anywhere really!

        Someone mentioned deprecated features – can’t be too long before Mirroring is removed.

          • It does say mirroring will be removed in future versions (MSDN). I prefer that they keep it a bit longer, as I am more familiar with it than AGs – especially for those who are not on 2012 yet.

          It surely is the poor man’s clustering.

          • Racim BOUDJAKDJI
            June 6, 2013 11:15 am

            Well, given the number of bugs I faced with the Windows 2008 RHS service and MSCS clusters, I am not surprised MS now calls AG the new HA solution. MSCS’s old single-storage clustering topology is becoming obsolete compared to AG, which is much more robust/versatile as far as I can see.

  • Wow there’s some very exciting stuff here.

    For me, the prospect of being able to move away from Replication in SQL Server is very appealing. The complexity that could be removed (distributed systems) is significant. We’ve had some “fun” times together, but I’m now ready to move on…

    We’re also looking into Hekaton. With the App/Dev folks already super keen on in-memory database technology, the ability to transition existing environments easily to Hekaton, versus porting to a completely new DBMS, could be a deal winner in terms of cost. Too early to say for sure…

  • Great features. What does this mean for the MCM certification? I’m getting ready for the 2012 exams.

    • Cecil – I’d go ahead and do the MCM. The MCM exams aren’t usually available for about a year after the new version comes out, and so it’ll be quite some time before the MCM has 2014-specific questions.

  • This post confirmed something I heard during a recent user group meeting about separation of duties enhancements. Thank you!

  • Todd Kleinhans
    June 3, 2013 11:43 am

    Any guesstimates on release? PASS 2014?

    • Todd – well, Microsoft needs to release new versions every 2-3 years in order for Software Assurance to make financial sense for customers. Because of that, I’d say it’s reasonable to expect it within 2-3 years after the release of SQL Server 2012.

  • Great post Brent… Really looking forward to SQL 14!

  • Thanks for compiling this great list, Brent.

    SSDs for local cache looks like the #1 feature – an edition-sensitive feature, though? (Hope not!)

    Do the “Query performance improvements due to better cardinality estimator” stretch to Table-variables yet…? (Do say, “yes”… – fingers crossed but not very hopeful)

    Cheers!

    • SAinCA – my guess (and this is just totally a guess) is that the SSD caching would be an edition-sensitive feature because it increases the amount of effective RAM. Since Microsoft has been using RAM as a throttling limit in Standard Edition, I would bet that they’d limit this feature to Enterprise Edition.

      About the table variables – nah, I doubt that.

  • Michael Heindel
    June 3, 2013 12:45 pm

    I can’t wait for this to show up in my MSDN Download page 🙂

    Exciting new features, worthy of a new name: SQL Server 2014.

    Thank you for the analysis!

    Michael
    @BeachBum

  • Thanks Brent. What is the role of replication in SQL 2014? Is it still going to play a major part, or is it going to go away now that there are enhancements to AGs with 8 readable secondaries?

    Thanks
    Meher

    • I think it’s probably too early for them to announce anything about deprecation, but I’ll turn that into a question for you: have there been any significant investments in SQL Server’s replication capabilities in the last few versions?

  • Good question, Brent. No, I have not personally seen any major feature enhancements in replication (sure, there were some, like SubscriptionStreams and Peer-to-Peer). I would assume there won’t be, which means it might be deprecated going forward, but it’s too early to say.

    Thanks
    Meher

    • Interesting – Kalen Delaney’s Hekaton whitepaper mentions that replication is not supported. This may of course be a CTP limitation; however, if Microsoft are pushing this as the main driver for upgrading, then the death knell should be audible soon.

  • Nic Neufeld
    June 3, 2013 1:59 pm

    Great rundown!

    One minor point of a semantic / pedantic nature…the abbreviated “SQL 14” is a slightly confusing moniker, if only because versions are referred to variously by the year (name of the product) and the version number, as in, version 9 / 2005, version 10 and 10.5 / 2008 and 2008R2, and 11 / 2012. If we start referring to SQL 12 for example it will be confusing to understand whether we are referring to 11.0 / 2012 or 12.0 / 2014.

  • Nic Neufeld
    June 3, 2013 2:08 pm

    “auditors who are allowed to read the data but not manage the server”

    Ahh the elusive server-level db_reader role! Needed that this morning, as a matter of fact.

  • WOW, SQL 2014!!! But I just started my certifications for SQL 2012, and yes, it is expensive to get to MCSE on SQL 2012 (5*$150 = $750, plus any retakes at $150 each)!!! …I do not want to see SQL 2014 with 5 more exams… 🙁 Or maybe I do… I am just gonna go ahead with my SQL 2012 cert prep.

    • Well, upgrading to MCSE is only 3 exams. And at least personally, I got 2 vouchers for the 2012 exams, so I got the MCSA upgrade with them.

      However, at least here the Prometric centers take about a 40% extra fee on top of the Microsoft exam fee, and this fee has to be paid even for voucher exams, so the MCSE for me was 2*$60 + 1*($150+$60) = $330.

      Also, the MCSE isn’t version-specific, but you have to update it anyway in three years.

  • Excellent post! As you said, it looks like they are still investing in on-premise SQL.

  • Hi Brent, any BI-targeted improvements/additions mentioned yet?

  • Excellent post, Brent. However, has Microsoft included HDInsight? I have also read in an article that Big Data and Hadoop features have been included.

    • Shankar – well, that’s kind of a trick question. I’m not a big fan of doing Hadoop with SQL Server. There’s a ton of free ways to do Hadoop, and I’d really rather not spend SQL Server licensing ($7k/core) to work with that. I understand why some folks do, but it’s just not my cup of tea.

      • That’s true, but there are some very interesting things going on between SQL and Hadoop (based on Linux or Windows/HDInsight), like PolyBase, where you can query and join Hadoop unstructured data with SQL data in a single SQL query – but this is only available in SQL PDW. Any updates or news on this coming to other editions of SQL?
        Also, HDInsight provides a Windows-friendly way to interact with Hadoop – any news there? Especially with the possibility of on-premises HDInsight clusters.

  • Exciting stuff. Working in a process-driven environment, the separation of duties is really going to be nice here.

  • If replication is going the way of the dodo, then what’s MS’s answer when we only want a subset of the database? AGs are still an Enterprise feature as well…

    • Kendra Little
      June 4, 2013 1:45 pm

      I don’t know of any major form of replication being marked as deprecated. (Updating subscribers in transactional repl hit the “to be removed” list recently, of course.) I would be incredibly surprised if it were removed completely because of the flexibility it offers in terms of allowing a subset of the data to be sent and separate indexes put on it.

      Now, would I really recommend a peer-to-peer topology at this point, even if it isn’t deprecated? Perhaps not, because of the interoperability and management limitations that I think will be around for a looong time. But do I personally predict a long future for transactional replication? Certainly. (That’s not based on any insider knowledge, it’s just based on how widely used I’ve found it to be and the fact that it’s a mature feature with an established niche.)

      At this point what’s enterprise vs standard edition for 2014 is all speculation. We just don’t have info on that one.

  • Racim BOUDJAKDJI
    June 4, 2013 12:00 pm

    Hi Brent,

    Thanks for the summary on SQL 2014 (I particularly like the vNext updateable columnstore indexes and the controllable online partition switching I have dreamt of since 2005). Furthermore, it looks like MS is taking serious initiative on the current inadequacy of IO subsystems (namely SAN systems) in coping with data growth. This move will clearly encourage people to move toward peer-to-peer hybrid local storage. The trend seems to confirm the end of the SAN topology as an economically and technically viable option for database storage.

  • 2012 was such an uneventful release. Just some small, nice features. No true drivers of adoption. 2014 has some killer stuff in it.

    For example, CS indexes being read-only and non-clustered probably prevented 90% of the adoption. Now that’s gone.

  • Lori Edwards
    June 4, 2013 2:57 pm

    Dearest Brent and crew,
    I really, really wanted to read this blog post, but was unable to sit still enough once the Pointer Sisters started. Perhaps you should warn of these potential side effects prior to linking to freakin’ amazing songs.

    Thank you,
    Lori

  • Hi, Brent!

    Is there some public info from Microsoft related to “Cache frequently used data on SSDs”?
    Can’t find anything even in SQLNDA DL 😉

    • Hi, Dmitriy! Depending on your NDA access, you can ask about buffer pool extensions – that’s the official name for the feature. Unfortunately, what I’ve shared here is all I can share right now. 😀

      • kennie nybo pontoppidan
        June 7, 2013 3:24 am

        It is mentioned in 3 lines in the white paper “Mission Critical Performance & Scale with SQL Server and Windows Server”, dated May 29, 2013.

  • Having just finished moving off of SQL 2000, I’m more worried about how quickly SQL 2005 is going to go out of support. With a release every 2-3 years, I’m hoping they extend support to 4 versions rather than the old 3 (assuming that R2 didn’t really count).

  • Thanks for sharing, Brent!

  • I guess this will drop me yet another version behind the power curve :(. I am still trying to get major front office applications to move off of SQL 2000 SE on Windows 2000 SE (even if they stay in 8.0 compatibility mode).
    I at least have the back office apps headed to SQL 2008 (some in 8.0 compat) and hope to complete that before SQL 2014 is released (HA, HA).

  • Awesome!
    Do you know anything about licensing in SQL Server 2014 – what I mean is, which license for which feature? 🙂

  • Regarding Hekaton (memory-optimized tables): since we’re talking no latching/locking, are we one step closer to true scale-out, i.e. like Oracle RAC/Grid but with all OLTP happening in multi-node shared memory? Perhaps in SQL Server 2016?

    • LondonDBA – you know, it’s an interesting question. I hear the RAC-type request a lot, and then I ask to look at the client’s system. I’ve never seen a SQL Server environment that actually needed that type of setup – we’ve always been able to get to their concurrency/scalability goals by doing index and query tuning.

      I suppose that if money was no object and you’d rather throw money at the problem than improve code & design, then I can see how the RAC-style setup would have an appeal. I just haven’t seen that type of client yet though.

  • Awesome features. However, a read-replica load-balancing feature isn’t there…

  • Thanks for the real-world break down Brent!

  • Awesome post! Any idea if memory-optimized tables could use a column with a sequence default instead of an identity PK? That’s too bad about the problem of an in-memory table running out of memory and just refusing inserts. Perhaps a view with some clever INSTEAD OF INSERT trigger action could be a workaround.

  • As usual, Microsoft tries to catch up with Oracle. DB cache on SSD drives? So 2009…
    Rebuilding partitioned indexes online? Implemented 2 versions ago in Oracle.
    Failover/availability features of RAC won’t be matched even in 2014.
    Well, I’m still buying Microsoft stock, because of the new Xbox 🙂

    • Aleksey – yep, and they’re catching up on licensing costs too. Microsoft’s new core-based licensing is catching up to where Oracle was years ago, but from what I hear, still hasn’t quite caught up yet. 😉

  • Hi Brent,

    Great post. Are there any improvements for spatial data? Any support for 3D geometry points?

    Thanks,
    Mohammad

  • Your alternative suggestion for replacing identities is: “You’ll probably need to change your data model. For example, identity fields aren’t supported – you’ll need to use a GUID as a primary clustered key.”

    What would you suggest if the primary clustered key still needed to be sequential? And wouldn’t a non-sequential primary clustered key be counterproductive?

    • Darren – I’d ask, “Why does the primary clustered key still need to be sequential?”

      • OK, here’s the scenario: you’re putting together a design for a new data model & want to ease any later transition to the use of in-memory tables. One regular practice for clustered indexes, as you know, would be to use a sequential key to avoid fragmentation & hence give an ordering to row inserts so they occur in the same approx. physical location. Also, wouldn’t using a GUID significantly bump up the memory overhead per row, which would be even more of an issue for in-memory tables?

        • Great question. Design is about trade offs. If you’re building a vehicle that may need to hold 8 people, you build a minivan. If you need to go 200mph, you build a small sports car. If you need to take 8 people at 200mph, you build a plane.

          But if you’re building a car that needs to go 200mph today, and may need to hold 8 people a couple of years from now, you don’t build a sports car with wings and say, “The wings get me closer to having a plane.” They don’t.

          I’m not saying you can’t plan for the future, but Hekaton is a plane. If you don’t need that kind of power, don’t tack wings on your app and hope it will make an easy transition later. Hekaton solves a very specific problem – high concurrency with latch waits. It’s for apps that deal with tens or hundreds of thousands of batch requests per second.

    • So what about NEWSEQUENTIALID() – is that disabled too?

      I’m a bit mystified about why Microsoft would see a need to drop IDENTITY for Hekaton at all; surely we could grow our own on such fast tables – create the good ol’ next-available-value in a Hekaton table and use it? But then, so could Microsoft as a SQL Server feature. I give it fair odds they’ll find a way to shoehorn it back into SQL 2014, and if not, how about SQL 2016!? I mean, yes, it may have some impact and cut a Hekaton table back to a mere Dodekaton or something, but hey.

  • Is it known what limitations will be on SQL Server 2014 Express Edition?

  • Hi Brent,

    Good article – just wanna point out that the below is out of date due to the recent-ish announcement of RRAS support:

    “Connectivity from on-premise to the Azure VM requires a VPN appliance from your datacenter to the Azure datacenter, and today that means a hardware appliance”

    http://blogs.msdn.com/b/windowsazure/archive/2013/04/26/virtual-network-adds-new-capabilities-for-cross-premises-connectivity.aspx

  • Does 2014 address the first run performance hit stored procs have always endured due to compilation? (e.g. something like Oracle’s binary version of the stored proc PLX, instead of parsing text when the sproc is first called)

    • Nick – for typical stored procedures, no. If that initial hit presents a serious problem for you, what some folks do is create a SQL Agent job to execute stored procedures and populate the buffer pool upon startup.
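
      A minimal sketch of that idea, using a startup procedure instead of an Agent job (the procedure and table names here are hypothetical, and startup procedures have to live in master):

      CREATE PROCEDURE dbo.WarmBufferPool
      AS
      BEGIN
          -- Reading the hot tables pulls their pages into the buffer pool.
          -- COUNT_BIG(*) reads the narrowest index; use an index hint if
          -- you want the whole clustered index in cache.
          SELECT COUNT_BIG(*) FROM SalesDB.dbo.Orders;
          SELECT COUNT_BIG(*) FROM SalesDB.dbo.OrderDetails;
      END;
      GO
      -- Run it automatically every time the instance starts:
      EXEC sp_procoption 'dbo.WarmBufferPool', 'startup', 'on';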

  • Nathan Jolly
    July 5, 2013 1:33 am

    Like you suggested, this article indicates that Buffer Pool Extension will be an Enterprise-only feature (and only supported on 64-bit):

    http://msdn.microsoft.com/en-us/library/cc645993(v=sql.120).aspx

    The only sections that have changed so far are “RDBMS Manageability”, where Buffer Pool Extension is new and marked as EE-only, and “Reporting Services”, where a few items and one restriction seem to have been removed.

  • Hi

    Great post. Thanks for sharing.

  • Subir Sankar Das
    July 18, 2013 2:09 am

    SQL Server 2014 CTP1 has been introduced with the in-memory concept. Can you please help me? I have a 16-core CPU and 16 GB of memory, my production DB is 50 GB, and I am currently using Windows 2008 Server with SQL Server 2008 R2. If I go for SQL Server 2014 CTP1, is any extra memory upgrade required, since SQL Server 2014 is memory intensive?

    Regards,

    Subir Sankar Das

    • Subir – if your bottleneck is memory, I’d suggest adding more memory. 16GB isn’t much. Standard Edition will use up to 64GB, so might as well put 64GB in there. It’s a lot cheaper than you might think (probably around $500), and way faster than waiting for the next version of SQL Server. Hope that helps!

      • Subir Sankar Das
        July 18, 2013 10:05 pm

        Thanks Brent for your prompt reply… If I upgrade my database to SQL Server 2014, is it okay with the existing W2K8 Enterprise Edition, or is 2012 required for better performance?

  • Thanks Brent for the valuable insight on SQL 2014 – I have CTP1 with me.

    I am having a challenge with AlwaysOn: we need an AlwaysOn setup with the primary on SQL 12 and the secondary on SQL 14.

    Is it possible? Also, how about the primary on SQL 14 and the secondary on SQL 12?

    • Arun – SQL Server 2014 is not released yet, so I wouldn’t recommend using it in production.

      Even at release, though, you won’t be able to have a primary in SQL Server 2014 and a secondary in an older version. SQL Server databases can never be restored on older versions, only onto newer ones.

      • Thanks Brent for quick reply.

        We are part of the TAP and plan to go live with CTP2 in production.

        Regarding AlwaysOn, we too had reservations that having 2014 as primary and 2012 as secondary may not work. But right now we are facing a challenge even with setting up 2012 as primary and 2014 as secondary.

        • If you’re in the TAP, be careful – your company has probably signed a non-disclosure agreement, so you’re not supposed to be talking about this publicly. Good news though – this also means you can get quick support from Microsoft on this during your weekly TAP call with them.

  • OMG, now Resource Governor can be REALLY useful =)
    Great post, thanks for the info.

  • Are there any updates to SQL / T-SQL? New language features / new functions?

    • Lukas – not off the top of my head, but of course there’s an entirely new storage engine in the form of Hekaton, and you’ll want to learn how that affects the way you code. Not all commands or datatypes are supported, so it’s sort of a backwards step for now, but it can pay off with much faster performance if your code fits that smaller feature subset.

  • Microsoft seems to be very quiet about a release date for SQL 2014. Does anyone have any SOLID information they can share? Please skip the rumors, things a friend of a guy I met at the bar said and any things some random person tweeted about last Tuesday at 3 AM.

  • Brent – mind telling me if there is a theoretical memory limit to SQL Server 2014? I read of something like 64GB, but as I go through a lot of documentation from MS, there is a lot of talk about 1TB… is that a daydream or something that we can test and measure?

  • I think replication is still part of 2014:

    “Replication is supported if each SQL Server endpoint is within two major versions of the current version of SQL Server. Consequently, SQL Server 2014 does not support replication to or from SQL Server 2005”

    Ref:
    http://social.msdn.microsoft.com/Forums/en-US/06a7777b-ee1c-494a-95d9-4402220b2bfd/will-deprecated-items-change-in-sql-server-2014-rtm?forum=sql14db

  • Hi Brent – I was not able to interpret your following statement:

    The big benefit here is that if one node of the SQL Server cluster loses connectivity to the storage, it can still read and write data over the network to a different node’s SAN connection.

    Is it like this: if there are 2 nodes with different SAN connections, and the active node loses its connection, then SQL will use the passive node’s SAN connection?

  • YouTube now returns “not available” when attempting to open the initial link, “open this in another tab and hit play” (http://www.youtube.com/watch?v=h-LbvFckptY).

    • Note: for completeness, the same link, “Pointer Sisters music” also seems to exist at the bottom of the post. Pointing out, just in case maintainer has an interest in keeping links current; if not, not a problem.

  • Hi Brent,

    We have an application where we need to handle approx 1,000 requests per second, and each request involves calling a stored proc which selects from and inserts into the same table. So we have only one table that will grow huge and is transaction-heavy. Does this in-memory OLTP solve our problem? Or are there any other approaches to handle such load?

    Currently we are on SQL Server 2008 R2 and it peaks out at just 150 requests / sec.

    Any suggestions are most welcome.

    Thanks
    Amit

    • Amit – 1,000 requests per second isn’t that big of a deal for modern database platforms – regular tables can handle that just fine – so let me ask about the second part of your question: what do you mean by “grow huge”?

  • We are upgrading all our applications from SQL 2005 to SQL 2014.
    Is there any document or something which would give me a case study on SQL 2014 HADR strategies?
    Your help is highly appreciated – thanks in advance!

    Regards,
    Srikanth Amjala

  • Awesome Brent. Love reading your articles written in simple and plain English.

  • Quick question: would you host a critical production database on a ReFS partition? I’m getting the 665 fragmentation error on an NTFS partition and was wondering if I should try ReFS. But from what I’ve read so far, it seems too unstable.

