What Would You Put in SQL Server 2019?

If you were leading Microsoft’s development programs, what would you add or change in SQL Server 2019?

Forget licensing or pricing changes – those are dictated by the marketing teams, and those decisions are made very close to the release date. Instead, let’s think bigger: what major improvements would you make that would genuinely drive increased adoption?

What features do you think would win people over from expensive platforms like Oracle or DB2? Or would you focus on the open source MySQL and PostgreSQL crowds, or the outright NoSQL users?

Erik Says: We used to make posts like this half-jokingly, but with Microsoft embracing community-driven development, stuff like this can potentially go a lot further. Seriously, take the time to comment, or write your own blog posts about features you want. If you keep quiet, it’s your fault if you don’t see what you want in the product. Me? I’m still pulling for unlogged tables.


184 Comments

  • Remove the requirement for In-Memory OLTP when using memory optimized table types.
    It’s probably a niche requirement, but maybe it goes hand in hand with Erik’s unlogged tables.

    • Oh wow, suddenly that makes a light go on for me. At a client recently, somebody had enabled Hekaton, and nobody remembered why. I bet that was what it was.

  • For starters…..

    Multiple tempdbs like Sybase has… you can then bind that tempdb instance to a specific login

    sp_tempdb bind, 'LG', 'Menace', 'DB', 'tempdb_01'

    that way a person’s crazy queries don’t trash my tempdb 🙂

    Also maybe some new types of indexes like bitmap, Compare (CMP) index type or High_Non_Group (HNG) index type (the last two are from Sybase IQ)

  • I don’t know if this is functionally or logistically feasible (or a good idea), but I would love to see the list of things you can’t do with indexed views shrink a bit. DISTINCT and counts other than COUNT_BIG(*) would be helpful, as would self-joins, unions, and window functions. I’ve had so many cases where an indexed view would make a night-and-day improvement, only to have it run afoul of one of the laundry list of limitations.

    I’d also love to see them figure out how to make full-text search handle/tokenize the dash/hyphen character. Make it easier to format a date or datetime without needing to remember an obscure 3-digit code. Oracle does this via the to_char function, to which you supply the intended format as a string like 'mm/dd/yyyy'.
    Synonyms to (just) a remote server and/or database would be a game changer. I’d also be curious to see what they have to say in regards to instance-level clustering, à la Oracle RAC, as an additional HA/DR option.

    In lieu of any or all of the above, I’ll settle for string_agg being put (back) into 2016.

    • Brandon,

      You can already format dates like you want with the FORMAT function

      example

      DECLARE @d DATETIME = GETDATE();
      SELECT FORMAT(@d, 'MM/dd/yyyy') AS Formatted;

  • This is probably nonsense, but dynamically partitioned tables.

    • I would love this and upgrade to 1,000,000 partitions per table like Oracle. This would save a ton of Partition management code for applications.

      • Ted – interesting. What’s the real-world use case for over, say, 10,000 partitions?

        • Well, we use it to optimize data loading and querying in one of our data marts, where we partition basically by client project. And because of the 15,000-partition limit, we have had to devise a means of extending partitioning with a group of partitioned tables under a partitioned view, which is of course cumbersome and a pain to manage. Not to mention all the lock contention we had to deal with trying to create partitions on the fly rather than pre-allocating, which we finally ended up resorting to to reduce blocking. It’s a sordid tale.

  • Automatically converting “delete [tablename]” to a truncate statement. 🙂 I would totally enable this on all DEV servers!

  • RAC so that the database can remain online during maintenance and column level TDE. Two things that my Oracle coworkers like to brag about.

    • Something equivalent to RAC would be great, something like a true active/active cluster for SQL. Stop one cluster node, clients connected to it maybe get a brief disconnect, transactions finish processing before the connection is switched, so nice…
      Yes, I’ve had to work with Oracle as a DBA for a few months, I learned more about it once we got a real Oracle DBA hired in.

      • RAC / active/active clusters would be awesome! Real HA.
        It’s always fun explaining active/active SQL clusters to an ORACLE DBA, you can tell the polite ones – they hold back the laughter.

  • SP_WHO2 replaced with sp_whoIsActive :). An invaluable tool.

  • Along the non-logged table request, how about non-logged temp tables in TempDB as a start? I use scratch tables in tempdb for reports and I really don’t need them to be logged.

  • create a group_concat function like MySQL

  • a MEDIAN function would be nice to go with all the existing windowing functions.

    • Doesn’t that already exist, PERCENTILE_DISC and PERCENTILE_CONT? And you can use them with a windowing function.

      • I’d love for both of these functions to be normal aggregate functions. Currently, they only work as window functions.

        I often pull Median, Top 25%, and Bottom 25% scores for reports. Changing the grain (PARTITION BY) means updating several functions rather than just the GROUP BY.
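
        To illustrate the window-only limitation described above, here’s a minimal sketch of how a per-group median has to be pulled today (table and column names are hypothetical):

        ```sql
        -- PERCENTILE_CONT only works as a window function, so DISTINCT is
        -- needed to collapse the per-row results back to one row per group.
        SELECT DISTINCT
            Region,
            PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY Score)
                OVER (PARTITION BY Region) AS MedianScore
        FROM dbo.TestScores;
        ```

        As a true aggregate, the same query would just be a plain GROUP BY Region, and changing the grain would mean editing one clause instead of every PARTITION BY.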

  • Vote #2 for parity on Indexed views with Oracle’s materialized views. There are too many restrictions on the MSSQL side.

  • The resource governor includes memory but not buffer pool memory.
    I would like some way to control buffer pool memory usage.
    My real time sensitive DB should be able to keep 1GB in memory and never have it kicked out.
    My reports should be able to use up to 128 GB but not more.
    I could put my real-time db in memory optimized tables,
    but if I could control the buffer cache it works well enough as is.
    SQL has LRU-K but reports can still kick everything else out of memory.

  • Sorry for repeating myself for emphasis, but group_concat is the ONLY thing I’ve ever found in MySQL that I wished I could have in MSSQL. And I’m a longtime user of both MySQL and MSSQL.
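
    For what it’s worth, SQL Server 2017 added STRING_AGG, which covers most group_concat use cases; a minimal sketch with hypothetical table and column names:

    ```sql
    -- Comma-separated list of products per customer, ordered alphabetically
    SELECT CustomerID,
           STRING_AGG(ProductName, ', ') WITHIN GROUP (ORDER BY ProductName) AS Products
    FROM dbo.OrderLines
    GROUP BY CustomerID;
    ```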

  • Allow, at the column level, storing NVARCHAR as UTF-8 instead of UTF-16. That would save a lot of memory and storage when working with values mostly in Western languages.

    Data compression of LOBs

    A command line tool that allows streaming backups and piping it to compression or encryption programs like XZ or GZIP. I bet Linux admins would love this.

    A trace flag that makes the instance ignore all auto-close and auto-shrink settings at the database level.

    For consolidation, a way of walling off applications within an instance that insist on their logins having excessive security privileges (especially sysadmin). More privilege isolation, but without the other downsides of instance stacking.

  • I would like an old feature back. Single table restore, to a different name. Yes, yes, referential integrity alert and blah blah. Almost every client I deal with has many tables that can easily be restored independently for manual correction purposes or validating changes or other detective work that needs to be done from time to time. Doing the full db restore to dig out one table isn’t always that easy when the dbs are large and space is not available.

  • Microsoft finally implemented support for JSON data in SQL Server 2016 and in Azure SQL Database. In a previous blog post on JSON and the NoSQL “advantage” over SQL Server, I discussed which JSON support was missing from SQL Server, and I’m very happy to see much of this functionality coming. Microsoft has done some nice work with JSON support, but sadly I find it is not as comprehensive as I had hoped.

    Real JSON support would put SQL Server on equal footing with many NoSQL databases.
    Missing from the SQL Server 2016 implementation of JSON is a JSON-specific native data type, and consequently none of the kinds of methods available to, for example, the XML data type. SQL Server 2016 continues to use the NVARCHAR type to store JSON data. This is one of my biggest disappointments, and after working with it, it still feels incomplete. The workaround is to add a constraint with the ISJSON function to make sure our data is of the right type, but it still feels like a patch, and when you think about indexing JSON you KNOW it’s a patch. Having to add an extra non-persisted computed column to the table for each index you need is not good; it sucks.

    • They need to follow in the footsteps of Postgres on this one; the JSON implementation in its current state has so many problems/shortcomings.

      • Improvement to the current JSON implementation is critical; otherwise NoSQL options are the only option.

  • Vladislav Zalesak
    September 20, 2017 11:59 am

    partitioned identity for composite primary key?

  • I’m going to take off my employee hat for a second and make a for realsies suggestion: Per database wait stats.

  • I agree with Andrew that single table restores would be great. Also, it would be great to have Oracle’s flashback feature – flashback tables, flashback an entire database…

  • Readable Job agent job names for Report Subscriptions.

    • Ha! I’d even accept if they kept the GUID but prefixed the job names with “ReportServer – ” so that they group together in SQL Agent and don’t litter my actually legitimate job list with hundreds of things I don’t care about.

    • THIS ONE.

    • SELECT
      s.ScheduleID AS JobName
      ,c.Name AS ReportName
      ,sb.[Description] AS Recipients
      ,c.[Path] AS ReportPath
      ,StartDate
      ,s.LastRunTime
      FROM dbo.ReportSchedule rs
      INNER JOIN dbo.Schedule s ON rs.ScheduleID = s.ScheduleID
      INNER JOIN dbo.Subscriptions sb ON rs.SubscriptionID = sb.SubscriptionID
      INNER JOIN dbo.[Catalog] c ON rs.ReportID = c.ItemID
      AND sb.Report_OID = c.ItemID

  • FLASHBACK… would be useful 🙂

    • +1 for serious – this is one of only two features I care about where Oracle completely wins. I know we have snapshots and now temporal tables are an option, but the ability to execute a flashback query and get a prior version of any object from a prior moment in time (within the log space) would be game-changing.

  • Transactional DDL operations on In-Memory objects, no error when an in-memory object is created and there is a DDL trigger on the database, max nest level raised to 1024, ALTER TYPE support, T-SQL call stack on demand in SQL procedures, a hash-partitioning seek transformation rule, performance improvements when getting fragmentation information for large indexes… isn’t that too much?

    • You can already ALTER Hekaton objects in SQL Server 2016…
      “max nestlevel raised to 1024” – Are you crazy? Nested objects should only be used where there is no other way… for many reasons (maintainability, performance, etc.). Is 32 not enough? Please give me an example of when you would need to build, say, 100 nested views, and why.

  • Get Snapshot Isolation out of tempdb, making it as configurable as possible.

  • Have the optimizer use its machine-learning knowledge to abort an execution plan when its expected row count is badly off from the actual, and then use a different plan.
    Also have Query Store mark whether an execution plan was a timed-out plan and not consider it for future use.

  • Faster ways to copy really, really large tables. I recently encountered a table with 1.5 TB of transactional data. Cloning or built-in dedup-type stuff would be really cool.
    Another one: import/export data from/to document databases (MongoDB, etc.).

  • I want a couple small things, and one big thing

    -Per-DB wait info, persisted through restarts
    -Native support for “specific table only” restores
    -Native support in SSMS for an ApexSQL-style log explorer
    -Fix the GUI elements that fail when trying to load strings that are longer than the temp tables that hold values for the element.
    -I would also REALLY like some more built-in auditing on certain things. I’d love to know which DBA disabled the backup job, rather than simply ‘last modified on xyz’.

    now the big one
    -system db sync options in AG

    • I know replies above mine have this but I figured I’ll add my voice…

      non-logged tables or non-logged deletes. That’s probably my second-most-wanted feature.

      • If you use partitioning you can now truncate a partition. Not the same as non-logged deletes, but a step in the right direction.
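
        The partition-truncate syntax mentioned above (available since SQL Server 2016) looks like this; the table name and partition numbers are hypothetical:

        ```sql
        -- Minimally logged, unlike an equivalent DELETE against those partitions
        TRUNCATE TABLE dbo.FactSales WITH (PARTITIONS (2, 4 TO 6));
        ```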

  • Two little T-SQL things:

    1. Like in Oracle, materialized hint on CTE’s so :
    WITH CTE AS ( /* MATERIALIZE */ SELECT….)

    2. Allow windowing functions in where clauses:
    SELECT …. WHERE row_number() over(partition by…) = 1
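
    Until something like that exists, the usual workaround is to push the window function into a derived table (or CTE) and filter one level up; a sketch with hypothetical names:

    ```sql
    -- Latest order per customer: filter on ROW_NUMBER() one level up,
    -- since window functions aren't allowed directly in WHERE.
    SELECT OrderID, CustomerID, OrderDate
    FROM (
        SELECT OrderID, CustomerID, OrderDate,
               ROW_NUMBER() OVER (PARTITION BY CustomerID
                                  ORDER BY OrderDate DESC) AS rn
        FROM dbo.Orders
    ) AS x
    WHERE x.rn = 1;
    ```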

  • SQL Server needs something similar to Grid Control for Oracle. The Central Management Server is nice, but doesn’t provide nearly the same functionality as Grid does.

  • Assign alternate mail profiles to specific Agent Jobs, Alerts, and Operators. Perhaps even user accounts.
    https://connect.microsoft.com/SQLServer/feedback/details/817402/assign-a-mail-profile-to-jobs-and-alerts

    Allow groups to own agent job, or at least have a mechanism to allow adding user(s)/group(s) as authorized to modify a job.
    https://connect.microsoft.com/SQLServer/feedback/details/508137/allow-windows-groups-to-own-sql-agent-jobs

    When connecting to an Availability Group listener, have the ability to only show databases which are part of the AG.
    https://connect.microsoft.com/SQLServer/feedback/details/1206106/availability-group-setting-to-hide-databases-not-in-the-same-ag

  • Fix/address multisubnetfailover / DNS Round Robin / connection timeout when connecting from a network with a different primary DNS server…. i.e. AG Listeners failing over to a different subnet in AWS.

  • This really goes far beyond SQL, but if we are EVER going to be able to implement proper User auditing inside the database, connection pooling needs to be ROLE-based rather than User-based. This will allow application developers to actually use Kerberos authentication instead of a single application user when designing their apps.

  • A way to recover from accidently deleting or updating the wrong table/information where you could select from the previous 20 statements made in that instance and click a “rollback” button.

  • Hi,
    in SQL Server I would like to have a view like sys.databases with additional rows for alwayson like ag name, primary replica, readable secondary,… just to avoid a whole lot of joins to find out on which databases I run my maintenance and log backup scripts.
    Also I would love to have the already mentioned per DB wait stats and less limitations for indexed views.
    Multiple TempDBs would also be great. And putting more options into Resource Governor sounds like a good idea.

    For SSMS I would love a logging for my executed queries like the one in ApexSQL Complete. And goddamn when will they ever finish the dark theme?

    All the best
    Gerald

  • Here’s a few (mostly developer/T-SQL focused):
    – Index skip scans (https://connect.microsoft.com/SQLServer/feedback/details/695044/implement-index-skip-scan)
    – FILTER clause for window functions
    – Better handling of dynamic SQL (another string option beyond ‘?)
    – Dynamic PIVOT/UNPIVOT without dynamic SQL
    – Switch to Oracle style query hints (in comments), with support for hints in sub-queries/derived tables/CTEs
    – More statistics steps or “nested” statistics (we have some cases of many filtered statistics to improve performance)

  • Probably a niche request, but in the past I have thought it would occasionally be handy to have a feature that allows substitution of a query with a different query at execution time. Identification of queries to substitute could be based on templates, in a manner similar to Plan Guides.

    I have encountered scenarios with vendor applications (generally those using dynamically-generated SQL rather than stored procedures) where the use of plan guides has not been sufficient to improve performance to an acceptable level. A feature like this would allow substitution of such queries with something sensible and tunable, giving an element of control back to the DBA where the vendor is unable or unwilling to help.

  • A minor one but really bugs me. Subscriptions in SSRS create agent jobs with GUIDS. Because we have thousands of subscriptions setup it floods the agent job listing which is very annoying. I’d like the ability to configure some setting so they don’t show.

    • Being able to make a treeview folder arrangement of which database/function they’re for could help with this. Then I could put my DBA scripts apart from my backup/restore, apart from my db-specific triggers, and keep them away from any vendor, SSRS, etc.

  • I would create multiple editions. One would be for simple storage of LOB data that doesn’t require an entire industry of experts to make run well.

  • Something like the flashback recovery and restore points in Oracle will be very beneficial .. starting from DB level up to query level.

  • Performing asynchronous operations. It might sound odd from a traditional standpoint, but it would really come in handy.

  • Implement packages in a newer version of SQL Server, similar to Oracle’s; this would be useful for many business needs. Microsoft should have implemented this in earlier versions of SQL Server, but it hasn’t happened yet. In my personal experience, we faced this issue during a migration from Oracle to SQL Server and had to do a lot of rework.

    • don’t you get the same idea (Oracle package) when using a SQL Server schema?

      • I would have to say no, because with packages in Oracle you get the ability to hide functions inside, so it’s almost like a module with public and private methods and attributes.

        • You can hide functions inside schemas as well. I must admit that SSMS is not user friendly about it. SSMS is not showing all the schema objects (regardless of the type) in one place, but rather separated.

          • It’s really more about encapsulating all the relevant code in one nice little “package”. Of course SQL would have to call it a module, or component, or some other more obscure thing. 🙂

  • * Enable batch mode for queries that don’t have a columnstore index.
    * Fix the “data has been truncated” error.
    * data lineage tracking in SSIS
    * Better ETL in the cloud
    * SSRS in the cloud (not just installing it in a VM)
    * Power Query integration in SSIS
    * A decent parameter pane in SSRS

  • Built in support for a n-grams style index. e.g.
    https://sqlperformance.com/2017/09/sql-performance/sql-server-trigram-wildcard-search
    https://www.elastic.co/guide/en/elasticsearch/reference/5.4/analysis-ngram-tokenizer.html
    A SSMS that doesn’t crash all the time. 64bit please!
    A simplified queue implementation. Service Broker is good but over-engineered for many people.
    Ability to full-text index blobs stored completely out of SQL – s3, Azure, file share etc.
    A built in function to provide a table of numbers.
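
    In the meantime, a table of numbers can be generated on the fly with the familiar cross-joined tally CTE; a common sketch:

    ```sql
    -- 10 rows cross-joined four times gives 10,000 rows; filter to taste.
    WITH Ten  AS (SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS t(n)),
         Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
                  FROM Ten a CROSS JOIN Ten b CROSS JOIN Ten c CROSS JOIN Ten d)
    SELECT n
    FROM Nums
    WHERE n <= 1000;
    ```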

    • SSMS is now built on top of the Visual Studio shell, so going 64-bit is probably a no go (because VS is not available in 64-bit and they’re probably never going to make one…)

    • Yes, I find that SSMS crashes when you have connected to too many servers. I also find that not all updates to queries are saved when it crashes (SSMS seems to do a better job creating temp .sql files for unsaved queries than for unsaved changes to existing .sql files; it should at least create temp .sql files for all queries).

  • A flawless implementation of the MERGE statement, that could be used safely even in OLTP environments.

  • A “cross exec” syntax to supersede iterating using cursors and executing a stored procedure for each row:

    select
    from
    cross exec
    @param1 = ,
    @param2 = …

    • My angle brackets were stripped out! I mean

      select [some fields and output parameters]
      from [some query]
      cross exec [some procedure]
      @param1 = [some field],
      @param2 = [some field]…

    • If you would use a function, you can use APPLY. Is that an alternative?

      • “apply” is extremely useful in a lot of contexts, but not if you want to do something with side-effects.

  • Alexey Kuznetsov
    September 21, 2017 5:37 am

    Native support for Autonomous transactions (so that we don’t have to mess with loopback linked servers, CLR stuff etc… in order to implement them)

  • How about some form of security ACL for SQL agent jobs?
    Too many times we have to grant sysadmin to deployers to modify/delete existing jobs that are owned by another login. Or we end up doing the deployment ourselves 🙁

  • I’ve been wanting something like the following for a long time:

    Update MyTable Set MyColumn = WITH (BATCHSIZE = 10000)
    or
    Insert Into TargetTable
    Select * From SourceTable WITH (BATCHSIZE = 10000)

    Basically SQL Server would insert, update or delete rows in transactional batches of whatever size we designate (10,000 rows in the examples above). Thus, we don’t have to write tedious code to accomplish the same thing. Granted, you could run into inconsistencies if a failure occurs. But, this syntax would only be used where you know inconsistency issues wouldn’t be a problem if an error occurs.
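
    For reference, the tedious code the proposed syntax would replace is the familiar TOP (n) loop; a sketch against a hypothetical table:

    ```sql
    -- Delete in transactional batches of 10,000 rows until nothing is left
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (10000)
        FROM dbo.BigTable
        WHERE IsArchived = 1;

        SET @rows = @@ROWCOUNT;   -- stop once a batch deletes nothing
    END;
    ```

    The same pattern works for UPDATE and INSERT…SELECT; a built-in BATCHSIZE hint would simply fold this loop into the engine.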

  • How about make SQL Logins and Jobs part of the DR strategy so we don’t have to keep them in sync manually when using SQL based DR (AG,log shipping etc you get the idea)

  • Bert Van Landeghem
    September 21, 2017 9:16 am

    – A ‘table’ target type for extended event sessions.
    – A tool to parse XEvent files (like sys.fn_xe_file_target_read_file) outside of SQL Server so that we can e.g. hack away at the XML with .Net code instead of T-SQL.
    – Same goes for the offloading of SQL Audit logs.

    • You can already parse XEvent files using .NET. It’s much faster in my experience than using the built-in function:
      using Microsoft.SqlServer.XEvent.Linq;

      This might give you a head start:

      private static DataTable XELToDT_performance(string fileName, string serverName)
      {
          QueryableXEventData events = new QueryableXEventData(fileName);
          DataTable dt = new DataTable("XEL");
          dt.Columns.Add("server_name", typeof(string));
          dt.Columns.Add("object_name", typeof(string));
          dt.Columns.Add("timestamp", typeof(DateTime));
          dt.Columns.Add("duration", typeof(Int64));
          dt.Columns.Add("cpu", typeof(Int64));
          dt.Columns.Add("logical_reads", typeof(Int64));
          dt.Columns.Add("physical_reads", typeof(Int64));
          dt.Columns.Add("writes", typeof(Int64));
          dt.Columns.Add("username", typeof(string));
          dt.Columns.Add("database_name", typeof(string));
          dt.Columns.Add("host_name", typeof(string));
          dt.Columns.Add("client_app_name", typeof(string));

          foreach (PublishedEvent evt in events)
          {
              if (evt.Name == "sql_batch_completed" || evt.Name == "rpc_completed")
              {
                  var r = dt.NewRow();
                  r["server_name"] = serverName;
                  if (evt.Name == "sql_batch_completed")
                  {
                      r["object_name"] = "{sql_batch_completed}";
                  }
                  else
                  {
                      r["object_name"] = evt.Fields["object_name"].Value;
                  }
                  r["duration"] = evt.Fields["duration"].Value;
                  r["timestamp"] = evt.Timestamp.UtcDateTime;
                  r["cpu"] = evt.Fields["cpu_time"].Value;
                  r["logical_reads"] = evt.Fields["logical_reads"].Value;
                  r["physical_reads"] = evt.Fields["physical_reads"].Value;
                  r["writes"] = evt.Fields["writes"].Value;
                  r["username"] = evt.Actions["username"].Value;
                  r["database_name"] = evt.Actions["database_name"].Value;
                  r["host_name"] = evt.Actions["client_hostname"].Value;
                  r["client_app_name"] = evt.Actions["client_app_name"].Value;
                  dt.Rows.Add(r);
              }
          }
          return dt;
      }

    • Tomer Weisberg
      June 4, 2019 1:35 am
  • Finally give us a TRIM() function – it’s not a biggie, but I’m getting blinded by seeing LTRIM(RTRIM()).
    I would really like to see SQL Server Agent get some TLC. I think this part of SQL Server has suffered from a deep lack of attention for 10+ years. Make it a true enterprise scheduling platform (like ActiveBatch), with the ability to have:
    A true workflow like SSIS, with a full control on steps, constraints and a graphical interface
    SQL Server Agent pool of servers
    Chaining Jobs across multiple SQL Servers
    Central point of scheduling
    Pass variable and parameter information from SQL Server Jobs directly to Jobs on other Servers, Databases, or Applications.
    Customizable and flexible alerting and monitoring of SQL Server Jobs along with Auditing/Revision History for tracking Object changes per User.
    Incorporation of SLAs
    Ability to limit the run-time of a job
    Introduce pluggable calendars, so SQL Server Agent would know NOT to run DBCC on 4th of July! 🙂
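
    On the TRIM() wish above: SQL Server 2017 did ship a TRIM() function, including an optional characters-to-trim syntax:

    ```sql
    SELECT TRIM('   padded value   ');      -- replaces LTRIM(RTRIM(...))
    SELECT TRIM('*' FROM '**starred**');    -- trims characters other than spaces
    ```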

  • One more
    Make RCSI available on a session and table level.
    SELECT Column
    FROM Table WITH (SNAPSHOT)
    No more excuses to use the “dirty” NOLOCK trick.

  • RBAC for administrators and users – especially in SSRS to allow the (kerberos) security context of the report user to define access to reports and data

    primary databases on multiple nodes in an availability group similar to exchange DAGs – this would probably also require decoupling the listeners from each host and act more like a proxy that routes connections to databases it is aware is online, which would also be desired and remove some of the need for load balancers in more complicated architectures.

    multi-threaded buffer pool extensions

  • And one more – RETRY. It would make TRY and CATCH much better.

  • Table Partitioning using more than one column would be very useful.

  • Being able to filter stuff out of the log file instead of just filtering for something you’re looking for. For example, when you turn on Audit Successful Logins (as required by some security standards), your log file can get swamped (and ours do on most boxes) with “Login Successful for…”, but we have no way of filtering these out. We can only filter for something we are looking for, but what if we don’t know what we’re looking for and just want rid of all that fluff? A small thing, but it would make my life a lot easier.

  • Do you know what, it would be the simple things, little tick boxes like:

    1) “Oh, you’ve configured AOAG/Mirroring, would you like SQL Server to keep the logins and permissions up to date on all members?”
    2) “Oh, you’re configuring a certificate for encrypted connections on a clustered SQL Server. Would you like me to (securely) provision the certificate on the other nodes too, and set permissions correctly for the account running the database engine on the cert?”

  • Either merge DataZen, SSRS and PowerBI or quit

    • SSRS and PowerBI are two totally different reporting tools with a vastly different audience and purpose. I don’t see any benefit in merging them. DataZen can easily be replaced with Power BI, if the Power BI Report Server would support dashboards.

  • A very small ask, but why not a dynamic PIVOT?

    I know that this can be done with dynamic SQL, but we have a PIVOT table operator, so why not allow it to accept an expression for the spreading element instead of a static list.
    The existing implementation feels half-finished to me.

    The first thing that everyone intuitively tries when they start using PIVOT is something like:

    PIVOT(SUM(freight) FOR shipperid IN (SELECT shipperid FROM shipper))

    Then the realisation dawns that “It’s not that sort of IN”.
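
    Until PIVOT accepts a subquery, the spreading list has to be built with dynamic SQL; a sketch reusing the hypothetical shipper example (STRING_AGG needs SQL Server 2017; on earlier versions the usual FOR XML PATH concatenation works instead):

    ```sql
    DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

    -- Build the [1],[2],[3]... spreading list from the data itself
    SELECT @cols = STRING_AGG(QUOTENAME(CAST(shipperid AS NVARCHAR(12))), ',')
    FROM (SELECT DISTINCT shipperid FROM dbo.Shippers) AS s;

    SET @sql = N'
    SELECT *
    FROM (SELECT shipperid, freight FROM dbo.Orders) AS src
    PIVOT (SUM(freight) FOR shipperid IN (' + @cols + N')) AS p;';

    EXEC sys.sp_executesql @sql;
    ```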

  • Regular Expressions for filtering and string manipulations! Not everybody is happy with 3rd party SQLCLR solutions.

    • YES, PLEASE!!!

    • YESSSS! I wrote my own SQLCLR library, which worked great… EXCEPT that many clients are moving to hosted Azure or AWS versions of SQL, and many of these hosting companies have CLR integration turned off, and WILL NOT turn it on. And built in “pattern” matching is so limited. It would be SOOOOOOO AWESOME to have RegEx support baked in to the db!
      Pretty please with cherries on top???

  • Well, it seems like a few years ago there was a burst of some love given to SSRS, and then it just dried up… so I’d like them to get back to that, such as:
    – give the designer the ability to move a column in a tablix
    – have the designer indicate where page overflows/spans will occur like Excel does instead of trial and error
    – consistent viewer toolbar (back button in the toolbar disappears on drill through when viewing with URL Access vs through Report Manager)
    – Boolean parameters should be a checkbox not True and False radio buttons (not end-user friendly) – or at least let us specify labels for True and False
    – Let us label the checkbox for NULL value for a parameter rather than “NULL” which is meaningless to 95% of end-users
    – Option for the designer to embed an auto-incrementing version number in the RDL that can be referenced as a built-in field in the report like VS can do for assembly versions
    – In the report viewer, an option to auto collapse the parameter bar after View Report is clicked to maximize screen space

    SSMS – I know it’s separate but…
    – Make Intellisense work for all T-SQL functions – some are useless – for example, for FORMAT() it is
    FORMAT(Param1 tinyint,Param2 nvarchar(1),Param3 nvarchar(1)) RETURNS nvarchar(1)
    gee thanks
    – When I Change Connection in a query pane, if the database I am currently in the query pane exists in the newly selected server, STAY IN THAT DATABASE instead of mindlessly going back to master
    – Add option when right-clicking on an object to do “Script (object) as” to “Append to current query editor window”

    T-SQL
    – Some way to do INSERT INTO #temptable EXEC storedproc, when #temptable doesn’t exist (creates it like SELECT… INTO)
    – Add the column name(s) to the error message “String or binary data would be truncated”. SQL KNOWS THE ANSWER! TELL US!
    – Similarly to above, in error messages like “Error converting data type X to Y”, tell us the column name(s) if available!

  • Better support for continuous integration. Currently, doing CI with SQL Server is quite a pain, especially if you need SSAS (if that’s fair to mention here?). We have a case in which we have both the source server and SSAS in different environments. That means, for example, that if you switch from DEV to SIT, you have the problem that you have to switch both the source DB and SSAS. That means you need to go into Model.asdatabase (a Tabular model in this case) and alter connection strings with search-and-replace in PowerShell.

  • A trace flag, server config option, or DB level config to disable cursors. ESPECIALLY nested cursors.
    (can you guess what I’m dealing with today?)

  • Dynamic PIVOT, REGEX, PAD/LPAD/RPAD, DISTINCT option on STRING_AGG, FILTER for window functions.

  • Named instance specific to an availability group, so the SQL Browser can provide that listening port instead of needing to provide it in the connection string.

    • Or have the Browser service interpret connections to the AG listener IP (with or without instance name) and supply the AG listener port number. After all, you can see in dm_exec_sessions that connecting to an AG name with the instance name that the IP address is that of the AG listener, but the port is that of the instance itself.

      This shouldn’t even be too difficult to back-port.

  • I know this is a known issue, but heck, I REALLY, REALLY want this. Like a Kid at Christmas want.
    SQL Server needs to let you see your errors better. When I have a stored procedure that is trying to catch errors and I use a BEGIN TRY .. END TRY BEGIN CATCH… END CATCH block, I only get the outermost error number and message. Either that, or stop catching and re-throwing some of the system errors (RESTORE LOG/DATABASE, hmm, hmm).
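    A minimal sketch of the complaint, using a deliberately failing restore (BrokenDB and the path are made up): the statement raises a chain of errors, but the CATCH block's error functions expose only one of them.

    ```sql
    BEGIN TRY
        RESTORE DATABASE BrokenDB FROM DISK = N'C:\backups\missing.bak';
    END TRY
    BEGIN CATCH
        -- Only the outermost error is visible here; the earlier,
        -- more descriptive errors in the chain are lost.
        SELECT ERROR_NUMBER() AS ErrNum, ERROR_MESSAGE() AS ErrMsg;
    END CATCH;
    ```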

  • I’d like to see the equivalent of Oracle’s MATCH_RECOGNIZE. I understand you may be able to simulate it in T-SQL, but it’s really complex

  • Just ran into something that Oracle supports but had to write something squirrely in SQL Server:
    ORDER BY col NULLS FIRST/LAST
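    The usual emulation today is a two-key sort; a sketch with placeholder names (dbo.t, col):

    ```sql
    -- NULLS LAST (ascending sorts put NULLs first by default in SQL Server):
    SELECT col
    FROM dbo.t
    ORDER BY CASE WHEN col IS NULL THEN 1 ELSE 0 END, col;
    ```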

  • Better support for SYNONYMs. As of right now, SYNONYMs are 2nd class citizens for IntelliSense and SSMS.

  • SQL has ALTER TABLE for LOCK_ESCALATION.
    I would like an option that means fail the evil SQL statement trying to hurt my nice DB server.
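    For reference, what exists today (dbo.BigTable is a placeholder); the wish is for a third behavior that fails the offending statement instead of escalating or merely deferring escalation:

    ```sql
    -- Current choices are TABLE (the default), AUTO, or DISABLE:
    ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = DISABLE);
    ```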

  • Carsten Saastamoinen
    November 24, 2017 7:06 am

    1. The ability to specify a default filegroup for a schema.
    2. Change Database Audit so it is possible to log parameter values special for the SELECT-statement.
    3. Check-constraint with sub-select.
    4. Combine Rebuild/Reorganize, specifying a percent of fragmentation for rebuild or reorganize.
    5. Dynamic in window-function : ROWS BETWEEN @Preced PRECEDING AND @Follow FOLLOWING.

  • Carsten Saastamoinen
    November 24, 2017 7:32 am

    If I’m inserting 1,000,000 rows into the two tables below, the first insert runs in 9 seconds and the last one in 131 seconds. A big difference, given that the second table’s bucket counts are the ‘correct’ specifications.

    CREATE TABLE dbo.Person_Hash
    (
        ID INT NOT NULL IDENTITY
            CONSTRAINT PK_Person_Hash PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Name VARCHAR(30) COLLATE Latin1_General_BIN2 NOT NULL,
        Street VARCHAR(30) COLLATE Latin1_General_BIN2 NOT NULL,
        Zipcode SMALLINT NOT NULL,
        Sex CHAR(1) COLLATE Latin1_General_BIN2 NOT NULL,

        INDEX hash_index_Person_Name HASH (Name) WITH (BUCKET_COUNT = 1000000),
        INDEX hash_index_Person_Street HASH (Street) WITH (BUCKET_COUNT = 1000000),
        INDEX hash_index_Person_Sex HASH (Sex) WITH (BUCKET_COUNT = 64)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
    GO
    CREATE TABLE dbo.Person_bc
    (
        ID INT NOT NULL IDENTITY
            CONSTRAINT PK_Person_bc PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 3000000),
        Name VARCHAR(30) COLLATE Latin1_General_BIN2 NOT NULL,
        Street VARCHAR(30) COLLATE Latin1_General_BIN2 NOT NULL,
        Zipcode SMALLINT NOT NULL,
        Sex CHAR(1) COLLATE Latin1_General_BIN2 NOT NULL,

        INDEX hash_index_Person_Name HASH (Name) WITH (BUCKET_COUNT = 3000),
        INDEX hash_index_Person_Street HASH (Street) WITH (BUCKET_COUNT = 100),
        INDEX hash_index_Person_Sex HASH (Sex) WITH (BUCKET_COUNT = 5)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

  • Carsten Saastamoinen
    November 24, 2017 7:57 am

    Just a wish that this ‘error’ will be corrected in the next version – it has been a problem in earlier versions and is still a problem in version 2017.

  • Ok, a “SQL Imprint” mode…. Basically, a system whereby you can install a fresh SQL instance, restore a database onto it, and then drop every user and permission from the restored database.
    You turn the mode on, and then fire off your application. Every “deny” is changed to a “grant” of the corresponding type.. so “select denied” becomes a “grant select on X to Y”… “delete denied” does the same….
    Once you turn off this mode 2 things happen:
    1) A script is generated of all the permissions you have just created.
    2) You are asked if you wish to keep the current configuration. You can then retain the new permissions, or rollback, allowing your application to execute with literally the bare minimal permissions required.

  • I wish they would make a way to have Management Studio refresh its suggestions and known table names. I’ll create a table in one window and then open another to actually manipulate it for my scripts, and all of the names will have red squigglies under them for the rest of the day. I use Red Gate and am able to get my IntelliSense refreshed through their tool, so I can still get the proper suggestions, but it’s annoying.

  • I would like to see GreC (Graph enriched by Cubes) implemented, for IoT scenarios.
    https://tel.archives-ouvertes.fr/tel-01443945/document

  • I would love to have a true duration data type; bounded or unbounded on either side, and have the ability to set indexes and unique keys on the value. This should be available for date, datetime, and time data types

  • Job Agent Improvements:
    * Be able to run specific job steps (not just the step to start at).
    * Run another job when one job finishes successfully (or maybe only if it fails).
    * Only run a job when file exists or skip job run.
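    The second item can be approximated today by making the last step of the first job call sp_start_job (the job name is hypothetical); the missing piece is doing this declaratively, including the run-on-failure case:

    ```sql
    -- Final step of the first job, reached only if the prior steps succeed:
    EXEC msdb.dbo.sp_start_job @job_name = N'Rebuild Reporting Cache';
    ```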

  • The top 3 features I would love to see in SQL Server 2018:
    1. Persistent Hekaton in-memory database using Intel XPoint Optane DIMMs with Intel’s upcoming Xeon platform in 2H 2018.
    2. Time-series delta compression in persistent Hekaton (storing only the delta in columnstore; it is implemented in KDB+, Vertica, and ClickHouse, which is why they are so fast and get compression ratios of 10:1 or even more)
    3. An FP16 storage type for machine learning; it’s so obvious you’d want this storage type in SQL Server 2018 so machine learning pipelines can do their magic using Hekaton

  • Ted Williamson
    February 8, 2018 3:19 pm

    I can’t believe we forgot: DEADLOCK trapping in TRY…CATCH blocks!!!

  • I would like to be able to explicitly say that I want multiple query plans for a query or stored procedure. Something like:


    -- The following annoying conditions are all too common in the code I’ve inherited,
    -- usually backing up some kind of web page that does a generic search
    where
    (xaction.TransactionNumber = @TransNo or @TransNo is null)
    or (acct.AccountNumber = @AcctNo or @AcctNo is null)
    or (cust.LastName = @LName or @LName is null)
    … — lots more like this
    -- Imagine having something like this:
    option (multiqueryplan (
    case
    when @TransNo is not null then 1 option ([more options])
    when @AcctNo is not null then 2 …
    when @LName is not null then 3 …

    end));

    The idea here is that if the transaction number is specified, you’d use a query plan called #1 (in Query Store or other communications). If not, but the account number is specified, you’d use a query plan called #2. And so on.

    Yes, yes, I know. This goes against the direction SQL Server is going, where all the smarts are in the optimizer. But no, adaptive query processing isn’t always going to be the answer.

    And yes, yes, I know. You can just duplicate the query (ugh). And you can duplicate a lot less with inline table-valued functions. There are other ways. Still … this is the one I’d like.
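    For context, the per-execution-plan workaround alluded to above, sketched with hypothetical table and column names:

    ```sql
    SELECT x.TransactionNumber, a.AccountNumber, c.LastName
    FROM dbo.Xactions AS x
    JOIN dbo.Accounts AS a ON a.AccountID = x.AccountID
    JOIN dbo.Customers AS c ON c.CustomerID = a.CustomerID
    WHERE (x.TransactionNumber = @TransNo OR @TransNo IS NULL)
      AND (a.AccountNumber = @AcctNo OR @AcctNo IS NULL)
      AND (c.LastName = @LName OR @LName IS NULL)
    OPTION (RECOMPILE); -- fresh plan per execution; NULL-parameter branches fold away
    ```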

    Well, OK, maybe this and a way to tell SQL Server to ignore its own cardinality estimator and use my estimates (for a fixed number of rows, a maximum number of rows, a percentage to meet a condition, etc.) instead.

    Sorry for coming late to the party!

  • After today’s experiences with SSMS 17.5: FULL SUPPORT for non-standard collations, especially case-sensitive ones.

  • I would like to see support for SPARQL (this is a query language a bit similar to SQL that works with RDF triples, used by DBpedia and similar large data projects). Either RDF triples could be stored in SQL Server itself using the new graph table types, or SQL Server would allow SPARQL queries against RDF endpoints. The point of this is RDF triples store a lot of useful data, and it would be great to be able to load this into SQL Server using SPARQL which is a standard (note DB2 and Oracle already support SPARQL).

    A quick and easy way of doing this in SQL Server would be to extend OPENROWSET to work with RDF endpoints using SPARQL syntax in OPENROWSET.

  • Andrew Tobin
    March 3, 2018 5:28 am

    Based on what a coworker wants to do, and I apparently can’t do… Cross-database Ownership Chaining on Views.
    We’d like to run dynamic-ish SELECT on views, and writes through stored procedures, but that’s a no go.

  • Thor van der Merwe
    April 3, 2018 6:34 am

    SSMS needs a better way to manage connections to High Availability replicas.
    I would like to supply the listener name, then get options to select read-intent or anything else pertinent to that AG.
    SSMS should also clearly display which replica the connection is on, e.g. a red “light” for primary and green for secondary.

  • Paul Woudwijk
    April 23, 2018 8:41 am

    1) Fractal indexes (way faster updates)
    2) More and better integration with .NET (write C# stored procedures and functions from Management Studio)
    3) Batches in queries: DELETE FROM ….. WITH (BATCHSIZE = 50000); this would in some cases keep the log file smaller and not block log backups
    4) Ability to create custom datatypes with custom calculations, e.g. matrix calculations

  • Ronald Klimaszewski
    June 26, 2018 11:29 am

    If a string is not found by CHARINDEX(), instead of returning a static 0, allow a custom default for the function, such as LEN(). This is different from STRING_SPLIT in that it would return a scalar.

    DECLARE @s nvarchar(50) = 'MYSERVER\MYINSTANCE'
    SELECT LEFT(@s, CHARINDEX('\', @s, 1, LEN(@s)))

    • Erik Darling
      June 26, 2018 1:11 pm

      Ronald, you can mimic that with built in functions now:

      DECLARE @s nvarchar(50) = 'MYSERVER\MYINSTANCE'
      SELECT LEFT(@s, ISNULL(NULLIF(CHARINDEX('\', @s), 0), LEN(@s)))

      It’s a bit clunky, yeah. Heh.

  • Yassine Elouati
    September 11, 2018 3:55 pm

    For Goodness sake, when will ALWAYSON have the option to sync Master and MSDB!? Some of us manage a large number of servers. This is a HUGE headache to manage. I am happy the distribution database is taken care of, despite the limitations.

    • Use SQL Server multi-server administration and Scale Out SSIS.

      I use this to check if a node is writable:

      DECLARE @CurrentDB VARCHAR(128) = DB_NAME()

      DECLARE @SQL1 VARCHAR(2000)
      SELECT @SQL1 = 'IF (SELECT sys.fn_hadr_is_primary_replica(''' + @CurrentDB + ''')) = 1
      BEGIN
      PRINT ''THIS ' + @CurrentDB + ' COPY IS THE PRIMARY REPLICA'';
      END;

      IF (SELECT sys.fn_hadr_is_primary_replica(''' + @CurrentDB + ''')) IS NULL
      BEGIN
      PRINT ''' + @CurrentDB + ' IS NOT AN AAG REPLICA. CONTINUE JOB.'';
      END;
      IF (SELECT sys.fn_hadr_is_primary_replica(''' + @CurrentDB + ''')) = 0
      BEGIN
      PRINT ''THIS ' + @CurrentDB + ' COPY IS A SECONDARY REPLICA. EXITING JOB WITH SUCCESS.'';
      DECLARE @JobID uniqueidentifier = N''$(ESCAPE_NONE(JOBID))''
      EXEC msdb.dbo.sp_stop_job @job_id = @JobID;
      END;
      --'

      EXEC (@SQL1);

      • My MSX server is a failover cluster instance and I keep the SSIS catalog in an AAG. I keep a warm management server offsite that contains the management FCI in an AAG and use the SSIS replicate jobs task to the remote node.

        • Yassine Elouati
          July 6, 2019 12:29 pm

          Thanks for your comment. I have many instances. I use a very similar technique for jobs and use dbatools to sync objects (master/jobs). That said, the solution is not complete: not all of master’s information transfers over, and it requires management overhead, exacerbated when you are responsible for MANY clusters with many nodes, sync and async. SQL 2019 is promised to address this issue; I cannot wait.

  • Just got an alert there were posts on this, have some more thoughts:

    Add at least a bulk-logged option, if not some new logging option, in AAGs that allows minimal or simple-like logging within AG or replicated databases. Maintaining log backup jobs that immediately delete trn files solely to control log growth on reporting or data-staging databases is getting extremely old for databases we don’t even back up.

    For God’s sake improve the configuration process for HA replication subscribers and publishers. Talk about a nightmare.

    Improve/expand multi-threaded inserts

  • Steven J Neumersky
    July 6, 2019 9:17 am

    No question, two things: application-versioned temporal tables from the ANSI SQL:2011 standard, and row pattern recognition from the ANSI SQL:2016 standard.

  • Yassine Elouati
    July 8, 2019 9:34 am

    I saw it in MS presentations, but here is a link that ought to confirm it:
    “In SQL Server 2019, Microsoft is correcting this by addressing both Master and MSDB, which will simplify the administration of availability groups. ”
    https://redmondmag.com/articles/2018/12/03/sql-server-2019-at-pass-summit.aspx

    • I’d be really careful taking the word of third parties. If it’s not documented yet, it isn’t supported – so it may not make the cut for release.

  • A SQL Server monitoring tool like Oracle’s Grid Control that keeps at least a week of history. If I am not mistaken, this is sorely needed. Microsoft did try with a little operational data warehouse that stored the history of SQL statements executed on the system, their performance numbers, and DMV snapshots at the time, but it took too much fussing to get working and too much monkey business to convince a boss to spend the time on it. We need an activity monitor that can go back at least a week. I still haven’t seen anything like Grid Control for SQL Server, other than Grid Control itself, and I haven’t used that against SQL Server, so I can’t speak to it. How do I answer the question “something happened with the SQL Server database last night while I was asleep” other than by digging through log files?

    • Yeah, that’s a great question. I have no idea how you could possibly find out what happened with SQL Server. If someone could figure it out, wow, they could make tons of money.

      I should probably send that tip to my friends at Quest, Idera, SentryOne, Redgate, and SQLgrease. Who knows, they might even be able to make money building something like that!

      • Steven J Neumersky
        July 20, 2019 12:11 pm

        I also remember Jaime MacLennan proposing some ideas using the SQL Server Data Mining Engine within ETL to do predictive analytics, but that’s from 2008. I’ve done some ETpL (Extract, Transform, Predict, Load) with it in order to PREDICT a server crash…..as opposed to looking back in the log(s). However, compared to what exists with Power BI and Azure ML it is now perhaps akin to using Rocks and Sticks.

  • Steven J Neumersky
    July 20, 2019 12:05 pm

    I remember Paul Nielsen dedicating a chapter to MDW (Management Data Warehouse). I roll my own as needed to monitor query statistics and index usages by employee, but I shouldn’t have to…..

