Using SQL Server’s Table Valued Parameters

Table valued parameters (TVPs) are nothing new – they were introduced in SQL Server 2008. TVPs are a great way to get data into SQL Server, and yet very few developers use them.

Getting Started with TVPs

In order to get started, we need to create a user defined type. This type is necessary so we have a well defined interface between SQL Server and the outside world – you can't just pass a bunch of junk in as a parameter to SQL Server… well, you could, but that's called XML.

Let’s create that user defined type:
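A minimal sketch of the type – the column list here is an assumption (modeled loosely on AdventureWorks), so swap in whatever columns you actually need to pass:

```sql
/* Column names are an assumption for illustration;
   adjust them to fit the data you need to move. */
CREATE TYPE dbo.SalesPersonTerritory AS TABLE
(
    SalesPersonID INT NOT NULL,
    TerritoryID   INT NOT NULL,
    PRIMARY KEY (SalesPersonID, TerritoryID)
);
```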

Now we can create variables using the dbo.SalesPersonTerritory type whenever we want. It’s just this easy:
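For example (column names are an assumption – match them to your own type definition):

```sql
DECLARE @SalesPersonTerritories dbo.SalesPersonTerritory;

INSERT INTO @SalesPersonTerritories (SalesPersonID, TerritoryID)
VALUES (274, 1),
       (275, 2);

SELECT SalesPersonID, TerritoryID
FROM   @SalesPersonTerritories;
```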

Using Table Valued Parameters With Stored Procedures

Having a special table type is really convenient, but it doesn’t help if you can’t use it, right? Let’s assume that you’re calling a stored procedure and you’re sick of joining lists of strings on the client and then splitting them apart on the server. What you really need is the ability to pass a fully fledged table across the wire and into SQL Server.

This is relatively easy to accomplish. From C#, the code would look something like this:
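Something like the following sketch – the connection string, the procedure name (dbo.InsertSalesPersonTerritory), and the parameter name are placeholders, but the Structured parameter type and the TypeName property are the parts that matter:

```csharp
using System.Data;
using System.Data.SqlClient;

// Placeholder connection string – point it at your own server.
var connectionString = @"data source=.;initial catalog=TVPTester;integrated security=SSPI;";

// Build a DataTable whose columns match the table type.
var salesPersonTerritories = new DataTable();
salesPersonTerritories.Columns.Add("SalesPersonID", typeof(int));
salesPersonTerritories.Columns.Add("TerritoryID", typeof(int));
salesPersonTerritories.Rows.Add(274, 1);
salesPersonTerritories.Rows.Add(275, 2);

using (var connection = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.InsertSalesPersonTerritory", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // SqlDbType.Structured is what marks the parameter as a TVP.
    var tvp = cmd.Parameters.AddWithValue("@SalesPersonTerritories", salesPersonTerritories);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.SalesPersonTerritory";

    connection.Open();
    cmd.ExecuteNonQuery();
}
```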

That’s really all there is to it.

C# Tips for Table Valued Parameters

There are a few other things that developers can do to make their lives easier when working with table valued parameters. A DataTable, just like a table in SQL Server, should have types declared for all columns. While it's easy enough for developers to create these tables on the fly, that won't cut it in production code – boilerplate code means that people can make mistakes.

To make life easier, developers can create methods in their application to make it easier to work with table valued parameters. We can create a special chunk of code that will make it easy for developers to instantiate and use a DataTable that matches up with the table valued parameter.
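For example, a hypothetical factory method that owns the shape of the dbo.SalesPersonTerritory type in one place, so nobody mistypes a column on the fly:

```csharp
using System.Data;

public static class SalesPersonTerritoryTvp
{
    // One place in the codebase knows the shape of dbo.SalesPersonTerritory;
    // callers get a correctly typed, empty DataTable to fill.
    public static DataTable Create()
    {
        var table = new DataTable("SalesPersonTerritory");
        table.Columns.Add("SalesPersonID", typeof(int));
        table.Columns.Add("TerritoryID", typeof(int));
        return table;
    }
}
```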

Seasoned developers will even create methods that let them drop a list of objects straight into an appropriate DataTable. There are many different ways to build convenience methods into code and make it easy for development teams to work with TVPs.
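One common approach is a small reflection-based extension method. The sketch below assumes simple scalar properties whose declaration order matches the column order of the table type:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;

public static class DataTableExtensions
{
    // Build a DataTable from the public properties of T.
    public static DataTable ToDataTable<T>(this IEnumerable<T> items)
    {
        var properties = typeof(T).GetProperties();
        var table = new DataTable(typeof(T).Name);

        foreach (var property in properties)
        {
            // Unwrap Nullable<T> so the DataColumn gets the underlying type.
            var columnType = Nullable.GetUnderlyingType(property.PropertyType)
                             ?? property.PropertyType;
            table.Columns.Add(property.Name, columnType);
        }

        foreach (var item in items)
        {
            // Nulls become DBNull so SQL Server sees them as NULL.
            table.Rows.Add(properties
                .Select(p => p.GetValue(item) ?? DBNull.Value)
                .ToArray());
        }

        return table;
    }
}
```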

For simple data type matching, this works well. If you're dealing with more complex data types, you'll want to check out SQL-CLR Type Mapping to make sure you get the right data type. You'll notice that some data types (varchar, for instance) have no direct counterpart in the .NET Framework. Sometimes you just have to lose some fidelity – make wise decisions, it gets crazy out there.

These same techniques can be used with a string of ad hoc SQL, too. TVPs aren’t limited to stored procedures, they can be used anywhere that you are executing parameterized code.
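A sketch of the ad hoc variant – note that TypeName is still required, and CommandType stays at its default of Text (the connection string and table contents are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

var connectionString = @"data source=.;initial catalog=TVPTester;integrated security=SSPI;";

var salesPersonTerritories = new DataTable();
salesPersonTerritories.Columns.Add("SalesPersonID", typeof(int));
salesPersonTerritories.Columns.Add("TerritoryID", typeof(int));
salesPersonTerritories.Rows.Add(274, 1);

using (var connection = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT t.SalesPersonID, t.TerritoryID FROM @SalesPersonTerritories AS t;",
    connection))
{
    // CommandType stays at its default (Text) for ad hoc SQL.
    var tvp = cmd.Parameters.AddWithValue("@SalesPersonTerritories", salesPersonTerritories);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.SalesPersonTerritory"; // still required outside stored procedures

    connection.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // consume rows
        }
    }
}
```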

Gotchas of TVPs

There are two big gotchas with TVPs.

First: the table variable that comes in as a table valued parameter cannot be changed. You’re stuck with whatever values show up. No inserts, updates, or deletes can be applied.

Second: table valued parameters are still table variables – they get terrible cardinality estimates.

We can get around both of these problems with the same technique – copy the contents of the TVP into a temp table. Although it adds an extra step to using the TVP, I've found that copying the contents of the TVP to a temporary table leads to better execution plans and much less confusion during development and troubleshooting.
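A sketch of the pattern, with a hypothetical procedure and column names:

```sql
CREATE PROCEDURE dbo.ProcessSalesPersonTerritories
    @SalesPersonTerritories dbo.SalesPersonTerritory READONLY
AS
BEGIN
    /* Copy the read-only TVP into a temp table. The temp table can be
       modified and, unlike a table variable, carries real statistics. */
    SELECT SalesPersonID, TerritoryID
    INTO   #SalesPersonTerritories
    FROM   @SalesPersonTerritories;

    /* work against #SalesPersonTerritories from here on */
END;
```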

Summary – Using Table Valued Parameters isn’t Hard.

TVPs aren't difficult to use. They're just different and require a different mindset. By using TVPs, developers can pass many rows into a stored procedure and create far more complex logic and behavior than is possible using only single-valued parameters.


34 Comments

  • Hi Jeremiah,
The code you have provided won't compile or work. Here is an adjusted version that I was able to execute.

    string cnString = @"data source=.;initial catalog=TVPTester;user id=tvptester;password=tvptester;";
    using (var connection = new SqlConnection(cnString))
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM @tvp;", connection))
    {
        var pList = new SqlParameter("@tvp", SqlDbType.Structured);
        pList.TypeName = "dbo.SalesPersonTerritory";
        pList.Value = SalesPersonTerritoryTable();
        cmd.Parameters.Add(pList);

        connection.Open();
        using (var dr = cmd.ExecuteReader())
        {
            while (dr.Read())
            {
                // consume rows
            }
        }
    }

    Thanks for the post. I had no idea you could do this.

  • How would this work with Report Server?

  • Good article. Hopefully it will give me insight into determining what is causing the following error when using a table valued param. I’m thinking putting the data into a temp table is a good start in troubleshooting.

    “The data for table-valued parameter XXXXX doesn’t conform to the table type of the parameter.”

    • Jeremiah,

      I’ve tried to implement the above, though the SQL portion of things won’t cooperate. Using your “type” name as an example, I’ve tried to create the following stored procedure:

      @LIST as dbo.SalesPersonTerritory

      though I get the following error:

      Msg 2715, Level 16, State 3, Procedure #TEST, Line 2
      Column, parameter, or variable #1: Cannot find data type dbo.SalesPersonTerritory.
      Parameter or variable ‘@LIST’ has an invalid data type.
      Msg 1087, Level 16, State 1, Procedure #TEST, Line 8
      Must declare the table variable “@LIST”.

      I ran the create script, and the datatype does appear when I query sys.types…

      FYI, when I try to use it as a declared variable it works fine… it’s only in the stored procedure that I’m getting errors.

      What am I doing wrong?


      • Jeremiah,

        I discovered that it would accept it if I did it as a regular stored procedure; the temp portion is what threw it off… why would a temp proc not recognize a user defined type?


        • I suspect it’s because the temporary stored procedure is created within tempdb. That is, after all, where temp tables live. You’d have to three part name the UDT since UDTs are local to a specific database.

      • Assuming you have actually created the type dbo.SalesPersonTerritory correctly, you left out the READONLY keyword in the parameter list of the stored proc. This is mandatory. It should look like

        @LIST dbo.SalesPersonTerritory READONLY

  • Thanks for the article & user tips especially the C# snippet.

    One thing I would add about User-defined table types as TVPs is that they implicitly default to an empty table. When used as a parameter for a stored procedure, attempting to set them = NULL as a default makes no sense.

    It is then possible to test if the TVP has been passed in empty (or not at all) by checking IF EXISTS any rows.
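    For example (procedure and parameter names are hypothetical):

    ```sql
    CREATE PROCEDURE dbo.ProcessTerritories
        @SalesPersonTerritories dbo.SalesPersonTerritory READONLY
    AS
    BEGIN
        IF NOT EXISTS (SELECT 1 FROM @SalesPersonTerritories)
        BEGIN
            /* the caller omitted the TVP, or passed an empty one */
            RETURN;
        END;

        /* normal processing here */
    END;
    ```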

  • I’ve been using TVPs for a little while now, and one of the really neat tricks I’ve found that isn’t well-known is the use of the SqlDataAdapter.FillSchema method to add correctly-typed columns for a table (or type) to a DataTable. By using code like the example below, the .NET framework will populate a DataTable with the set of columns for the table type, including constraints for not null and string length. The constraints are particularly useful for catching individual rows that might cause entire bulk inserts or updates to fail, since DataTable.Rows.Add will throw an exception if such constraints are violated. To keep from needing an extra round trip to the server to retrieve the schema for every DataTable, a DataTable with the schema and no entries can be cached, and DataTable.Clone can be used to make new tables as necessary.

    CREATE TYPE MyTableType AS TABLE
    (
        Col1 INT NOT NULL,
        Col2 INT NOT NULL
        -- (remaining columns elided in the original comment)
    );


    string cnString = @"data source=.;initial catalog=TVPTester;user id=tvptester;password=tvptester;";

    using (SqlConnection connection = new SqlConnection(cnString))
    using (DataTable dataTable = GetDataTableForType(connection, "MyTableType"))
    {
        DataRow dataRow = dataTable.NewRow();
        dataRow["Col1"] = 15;
        dataRow["Col2"] = 10;
        dataRow["Col3"] = "MyString";

        // if any schema constraints are not met, an exception will be thrown out of this call
        dataTable.Rows.Add(dataRow);

        // push the data using a SqlCommand and a Structured parameter
    }

    const string getSchemaCommand = "DECLARE @schemaTable [{0}]; SELECT * FROM @schemaTable";
    static readonly Dictionary<string, DataTable> schemaTableCache =
        new Dictionary<string, DataTable>(StringComparer.InvariantCultureIgnoreCase);

    static DataTable GetDataTableForType(SqlConnection connection, string typeName)
    {
        DataTable schemaTable;

        // check to see if the table schema is already cached
        if (!schemaTableCache.TryGetValue(typeName, out schemaTable))
        {
            schemaTable = new DataTable(typeName);

            // retrieve the table schema from the database; the adapter will open the connection if it is not
            // already open and close it after completing if it was initially closed
            using (SqlCommand getTableCommand = new SqlCommand(string.Format(getSchemaCommand, typeName), connection))
            using (SqlDataAdapter adapter = new SqlDataAdapter(getTableCommand))
            {
                // Source = don't apply adapter transformations (none are defined on this adapter)
                adapter.FillSchema(schemaTable, SchemaType.Source);
            }

            // cache the schema
            schemaTableCache[typeName] = schemaTable;
        }

        // return a clone of the schema reference table
        return schemaTable.Clone();
    }

  • Great post!

    With this sample I always got an empty table in the procedure; I fixed it by forcing the command to be a stored procedure call:

    cmd.CommandType = CommandType.StoredProcedure;

    It might help someone else!


  • Don Schaeffer
    July 8, 2015 3:14 pm

    A trace reveals that execution from .net code generates an insert into a variable of the table type for each row in the table being passed. Example:

    declare @p1 rvfwa.AmountPaidType
    insert into @p1 values(N'3',N'0',N'0',N'3.750000',N'2',N'7.037300000000',N'2',N'1.820000',N'1',N'0',N'0')
    insert into @p1 values(N'3',N'13707',N'0',N'3.750000',N'2',N'7.037300000000',N'2',N'1.820000',N'1',N'0',N'0')

    When passing thousands of rows this is none too fast (30 seconds for 10,000 rows). I saw somewhere that the data is bulk inserted into the table variable; this appears not to be the case.

    • Don – if you’re using Profiler traces, be aware that it doesn’t necessarily show exactly the same thing your app is doing in cases like this.

      • Unfortunately, I can confirm that .NET natively INSERTs rows via separate INSERTs (instead of one big). It is valid for both most common methods of TVP creation (via DataTable and via SqlMetaData).
        We were suffering huge latch contention because of that.

  • Don Schaeffer
    July 24, 2015 11:52 am

    We just encountered another problem with table parameters. Our application allowed a user to generate more than a million items that populated a table parameter passed to a stored proc. The result was a number of error 701 messages: “There is insufficient system memory in resource pool ‘default’ to run this query.”, followed by “FAIL_PAGE_ALLOCATION” and a bunch of “* is marked for unload due to memory pressure”.

    My advice is to use table parameters with a bit of caution and to keep the number of records passed relatively small, say less than 1,000, and certainly less than 100,000.

  • Abhaysingh N. Rajpurohit
    November 25, 2015 7:19 am

    namespace MyDemoApplication
    {
        using EntityFrameworkExtras.EF6;

        /// Represents User Defined Type for database
        public class UserDefinedType
        {
            [UserDefinedTableTypeColumn(1)]
            public int ID { get; set; }

            [UserDefinedTableTypeColumn(2)]
            public int Value { get; set; }
        }
    }

    Hey, in my above example, instead of using a number ([UserDefinedTableTypeColumn(2)]), can we use the column name of the UDT?

    • I don’t know. You should post this question on StackOverflow or search the MSDN documentation for Entity Framework.

      • Hi, I have created a type which will have 1 record passed to a stored procedure
        Procedure declaration is as follows
        alter PROCEDURE [wsInterface].[spErpMesProductionOrder]
        @TVP dbo.ErpMesProductionOrderHdrType READONLY

        -- Why am I getting the error “The parameter @TVP can not be declared READONLY since it is not a table-valued parameter”?

        One column in the type is ProductionOrder and I want to validate the value before doing anything else as follows

        if (@TVP.ProductionOrder = 0) 'Do some validation'

        -- Does @TVP.ProductionOrder not hold the value at this point?

  • Hi guys, I am wondering about cardinality for table type parameters. I have found an execution plan in the plan cache of type “Prepared”, “Compiled Plan” for the following query (anonymized):

    (@tvpID [dbo].[T_IdListDataType] READONLY)
    WHERE Table1.FKID IN(SELECT tvpID.Id FROM @tvp tvpID)

    The table type parameter definition (Clustered Primary Key):

    CREATE TYPE [dbo].[T_IdListDataType] AS TABLE(
    [Id] [int] NOT NULL

    My opinion was that the cost based optimizer always expects one row for table variables. But when I take a look into the execution plan, I can see it estimated 19369 rows for the Clustered Index Scan at the table variable:

    Does someone know why? I do not have a problem with this issue, because I do want the optimizer to estimate so many rows, because the query is called with that many rows in real. But I really wonder in which cases the engine works like that.

    • Oh, it did not accept the xml:

      RelOp NodeId="2" PhysicalOp="Clustered Index Scan" LogicalOp="Clustered Index Scan" EstimateRows="19369"

      SQLServer 2008 SP1

    • Fritz – if you don’t have a problem, I’d focus on things where you have problems.

  • Hi all,
    is there any query to get the structure of a table valued parameter (TVP)?
    Please let me know if there is.


  • Hello,
    I am using table variables in the context of a report web-based application to allow some (a lot of) flexibility.
    I created a system that splits all data into different temp tables, and uses a decision matrix to call the procedure I need with the different parameters and solution. This allows me to create different sets of procedures that are optimized and stored in SQL Server. Depending on the presence or absence of parameters, the code calls the right procedure. Each procedure uses some table variables to allow joins on subsets of data and reduce the number of records to be processed by the calculation.
    This works very well, but sometimes I have more data coming into the table, and SQL Server unfortunately creates a bad execution plan because it thinks that the table will have only one record…
    When there are several million records, a scan with a hash match on top kills the performance and the application times out…
    I tried a local table variable with a clustered key, I tried to define my table type with a clustered index key… but whatever I do, SQL Server seems to ignore the number of records it just imported into the table.
    The table type is convenient as it permits me to use sp_executesql.
    I know that we can use another architecture, I know I can use SSAS or Reporting Services… but I don't have the choice right now and need to work with this solution…
    Do you have an idea of how I could use those table variables and let SQL Server know how many records are in them?
    I will try the temp table, but I would like to know if there are nicer solutions.

    • I forgot to add that I use SQL Server 2014 SP2.
      I just found a way by using trace flag 2453. The execution plan now shows accurate row counts, and the query executes in 4 seconds instead of nearly 3 minutes. I have a base set of 2 million rows, so now the timings are acceptable for a test server…
      If you have another solution, let me know; I am curious, and I need to gain every second I can.

      • I also needed to allow my user application to impersonate a sysadmin user for the duration of the procedure call…
        Is there another solution that allows the use of the DBCC TRACEON command without giving very high privileges to the user? This is quite stupid. Just allow the fix for some users, as the code is to be used by any user…

  • You suggest that “table valued parameters are still table variables – they get terrible cardinality estimates so copy the contents of the TVP into a temp table. Although it adds an extra step to using the TVP, it leads to better execution plans and much less confusion during development and troubleshooting.”

    I’m a relative noob creating a Function that takes UDTs as parameters — but found that functions can’t use temp tables (only table vars). Is there still a benefit (i.e. in a function) to copy the UDTs into a table variable and then using the table var values in the Query?

  • I am passing a table type parameter to a child SP which is on another database on the same server.
    It gives error 50000, level 16, state 2: incompatibility on table types.
    I have the same table type in both databases.

    Any suggestion on this?