Tuesday, October 9, 2012

Dtexec error: The version number in the package is not valid. Package migration from version 6 to version 3 failed with error 0xC001700A


I got the following error when I tried executing an SSIS package from dtexec:

Microsoft (R) SQL Server Execute Package Utility
Version 10.0.2531.0 for 64-bit
Copyright (C) Microsoft Corp 1984-2005. All rights reserved.

Started: 8:10:18 AM
Error: 2012-10-10 08:10:18.97
   Code: 0xC001700A
   Source:
   Description: The version number in the package is not valid. The version number cannot be greater than current version number.
End Error
Error: 2012-10-10 08:10:18.98
   Code: 0xC0016020
   Source:
   Description: Package migration from version 6 to version 3 failed with error 0xC001700A "The version number in the package is not valid. The version number cannot be greater than current version number.".
End Error
Error: 2012-10-10 08:10:18.98
   Code: 0xC0010018
   Source:
   Description: Error loading value "<DTS:Property xmlns:DTS="www.microsoft.com/SqlServer/Dts" DTS:Name="PackageFormatVersion">6</DTS:Property>" from node "DTS:Property".
End Error
Could not load package "ExecuteSql_Sample.dtsx" because of error 0xC0010014.
Description: The package failed to load due to error 0xC0010014 "One or more error occurred. There should be more specific errors preceding this one that explains the details of the errors. This message is used as a return value from functions that encounter errors.". This occurs when CPackage::LoadFromXML fails.
Source:

 

Solution:

This error happened because I developed the package using SQL Server Data Tools (the 2012 successor to BIDS), and the log shows that an older dtexec (version 10.0, i.e. SQL Server 2008) is trying to migrate the package down to its own lower version.

The solution is to execute dtexec from the version of SQL Server that you developed the package with. In my case that is SQL Server 2012, so the path of the dtexec you need to use is C:\Program Files\Microsoft SQL Server\110\DTS\Binn. You can update the Path environment variable to move this directory ahead of the other dtexec paths.
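For example, you can call that dtexec explicitly by its full path, using the sample package from the log above:

"C:\Program Files\Microsoft SQL Server\110\DTS\Binn\dtexec.exe" /F "ExecuteSql_Sample.dtsx"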

Sunday, October 7, 2012

Properties window BackUp Database SSIS

Today, while trying to refresh my knowledge of SSIS, I tried creating a sample SSIS package with the 'Back Up Database' task.

One silly issue I encountered: when I opened the properties window of the task, the window exceeded the screen height, and the buttons at the bottom for finishing the configuration were hidden below the taskbar on my new 18.5-inch LED monitor.

I checked whether there was a way to resize or move the window, but there isn't. So I had to do this: hide the taskbar, then click the buttons to finish the configuration.

How to hide the taskbar: right-click the taskbar -> select Properties -> uncheck 'Lock the taskbar' and check 'Auto-hide the taskbar' -> Apply -> OK. Now you should be able to click the buttons and complete the configuration of the Back Up Database task.

I agree I could have done this by changing my screen resolution, which I did not want to do, as I am quite comfortable with my current one. Hope Microsoft provides a resizable window in the next version of SQL Server Data Tools (formerly BIDS) :)

Saturday, October 6, 2012

Package fails validation

If your package does not execute and you find the following errors during package validation, you are trying to execute the package in 64-bit mode, which is not supported for some SSIS components such as the Excel Source and Excel Destination (the Microsoft.Jet.OLEDB.4.0 provider they use has no 64-bit version).


[Excel Destination [23]] Error: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.  The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0209303.  There may be error messages posted before this with more information on why the AcquireConnection method call failed.
[SSIS.Pipeline] Error: Excel Destination failed validation and returned error code 0xC020801C.
[Connection manager "Excel Connection Manager"] Error: The requested OLE DB provider Microsoft.Jet.OLEDB.4.0 is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode. Error code: 0x00000000.
An OLE DB record is available.  Source: "Microsoft OLE DB Service Components"  Hresult: 0x80040154  Description: "Class not registered".


Resolution:
You can execute the package in 32-bit mode, as the Progress tab suggests. For this, go to the Project menu -> select <Project> Properties -> Configuration Properties -> Debugging -> Debug Options -> Run64BitRuntime. By default this option is True on a 64-bit machine; set it to False.


Saturday, September 29, 2012

Set build order for projects in a solution

From the properties of the solution, select Project Dependencies and, for each project in your solution, specify the projects it depends on. Once you specify the dependencies, Visual Studio sets the build order automatically for you.

Tuesday, September 25, 2012

The conversion of a varchar data type to a smalldatetime data type resulted in an out-of-range value.


You get this error because you are trying to query with a date not supported by SQL Server's smalldatetime datatype.

Note that smalldatetime only allows dates from January 1, 1900 through June 6, 2079.

When you need dates beyond this range to be stored in the database, specify the datatype of the field in the table as datetime instead of smalldatetime.
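A minimal repro of both sides (the values here are hypothetical; any date outside the range behaves the same way):

DECLARE @d smalldatetime
-- Fails with the out-of-range error above: 1899-12-31 precedes smalldatetime's range
SET @d = CAST('1899-12-31' AS smalldatetime)

DECLARE @d2 datetime
-- Succeeds: datetime covers January 1, 1753 through December 31, 9999
SET @d2 = CAST('1899-12-31' AS datetime)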


 

System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.

You get this exception when the datetime in your WHERE clause is outside the range specified in the error.
This is because SQL Server's datetime type only stores dates between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM, as the error specifies; a common cause is an uninitialized .NET DateTime (which defaults to 1/1/0001) being passed as a parameter.
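The same boundary can be seen directly in T-SQL (a hypothetical one-liner; the value predates the datetime range):

SELECT CAST('1752-12-31' AS datetime)  -- fails: out-of-range value for datetime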

Sunday, September 23, 2012

SSIS package hangs

I worked through a seemingly strange issue in production some time back: a SQL job scheduled to run daily on the prod server hung every now and then. When it hung, the SSIS package never completed execution, even days later, so it had to be carefully monitored and restarted each time. The SSIS logs showed only validation messages and no execution at all.
After a lot of research and working with the DBA, it turned out that the package truncates and reloads some tables, and a report was fetching data from one of those same tables. The package hangs when the report executes at the same time the package starts: the report holds locks on the table the package tries to truncate, so the truncate never happens and the package stays blocked.
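One way to confirm this kind of lock blocking while the package appears hung (a generic sketch, not specific to that incident) is to query the standard DMV:

select session_id, blocking_session_id, wait_type, wait_resource
from sys.dm_exec_requests
where blocking_session_id <> 0  -- sessions currently blocked by another session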

Saturday, September 22, 2012

Querying from an xml column Sql server


For instance, if you want to query the names of all customers from a customer column that stores xml, you need to provide the XPath of the particular element within the xml to the query() method:

select CustomerName = <XmlColumn>.query('data(//Name)')
from <Customer>
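A concrete sketch, assuming a hypothetical Customers table with an xml column CustomerData holding <Customer><Name>...</Name></Customer> fragments:

select CustomerName = CustomerData.query('data(//Name)')
from Customers

Note that query() returns xml; if you want a scalar string instead, value() does that:

select CustomerName = CustomerData.value('(//Name)[1]', 'nvarchar(100)')
from Customers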

(400) Bad Request with WCF Data Services


The remote server returned an error: (400) Bad Request.

Getting this error when you call the SaveChanges() of your Entity context through your WCF Data Service?


InnerException is null. Response object's StatusCode: System.Net.HttpStatusCode.BadRequest.

No further information available in exception object.

 

Some context on the exception:

I faced this error because of an insufficient request size in my service config file. After a lot of debugging and trial and error, I found that a huge xml contained in one of the entities was actually causing the issue. I further found that SaveChanges succeeds when the xml is small, and was thus able to determine that the issue was the request size.

 

Resolution:

Increase the request size in the config file of the project where your data service resides, and this should resolve the issue:

<binding name="DataServiceBinding" ..... maxBufferSize="2147483647" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" .......>

Create missing indexes to improve your Database performance


Here is how you can create the missing indexes to improve your database performance.

 

Step-1:

Query to identify the indexes that are missing. SQL Server provides DMVs (Dynamic Management Views) which help you get performance statistics for your DB.

Execute the following query:

SELECT user_seeks * avg_total_user_cost * ( avg_user_impact * 0.01 ) AS [index_advantage] ,
       migs.last_user_seek ,
       mid.[statement] AS [Database.Schema.Table] ,
       mid.equality_columns ,
       mid.inequality_columns ,
       mid.included_columns ,
       migs.unique_compiles ,
       migs.user_seeks ,
       migs.avg_total_user_cost ,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH ( NOLOCK )
     INNER JOIN sys.dm_db_missing_index_groups AS mig WITH ( NOLOCK )
         ON migs.group_handle = mig.index_group_handle
     INNER JOIN sys.dm_db_missing_index_details AS mid WITH ( NOLOCK )
         ON mig.index_handle = mid.index_handle
WHERE mid.database_id = DB_ID()
ORDER BY index_advantage DESC ;

 

Step-2:

Analyse which indexes you need to create

Identify which stored procedures and/or applications query the tables these indexes would be created on. You need to carefully judge and prioritize which of the indexes returned by the previous query would actually help you. One thing to keep in mind is the read/write ratio: if you have multiple applications, one that only reads from a table and another that writes far more often than the first one reads, it may be better not to create the index at all, or to create the index dynamically just before the reading application uses the table and drop it once its queries complete, as sketched below.
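A hypothetical sketch of that create-use-drop pattern; the table, index, and job names are made up for illustration:

-- Create the index just before the read-heavy workload runs
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate);

-- Hypothetical job that reads heavily from dbo.Orders
EXEC dbo.MonthlyReportingJob;

-- Drop the index again so write-heavy applications do not pay its maintenance cost
DROP INDEX IX_Orders_OrderDate ON dbo.Orders;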

[If you are working with a database where you are not sure what or how many applications use the tables the indexes refer to, you might want to try this trick: create the indexes in descending order of priority from the results of the query above, stopping once the index_advantage falls below the level you think will help DB performance. Then execute all the unit tests from all the applications, and you can safely remove the indexes that do not show a good read/write ratio. To ensure you have retained the right indexes, you can capture a trace from prod and replay it on test, or feed it to the Database Engine Tuning Advisor for better analysis.]

Remember: too many indexes can decrease rather than improve DB performance.

 

Step-3:

Create the indexes from the missing-indexes results using the format below, picking the ones you want and ignoring the ones you do not require.

CREATE NONCLUSTERED INDEX <IX_<TableName>_<ColumnNames>>
    ON <TableName> (<EqualityColumns>, <InequalityColumns>)
    INCLUDE (<IncludedColumns>)
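For instance, if a hypothetical result row suggested equality column CustomerId, inequality column OrderDate, and included column TotalDue on an Orders table, that would translate to:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalDue)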

 

Step-4:

Observe the performance of the created indexes, and drop any that turn out to be redundant or unhelpful.

After a period of time in which you think all the regular load from the applications has hit the database, get the statistics for the indexes you created.

Note that to trust the results of the queries below, your DB server should have been restarted long enough ago and should have received representative load, as these DMVs only hold data accumulated since the last restart of the server.

Execute the following queries with the indexes you created:

 

declare @dbid int
select @dbid = db_id()

select objectname = object_name(s.object_id), indexname = i.name, i.index_id
     , reads = range_scan_count + singleton_lookup_count
     , leaf_writes = leaf_insert_count + leaf_update_count + leaf_delete_count
     , leaf_page_splits = leaf_allocation_count
     , nonleaf_writes = nonleaf_insert_count + nonleaf_update_count + nonleaf_delete_count
     , nonleaf_page_splits = nonleaf_allocation_count
from sys.dm_db_index_operational_stats(@dbid, NULL, NULL, NULL) s
     inner join sys.indexes i
         on i.object_id = s.object_id
        and i.index_id = s.index_id
where objectproperty(s.object_id, 'IsUserTable') = 1
  and i.name in (<Index names you created>)
order by reads desc, leaf_writes, nonleaf_writes

select objectname = object_name(s.object_id), indexname = i.name, i.index_id
     , reads = user_seeks + user_scans + user_lookups
     , writes = user_updates
from sys.dm_db_index_usage_stats s
     inner join sys.indexes i
         on s.object_id = i.object_id
        and i.index_id = s.index_id
where objectproperty(s.object_id, 'IsUserTable') = 1
  and s.database_id = @dbid
  and i.name in (<Index names you created>)
order by reads desc
go

 

Step-5:

Compare the results of the two queries above to verify that, for all the created indexes, reads are high compared to writes/updates. For most indexes the two queries will show a similar read/write ratio. For the few indexes where the results conflict, look again at the index definition, make changes based on your knowledge of the tables and their columns, and re-analyse those indexes.

 

Step-6:

Drop the indexes for which the statistics DMVs show higher updates than reads.

 

Step-7:

While adding the indexes you need, it is also a good idea to look for indexes that are no longer used and remove them. The following query lists indexes that receive updates but no reads. Delete such indexes only after doing the required analysis: for instance, there might be a service/job/report scheduled to run once every month while your server was restarted just two weeks ago, in which case the query below would not reflect the indexes that service/job/report uses. A cleanup example follows the query.

SELECT TableName = OBJECT_NAME(s.[object_id])
     , SchemaName = SCHEMA_NAME(o.[schema_id])
     , IndexName = i.name
     , user_updates
     , i.is_primary_key
     , IsMsShipped = OBJECTPROPERTY(s.[object_id], 'IsMsShipped')
     , user_seeks
     , user_scans
     , user_lookups
FROM sys.dm_db_index_usage_stats s
     JOIN sys.objects o ON s.[object_id] = o.[object_id]
     JOIN sys.indexes i ON s.[object_id] = i.[object_id]
                       AND s.index_id = i.index_id
WHERE OBJECTPROPERTY(s.[object_id], 'IsMsShipped') = 0 -- only user-defined objects
  AND user_seeks = 0
  AND user_scans = 0
  AND user_lookups = 0
  AND i.name IS NOT NULL -- ignore heaps
ORDER BY user_updates DESC
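If an index keeps appearing in this result across representative load cycles, it can be dropped; a hypothetical example reusing the index name from Step-3:

DROP INDEX IX_Orders_CustomerId_OrderDate ON dbo.Orders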

 

 

Lastly:

Remember that you need to keep repeating this process as an ongoing activity, as your applications evolve and your DB structure changes.