Saturday, September 29, 2012

Set build order for projects in a solution

From the properties of the solution, select Project Dependencies and, for each project in your solution, specify the projects it depends on. Once you specify the dependencies, Visual Studio sets the build order automatically for you.

Tuesday, September 25, 2012

The conversion of a varchar data type to a smalldatetime data type resulted in an out-of-range value.


You get this error because you are trying to query with a date not supported by SQL Server's smalldatetime data type.

Note that smalldatetime only allows dates in the range

January 1, 1900 through June 6, 2079

When you need dates beyond this range to be stored in the database, specify the data type of the column as datetime instead of smalldatetime.
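
A minimal sketch, using a hypothetical dbo.OrdersDemo table, where the out-of-range insert fails against smalldatetime but succeeds once the column is widened to datetime:

-- Hypothetical table with a smalldatetime column
CREATE TABLE dbo.OrdersDemo
(
    OrderId  int IDENTITY(1,1) PRIMARY KEY,
    ShipDate smalldatetime NULL
)

-- Fails: 2099 is past smalldatetime's upper bound of June 6, 2079
-- INSERT INTO dbo.OrdersDemo (ShipDate) VALUES ('20990101')

-- Fix: widen the column to datetime, after which the insert succeeds
ALTER TABLE dbo.OrdersDemo ALTER COLUMN ShipDate datetime NULL
INSERT INTO dbo.OrdersDemo (ShipDate) VALUES ('20990101')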


 

System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.

You get this exception when the datetime in your WHERE clause is outside the range specified in the error.
This is because SQL Server's datetime type (surfaced in .NET as System.Data.SqlTypes.SqlDateTime) only stores dates between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM, as the error specifies.
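
A frequent culprit is passing an unassigned .NET DateTime, which defaults to 1/1/0001 and falls below this lower bound. A minimal T-SQL illustration of the boundary (the literals here are just for demonstration):

SELECT CAST('17530101' AS datetime)    -- succeeds: the lower bound of datetime
-- SELECT CAST('17521231' AS datetime) -- fails: out-of-range value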

Sunday, September 23, 2012

SSIS package hangs

I worked through a seemingly strange issue in production some time back: a SQL job scheduled to run daily on the prod server hung every now and then. When it hung, the SSIS package never completed execution, even days later, so it had to be carefully monitored and restarted whenever it got stuck. The SSIS logs showed only validation messages and no execution at all.
After a lot of research and working with the DBA, it turned out that the package truncates and reloads some tables, and a report was also fetching data from one of those tables. The package hangs if the report executes at the same time the package starts: the report holds a lock on the table that the package tries to truncate, so the truncate never happens and the two sessions deadlock against each other.
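
If you suspect this kind of hang, one quick check (run against the server while the package appears stuck) is to list blocked sessions and their blockers:

-- Sessions currently blocked, and the session blocking each of them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0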

Saturday, September 22, 2012

Querying from an XML column in SQL Server


For instance, if you want to query the Name of every customer from a column that holds XML, you need to provide the XPath of the particular element to the xml column's query() method:

select CustomerName = <XmlColumn>.query('data(//Name)')
from <TableName>
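
A self-contained sketch, using a hypothetical table variable with a single xml column:

-- Hypothetical data: one XML document per customer row
DECLARE @Customers TABLE (CustomerData xml)

INSERT INTO @Customers (CustomerData)
VALUES ('<Customer><Name>Alice</Name></Customer>'),
       ('<Customer><Name>Bob</Name></Customer>')

-- data(//Name) atomizes every Name element in each document
SELECT CustomerName = CustomerData.query('data(//Name)')
FROM @Customers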

(400) Bad Request with WCF Data Services


The remote server returned an error: (400) Bad Request.

Getting this error when you call SaveChanges() on your entity context through your WCF Data Service?


The InnerException is null, the response object's StatusCode is System.Net.HttpStatusCode.BadRequest, and no further information is available in the exception object.

 

Some context on the exception:

I faced this error because of an insufficient request size setting in my service config file. After a lot of debugging and trial and error, I found that a huge XML blob contained in one of the entities was actually causing the issue. I further found that SaveChanges() succeeds when the XML is small, and was thus able to determine that the issue was the request size.

 

Resolution:

Increase the request size limits in the config file of the project where your data service resides, and this should resolve the issue:

<binding name="DataServiceBinding" ..... maxBufferSize="2147483647" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" .......>

Create missing indexes to improve your Database performance


How to create missing indexes to improve your database performance.

 

Step-1:

Query to identify the missing indexes. SQL Server provides dynamic management views (DMVs) that help you get performance statistics for your database.

Execute the following query:

SELECT user_seeks * avg_total_user_cost * ( avg_user_impact * 0.01 ) AS [index_advantage] ,
       migs.last_user_seek ,
       mid.[statement] AS [Database.Schema.Table] ,
       mid.equality_columns ,
       mid.inequality_columns ,
       mid.included_columns ,
       migs.unique_compiles ,
       migs.user_seeks ,
       migs.avg_total_user_cost ,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH ( NOLOCK )
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH ( NOLOCK )
    ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH ( NOLOCK )
    ON mig.index_handle = mid.index_handle
WHERE mid.database_id = DB_ID()
ORDER BY index_advantage DESC ;

 

Step-2:

Analyse which indexes you need to create.

Identify which stored procedures and/or applications query the tables these indexes would be created on; you need to carefully judge and prioritize which of the indexes returned by the previous query would actually help you. One thing to keep in mind is the read/write ratio. If you have multiple applications, one that just reads from a table and another that writes far more often than the first one reads, it might be a better decision not to create the index at all, or to create it dynamically just before the first application uses the table and drop it after the required queries complete.

[If you are working with a database where you are not sure what or how many applications use the tables the indexes refer to, you might want to try this trick: create the indexes in descending order of the priority the above query suggests, stopping at the index_advantage below which you think an index no longer helps performance. Then execute all the unit tests from all the applications, after which you can safely remove the indexes that do not show a good read/write ratio. To verify that you have retained the right indexes, capture a trace from prod and replay it on test, or feed it to the Database Engine Tuning Advisor for better analysis.]

Remember that too many indexes might decrease rather than improve the performance of your DB.

 

Step-3:

Create the indexes from the missing-index results in the format below, picking the ones you want and ignoring the ones you do not require.

CREATE NONCLUSTERED INDEX <IX_<TableName>_<ColumnNames>>
    ON <TableName> (<EqualityColumns>, <InequalityColumns>)
    INCLUDE (<IncludedColumns>)
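
For instance, a hypothetical index on a dbo.Orders table, assuming the DMV query suggested CustomerId as the equality column, OrderDate as the inequality column, and TotalAmount as an included column:

-- Hypothetical example of the template filled in
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalAmount)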

 

Step-4:

Observe the performance of the created indexes and drop any that turn out to be redundant or unhelpful.

After a period of time in which you believe all the regular application load has hit the database, get the statistics for the indexes you created.

Kindly note that to trust the results of the queries below, your DB server should have been running for a good while and should have received representative load, as these DMVs only return data gathered since the last restart of the server.

Execute the following queries with the indexes you created:

 

declare @dbid int
select @dbid = db_id()

select objectname = object_name(s.object_id), indexname = i.name, i.index_id
     , reads = range_scan_count + singleton_lookup_count
     , leaf_writes = leaf_insert_count + leaf_update_count + leaf_delete_count
     , leaf_page_splits = leaf_allocation_count
     , nonleaf_writes = nonleaf_insert_count + nonleaf_update_count + nonleaf_delete_count
     , nonleaf_page_splits = nonleaf_allocation_count
from sys.dm_db_index_operational_stats (@dbid, NULL, NULL, NULL) s
join sys.indexes i
  on i.object_id = s.object_id
 and i.index_id = s.index_id
where objectproperty(s.object_id, 'IsUserTable') = 1
  and i.name in (<Index names you created>)
order by reads desc, leaf_writes, nonleaf_writes

 

 

select objectname = object_name(s.object_id), indexname = i.name, i.index_id
     , reads = user_seeks + user_scans + user_lookups
     , writes = user_updates
from sys.dm_db_index_usage_stats s
join sys.indexes i
  on s.object_id = i.object_id
 and i.index_id = s.index_id
where objectproperty(s.object_id, 'IsUserTable') = 1
  and s.database_id = @dbid
  and i.name in (<Index names you created>)
order by reads desc
go

 

Step-5:

Compare the results of the two queries above to verify that, for all the created indexes, reads are high compared to writes/updates. For most of the indexes the two queries will show a similar read/write ratio. For the very few with conflicting results, take another look at those indexes, make changes based on your knowledge of the tables and their columns, and re-analyse them.

 

Step-6:

Drop the indexes for which the statistics DMVs show higher updates than reads.
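
For example, dropping the hypothetical IX_Orders_CustomerId_OrderDate index from the Step-3 sketch:

-- Hypothetical: drop an index whose writes outweigh its reads
DROP INDEX IX_Orders_CustomerId_OrderDate ON dbo.Orders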

 

Step-7:

While adding the indexes you require, it is also a good idea to look at the indexes that are no longer being used and remove them. The following query lists indexes that receive updates but no reads. Delete those indexes, but make sure you have done the required analysis before dropping them. For instance, there might be a service/job/report scheduled to run once every month while your server was restarted just two weeks ago, in which case the following query would not show the indexes that that service/job/report uses.

SELECT TableName = OBJECT_NAME(s.[object_id]),
       SchemaName = SCHEMA_NAME(o.[schema_id]),
       IndexName = i.name,
       user_updates,
       i.is_primary_key,
       IsMsShipped = OBJECTPROPERTY(s.[object_id], 'IsMsShipped'),
       user_seeks,
       user_scans,
       user_lookups
FROM sys.dm_db_index_usage_stats s
JOIN sys.objects o ON s.[object_id] = o.[object_id]
JOIN sys.indexes i ON s.[object_id] = i.[object_id]
    AND s.index_id = i.index_id
WHERE OBJECTPROPERTY(s.[object_id], 'IsMsShipped') = 0 -- only user-defined objects
    AND user_seeks = 0
    AND user_scans = 0
    AND user_lookups = 0
    AND i.name IS NOT NULL -- ignore heaps
ORDER BY user_updates DESC

 

 

Lastly:

Remember that you need to keep repeating this process as an ongoing activity as your applications evolve and your DB structure changes.