SQL Server 2014 – Can Reducing the Fill Factor Cause More Page Splits?

We have a page-split problem on a table that is a particular annoyance: an audit log of activity in the database that is over 1 TB in size. The main indexes are keyed on a record type, stored as an NVARCHAR(100) even though a tinyint would cover the five record types, and on a record ID, which is an NVARCHAR(200) representation of the record's integer key.

They are also covering indexes that include the key, the old value, the new value, and so on, so the index rows are very wide.

This is an old system, and unfortunately the code that writes to this table is not centralized in one procedure but scattered everywhere. It cannot be changed until we go through the painful process of a long rewrite into microservices.

So I reduced the fill factor on two of the indexes from 100% to 85%.

And the page splits have gotten worse: I would say about 3x more page splits.

Is that a common result? Most recommendations suggest that reducing the fill factor should reduce page splitting. I can see why it might not help here, because the index keys are so wide.

Would it be advisable to reduce the fill factor further, or to reset it to the original value?
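For anyone reproducing this: a lower fill factor only takes effect when the index is rebuilt, and page-split activity can be approximated from the operational-stats DMV. A minimal sketch, with hypothetical table and index names:

```sql
-- Apply the new fill factor (takes effect only on rebuild/reorganize).
ALTER INDEX IX_AuditLog_RecordType ON dbo.AuditLog
    REBUILD WITH (FILLFACTOR = 85);

-- Leaf-page allocations are a rough proxy for page splits;
-- compare snapshots of this counter before and after a workload window.
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       i.name                     AS index_name,
       ios.leaf_allocation_count  -- cumulative since the index metadata was last cached
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
JOIN sys.indexes AS i
  ON i.object_id = ios.object_id
 AND i.index_id  = ios.index_id
WHERE OBJECT_NAME(ios.object_id) = 'AuditLog';
```

Comparing the counter over a fixed interval at fill factor 100 versus 85 would show whether the change actually made splitting worse.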

Importing small decimal values from a CSV file (SQL Server Import Wizard)

I am trying to import data from a CSV file into my database (using the Import Data wizard), but I always get a data-loss error ("The value could not be converted due to possible data loss"). The error comes from a decimal column in the source file containing the value '-1.99999999495049E-05'.

The target column has the type decimal(38,6), and when I fill the file with values like '12.58462', it works fine and imports without error. Here is a preview of the CSV file:

[screenshot: preview of the CSV file]

I tried forcing the wizard to ignore rows on error, but then only NULL values are retrieved:

[screenshot: import wizard error-handling settings]

When I import the column as a string type it works, and I can convert afterwards. But I'm dealing with huge files, and to avoid wasting time I'd like to know if there is a way to force a conversion like:

select cast('-1.99999999495049E-05' as real)
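If the wizard itself cannot be forced to do this, one workable pattern is to land the column in a staging table as text and convert in T-SQL, where scientific notation casts cleanly through FLOAT. A sketch, with hypothetical table and column names:

```sql
-- Stage the raw text exactly as it appears in the CSV.
CREATE TABLE dbo.StagingRaw (RawValue VARCHAR(50));

INSERT INTO dbo.StagingRaw (RawValue)
VALUES ('-1.99999999495049E-05');

-- TRY_CAST returns NULL instead of failing on bad rows;
-- FLOAT accepts scientific notation, then DECIMAL applies the target scale.
SELECT CAST(TRY_CAST(RawValue AS FLOAT) AS DECIMAL(38,6)) AS ConvertedValue
FROM dbo.StagingRaw;
-- -1.99999999495049E-05 rounds to -0.000020 at scale 6
```

Note that decimal(38,6) keeps only six decimal places, so a value this small loses most of its precision regardless of how it is imported.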

Many thanks

SQL Server 2012 – Performance Optimization

The short answer is that both are equally bad. Neither is SARGable. Wrapping columns in functions or using optional parameters can degrade performance, prevent efficient use of indexes, and so on.

The benefit of the second query is that you can apply a RECOMPILE hint to work around some performance issues:

SELECT Col1, Col2, ... 
FROM dbo.Table1 
WHERE (@Var1 IS NULL OR Col1 = @Var1)
OPTION(RECOMPILE);

However, if the actual query is more complex with a larger execution plan, or if it is executed frequently (think hundreds or thousands of times per minute), the hint can add significant plan-compilation overhead.

The typical solution is the use of dynamic SQL:

DECLARE @Var1 NVARCHAR(15) = NULL; -- this variable can be NULL or hold a specific value
DECLARE @SQL NVARCHAR(MAX) = N'';

SET @SQL = @SQL + N'SELECT Col1, Col2, ... FROM dbo.Table1 WHERE 1 = 1';
IF @Var1 IS NOT NULL
BEGIN
    SET @SQL += N' AND Col1 = @iVar1'; -- appends a parameter placeholder, not the value itself
END;

EXEC sys.sp_executesql @SQL, N'@iVar1 NVARCHAR(15)', @iVar1 = @Var1;

This is written in a way that prevents SQL injection: the value is passed to sys.sp_executesql as a parameter and is never concatenated into the string.

Which is faster, a Windows service or SQL Server jobs, and why?

I'm working on an application for which we created a stored procedure that updates and selects some data. We did not use a transaction in the SP.
We call this from a Windows service, and we are running 20 threads in this service.
I've found that these queries sit in a suspended state for some time; they take 4 to 5 minutes in total.
Can someone tell me whether I should use SQL Server Agent jobs instead, or whether Windows services are fine, and why?

Many thanks,

postgresql – How can I use SQL window functions to determine the lead of the last row in a partition?

Considering the following simple example:

select lead(c) over (partition by a order by a)
from (values 
      (1,'1','date1'),
      (1,'2','date1'),
      (2,null,'date2')
) t(a,b,c)

The first two rows are records with the same ID (column a) and "date1" (column c). The duplicated IDs come from an unnest operation on an array column that can have a varying number of elements; that unnest produces column b.

Now I want to get the lead date of the next ID, "date2". The query I wrote returns "date1" for the first record, but I really want "date2". I'm not interested in the static solution lead(c, 2) over (order by a); I want to know how to get the lead over a window of dynamic size.

Expected output:

date2
date2
NULL
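One way to express "first value of the next distinct a" is a RANGE offset frame, which skips all peers of the current a value no matter how many duplicate rows exist. A sketch, assuming PostgreSQL 11 or later (where RANGE frames with offsets are supported):

```sql
-- The frame starts at the next distinct value of a (current a + 1)
-- and extends to the end, so first_value picks the next group's c.
select a, b,
       first_value(c) over (order by a
                            range between 1 following
                                      and unbounded following) as next_date
from (values
      (1,'1','date1'),
      (1,'2','date1'),
      (2,null,'date2')
) t(a,b,c);
```

For the sample data this yields date2, date2, NULL, matching the expected output; for the last group the frame is empty, so first_value returns NULL. Note this relies on a being numeric and the "next" group being at offset 1; for non-contiguous IDs a dense_rank over a could be used as the ordering column instead.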

SQL Server – incremental backup between now and backup one week ago

You cannot go back in time and create an incremental backup after the fact. If the database is in the full recovery model, you can give them the transaction log backups, if any exist, that cover the span from the backup they restored up to the point in time they want to restore to.

However, if the database has been used since then, you must restore the backup again and leave the database in the restoring state so that the transaction logs can be applied. Once a backup is restored and the database is brought out of the restoring state to be used, no further backups (differential, incremental, or log) can be restored on top of it.
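For reference, a restore chain along those lines looks like the following sketch (database name, file paths, and the STOPAT time are hypothetical):

```sql
-- Restore the full backup but leave the database in the restoring state.
RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb_full.bak'
    WITH NORECOVERY;

-- Apply log backups in sequence, still without recovering.
RESTORE LOG MyDb
    FROM DISK = N'C:\Backups\MyDb_log1.trn'
    WITH NORECOVERY;

-- Final log: stop at the desired point in time and bring the database online.
RESTORE LOG MyDb
    FROM DISK = N'C:\Backups\MyDb_log2.trn'
    WITH STOPAT = N'2019-06-01T12:00:00', RECOVERY;
```

Once the final step runs WITH RECOVERY, no further differential or log backups can be applied.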

You may also want to check what the developer really wants, as I suspect the wrong terminology was used in the request. From the wording, it appears they expect a backup that restores only the new records from that period. Whichever backup you give them, restoring it will bring back the entire database (unless you restore only specific filegroups, but based on the question I suspect that is not really an option here).

SQL Server – Query / Monitor Index Fragmentation

I've been a DBA for 3 months in my first job out of college, and I'm having significant difficulty figuring out how to accurately query and report index fragmentation on 2 of our production servers (2 separate clusters, but that isn't important for this specific question; just pretend I have 1 server).

We already use Ola Hallengren's scripts for index maintenance, but I need to query all the databases on our server and retrieve the fragmentation percentage along with the table name, index name, and database name.

Here is my current query, which does not seem to be correct:

SELECT s.name + '.' + t.name AS table_name
     , i.name AS index_name
     , dbd.name AS database_name
     , ips.index_type_desc
     , ROUND(ips.avg_fragmentation_in_percent, 2) AS avg_fragmentation_in_percent
     , ips.page_count
FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.tables t ON t.object_id = ips.object_id
INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
INNER JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
INNER JOIN sys.databases AS dbd ON dbd.database_id = ips.database_id
WHERE i.type IN (1, 2)                        -- include ONLY clustered & nonclustered indexes
  AND s.name <> 'Audit'                       -- exclude audit schemas
  AND ips.page_count > 100                    -- exclude small page counts
  AND ips.alloc_unit_type_desc = 'IN_ROW_DATA'
  AND ips.index_level = 0;                    -- 0 = index leaf level; > 0 = nonleaf levels

Even if I remove the WHERE clauses, my data still seems to be wrong. One of my records shows a table in a database with a 1700-page index, but if I manually check that index, it has only 1 page. I have worked on this for hours and cannot seem to figure out what is going on. I have read the entire documentation for this DMV.
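One likely cause, worth verifying: passing NULL as the first argument makes sys.dm_db_index_physical_stats return rows for every database, but sys.tables, sys.schemas, and sys.indexes only describe the current database, so object_id values from other databases get matched to unrelated local objects. A sketch that scopes everything to one database at a time (the database name is hypothetical; this would be run per database, e.g. via sp_MSforeachdb or a cursor):

```sql
USE SomeDb;  -- hypothetical database name; repeat per database

SELECT DB_NAME() AS database_name,
       s.name + '.' + t.name AS table_name,
       i.name AS index_name,
       ROUND(ips.avg_fragmentation_in_percent, 2) AS avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.tables  t ON t.object_id = ips.object_id   -- now resolves in the same database
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE i.type IN (1, 2)
  AND ips.page_count > 100;
```

With DB_ID() as the first argument, the DMV and the catalog views agree on which database the object_id values belong to, which would explain the mismatched page counts.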

ssms 18 – SQL Server Management Studio – Why is the dark theme disabled by default?

Why is the dark theme disabled by default? Are there bugs with this theme or something along those lines? It seems strange to me that it is deliberately blocked.

I've found many links explaining how to turn on the 'Dark' theme for SSMS. However, I'm curious why this setting is disabled by default. Why is it controlled by a single line in the ssms.pkgundef file that has to be commented out?

Here is one of many links that show how to enable the disabled dark theme:
How to enable Dark Theme in SSMS
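For context, the entry in ssms.pkgundef that removes the dark theme looks roughly like the fragment below; commenting it out re-enables the theme. The GUID shown is the Visual Studio dark-theme identifier and should be verified against your own copy of the file:

```
// Remove Dark theme
// (comment out the following line to re-enable the dark theme in SSMS)
[$RootKey$\Themes\{1ded0138-47ce-435e-84ef-9ec1f439b749}]
```

The file lives in the SSMS installation directory, and editing it typically requires administrator rights.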

Basically, I'm curious whether there are consequences I'm not aware of if I turn this feature back on. I cannot find any links that explain why it is disabled at all.

Screenshot of the dark theme after enabling it in ssms.pkgundef:
