Microsoft SQL Server 2016 (SP2-GDR) (KB4293802) – 13.0.5081.1 (X64) Developer Edition (64-bit)

SQL Server was crashing. It happened a few times, seemingly at random. The server was a dev box, so I just restarted it and ignored the errors, which amounted to little more than an error dump. However, it happened again on a day when I had a bit of spare time, so I looked into it.

The main error message was:
Error: 25725, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
Looking up that error message I got: Error: 25725, Severity: 16, An error occurred while trying to flush all running Extended Event sessions. Some events may be lost.
Then I remembered I had an Extended Event Session that I’d left running to check Tempdb contention. I didn’t need to run it anymore so I turned it off.
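Stopping a leftover session is a one-liner. A minimal sketch, where TempdbContention is a hypothetical name standing in for whatever you called your session:

```sql
-- List the Extended Event sessions currently running
SELECT s.name
FROM sys.dm_xe_sessions AS s;

-- Stop the session (TempdbContention is a placeholder name)
ALTER EVENT SESSION TempdbContention ON SERVER STATE = STOP;

-- Remove the session definition entirely if it is no longer needed
DROP EVENT SESSION TempdbContention ON SERVER;
```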

Other error messages which came with this one were mainly related to the fact the service was stopping:

  • Error: 19032, Severity: 10, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
  • Error: 19032 = SQL Trace was stopped due to server shutdown. Trace ID = ‘%d’. This is an informational message only; no user action is required.
  • Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
  • Error 17300 = SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option ‘user connections’ to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.
  • Error: 17189, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
  • Error 17189 = SQL Server failed with error code 0x%x to spawn a thread to process a new login or connection. Check the SQL Server error log and the Windows event logs for information about possible related problems.%.*ls
  • Error: 28709, Severity: 16, State: 19. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
  • Error 28709 = Dispatcher was unable to create new thread.

Before those terse messages came a series of Memory Broker messages (Big Gateway, Medium Gateway and Small Gateway) as part of a general memory dump.

You can get a list of error messages by running this script (sys.messages is the newer catalog view, but sysmessages still works):
SELECT *
FROM master.dbo.sysmessages;

A useful list of error messages can be found here: System Error Messages
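To look up specific error numbers rather than scanning the whole list, you can filter sys.messages by message_id and language; a quick sketch using the numbers from the errors above:

```sql
SELECT message_id, severity, text
FROM sys.messages
WHERE message_id IN (25725, 19032, 17300, 17189, 28709)
  AND language_id = 1033; -- English
```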

The Problem

When checking the error logs on a SQL 2016 box I noticed this error message: Trace flag 1117 is discontinued. Use the options provided with ALTER DATABASE.
Trace flags T1117 and T1118 have been discontinued as of SQL Server 2016.

T1118 tells SQL to avoid mixed extents and use full extents.
T1117 makes sure all files in a filegroup grow at an even rate.

I now need to replicate this behaviour on the SQL 2016 box.

The Solution

T1118 doesn’t matter so much to me. The database where it has the most effect is tempdb and this now uses full extents by default.
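For user databases, the T1118 behaviour can be controlled per database in SQL Server 2016 with the MIXED_PAGE_ALLOCATION option. A sketch, reusing the AdventureWorks2016 database name from below:

```sql
-- OFF = allocate full (uniform) extents only, the old T1118 behaviour
ALTER DATABASE AdventureWorks2016 SET MIXED_PAGE_ALLOCATION OFF;

-- Check the current setting for each database
SELECT name, is_mixed_page_allocation_on
FROM sys.databases;
```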

T1117 has been replaced with an ALTER DATABASE command. AUTOGROW_ALL_FILES grows every file in the filegroup together (the T1117 behaviour); AUTOGROW_SINGLE_FILE reverts to the default of growing one file at a time:
ALTER DATABASE AdventureWorks2016 MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;
GO
ALTER DATABASE AdventureWorks2016 MODIFY FILEGROUP [PRIMARY] AUTOGROW_SINGLE_FILE;
GO

Unfortunately you can’t apply this to the model db. So, to apply this to all current databases I wrote a script to loop through and check the status of the filegroup and change it if needed.

Please test this script before you run it. You might want to replace the two EXEC commands with PRINT and check the output first.

Note: You can check the status of the filegroups in the current database by running this script:
SELECT * FROM sys.filegroups;
You will see there is a column called is_autogrow_all_files. This should be set to 1.

This script will not work if the database is in use

The Script

SELECT [name] as DBName, cast(0 as bit) as Checked
INTO #DBTest
FROM sys.databases as db
WHERE database_id > 4
  AND is_read_only = 0;

DECLARE @DBName as Varchar(250);
DECLARE @sql as Varchar(max);

WHILE EXISTS (select 1 FROM #DBTest where Checked = 0)
BEGIN
    SELECT top 1 @DBName = DBName
    FROM #DBTest
    WHERE Checked = 0;

    -- QUOTENAME protects against database names containing spaces or special characters
    SET @sql = 'USE ' + QUOTENAME(@DBName) + '
    DECLARE @sqlinner as Varchar(max);
    DECLARE @fg as Varchar(250);
    SELECT ''ALTER DATABASE ' + QUOTENAME(@DBName) + ' MODIFY FILEGROUP ['' + name + ''] AUTOGROW_ALL_FILES'' as sl,
           cast(0 as bit) as Checked, name as FG
    INTO #FileGroups
    FROM sys.filegroups
    WHERE is_autogrow_all_files = 0;

    WHILE EXISTS (select 1 FROM #FileGroups where Checked = 0)
    BEGIN
        SELECT top 1 @sqlinner = sl, @fg = FG
        FROM #FileGroups WHERE Checked = 0;

        EXEC(@sqlinner);

        UPDATE #FileGroups
        SET Checked = 1
        WHERE FG = @fg;
    END
    DROP TABLE #FileGroups;';

    EXEC(@sql);

    UPDATE #DBTest
    SET Checked = 1
    WHERE DBName = @DBName;
END

DROP TABLE #DBTest;

I use this script all the time when setting up an SSIS package. (Unfortunately, I can’t remember where I found the original code. I’ve adapted it slightly, so if anyone recognises the original then let me know and I’ll link to it.)

The Problem

When setting up a data flow in SSIS the data transfer speed can be very slow because the default settings in the package have not been optimised.

The Solution

SSIS Properties

The code below will show you each table in the database with its total row length. I take the MaxBufferSize column and round it down to the nearest thousand, so 87235 becomes 87000, and use that value as the DefaultBufferMaxRows property. I change DefaultBufferSize from the default 10485760 to 104857600 (the same number with an extra zero on the end). Finally, I add values to the BlobTempStoragePath and BufferTempStoragePath properties. Normally I use C:\temp, but make sure the directory exists, and you are probably better off choosing a location that is not on the C drive.

SELECT s.[name] + '.' + t.[name] as TableName,
       SUM(max_length) as [row_length],
       10485760 / SUM(max_length) as MaxBufferSize
FROM sys.tables as t
JOIN sys.columns as c
  ON t.object_id = c.object_id
JOIN sys.schemas as s
  ON t.schema_id = s.schema_id
GROUP BY s.[name], t.[name];
-- Note: varchar(max)/nvarchar(max) columns report max_length = -1, which will skew the sum
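The round-down step can be done in the query itself with integer division; a sketch, assuming the rounding described above (87235 becomes 87000):

```sql
SELECT s.[name] + '.' + t.[name] as TableName,
       -- integer division truncates, so this rounds down to the nearest thousand
       ((10485760 / SUM(max_length)) / 1000) * 1000 as SuggestedBufferMaxRows
FROM sys.tables as t
JOIN sys.columns as c ON t.object_id = c.object_id
JOIN sys.schemas as s ON t.schema_id = s.schema_id
GROUP BY s.[name], t.[name];
```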

These changes will allow SSIS to load more rows per buffer and so should speed up your loading. I tend to use an OLE DB connection for both Source and Destination.

It’s been a long time since I’ve posted on here, mainly because I no longer work primarily as a DBA but as a SQL Developer. I was also looking after thousands of instances, so I came across a lot of issues. However, this one had me stuck for a few days. We have a developer who wanted to run ‘R’ and a 2016 instance which should have let him, but no dice.

The Problem

When running the following ‘R’ test script:

EXEC sp_execute_external_script
@language = N'R',
@script = N'OutputDataSet <- InputDataSet',
@input_data_1 = N'SELECT 1 AS hello'
WITH RESULT SETS (([hello world] int not null));
GO

I got the following error:
Msg 39021, Level 16, State 1, Line 1 Unable to launch runtime for ‘R’ script.
Please check the configuration of the ‘R’ runtime. Msg 39019, Level 16, State 1, Line 1 An external script error occurred: Unable to launch the runtime. ErrorCode 0x80070490: 1168(Element not found.).

The Solution

I was running on SQL Server 2016 (13.0.4001.0) with no previous ‘R’ or CTP installations.
The solution was to unregister and re-register the R components using RegisterRExt.
In my case the path to it was C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES\library\RevoScaleR\rxLibs\x64\RegisterRExt

So, first I opened a command prompt with admin rights and ran:
"C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES\library\RevoScaleR\rxLibs\x64\RegisterRExt" /uninstall

After that I ran:
"C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES\library\RevoScaleR\rxLibs\x64\RegisterRExt" /install

Note: Each time you run the uninstall or install command it will stop and start your SQL instance.

This was for a default instance. I think, if you are using a named instance, you need to add /instance:InstanceName after the /install flag.
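For a named instance that would presumably look like the line below. Note that InstanceName is a placeholder, the R_SERVICES path for a named instance will contain MSSQL13.InstanceName rather than MSSQL13.MSSQLSERVER, and you should check RegisterRExt's own help output to confirm the flag:

```shell
"C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\R_SERVICES\library\RevoScaleR\rxLibs\x64\RegisterRExt" /install /instance:InstanceName
```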


Adding extra files to Tempdb

Posted: November 3, 2015 in DBA, Design, Files, Tempdb

This isn’t actually a problem. When I build a new instance I like a tempdb data file for each CPU core (up to eight). This script will create those files in the directory where the current tempdb data file is. Please note that it also resizes the tempdb data and log files, so you may want to change those values.

/* Script starts */
DECLARE @cpuCount as int;
DECLARE @Files as int;

SELECT @Files = COUNT(*)
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

SELECT @cpuCount = cpu_count / hyperthread_ratio
FROM sys.dm_os_sys_info;
--Print @cpuCount
--Print @Files

ALTER DATABASE Tempdb MODIFY FILE (Name = tempdev, Size = 300, Filegrowth = 150MB);
ALTER DATABASE Tempdb MODIFY FILE (Name = templog, Size = 50, Filegrowth = 50MB);

DECLARE @FileLocation as Varchar(750);

--You may need to check that the name of the tempdb data file is tempdb.mdf (select * from tempdb.sys.master_files)
SET @FileLocation = (SELECT SUBSTRING(physical_name, 1,
    CHARINDEX(N'tempdb.mdf', LOWER(physical_name)) - 1) as DataFileLocation
    FROM master.sys.master_files
    WHERE database_id = 2 AND FILE_ID = 1);
--Print @FileLocation

DECLARE @diff as int;
SET @diff = @cpuCount - @Files;
IF @diff > 7
    SET @diff = 8 - @Files; -- cap tempdb at eight data files

DECLARE @x as TinyInt;
SET @x = @Files;
DECLARE @file as Varchar(10);
DECLARE @fileName as Varchar(250);
WHILE @diff > 0
BEGIN
    SET @file = 'tempdev' + CAST(@x as varchar(2));
    SET @fileName = @FileLocation + '\' + @file + '.ndf';
    DECLARE @sql as Varchar(8000);
    SET @sql = 'ALTER DATABASE TempDb ';
    SET @sql = @sql + 'ADD FILE ';
    SET @sql = @sql + '( ';
    SET @sql = @sql + 'NAME = ' + @file + ', ';
    SET @sql = @sql + 'FILENAME = ''' + @fileName + ''', ';
    SET @sql = @sql + 'SIZE = 300MB, ';
    SET @sql = @sql + 'FILEGROWTH = 150MB';
    SET @sql = @sql + ');';
    EXEC(@sql);
    SET @diff = @diff - 1;
    SET @x = @x + 1;
END;
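Afterwards you can confirm what the loop created; a quick verification query:

```sql
-- size is stored in 8 KB pages, so divide by 128 to get MB
SELECT name, physical_name, size / 128 as SizeMB, type_desc
FROM tempdb.sys.database_files;
```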

The Problem

I had a list of different routes, with stops along each route (for the benefit of this example they are called seq). What I wanted was the start time and the end time of each journey, so I could work out the journey times.
Each journey (here called a route) may have different start and stop points (seq values).

The Solution

In the real case I found the min and max seq for each journey and added these as columns to my base table. This was all done in SQL 2014

CREATE TABLE #Journey(
[JourneyID] [int] NOT NULL,
[Route] [int] NOT NULL,
[seq] [int] NOT NULL,
[tme] [smalldatetime] NOT NULL,
[maxSeq] [int] NOT NULL,
[minSeq] [int] NOT NULL);

INSERT INTO #Journey
VALUES (1056975,20,2,'2015-06-03 09:34:00',99,2),
(1056975,20,5,'2015-06-03 09:38:00',99,2),
(1056975,20,6,'2015-06-03 09:39:00',99,2),
(1056975,20,99,'2015-06-03 09:44:00',99,2),
(1056975,20,99,'2015-06-03 09:45:00',99,2),
(1056975,20,99,'2015-06-03 09:49:00',99,2),
(1056975,20,99,'2015-06-03 09:53:00',99,2),
(1056975,20,99,'2015-06-03 09:56:00',99,2),
(1056975,20,99,'2015-06-03 09:57:00',99,2),
(2471362,1,1,'2015-06-06 07:48:00',99,1),
(2471362,1,1,'2015-06-06 07:49:00',99,1),
(2471362,1,2,'2015-06-06 07:56:00',99,1),
(2471362,1,5,'2015-06-06 07:57:00',99,1),
(2471362,1,5,'2015-06-06 07:59:00',99,1),
(2471362,1,8,'2015-06-06 08:05:00',99,1),
(2471362,1,11,'2015-06-06 08:08:00',99,1),
(2471362,1,14,'2015-06-06 08:15:00',99,1),
(2471362,1,15,'2015-06-06 08:21:00',99,1),
(2471362,1,99,'2015-06-06 08:23:00',99,1);

;WITH journeyStart(JourneyID, [Route], seq, tme, RN)
as (
SELECT JourneyID, [Route], seq, tme,
RN = row_number() OVER (PARTITION BY JourneyID ORDER BY tme) -- earliest time at the starting stop
FROM #Journey WHERE seq = [minSeq])

SELECT JS.[Route], JS.seq as TrainSequenceStart, JS.tme as StartTime,

RJ2.seq as TrainSequenceEnd, RJ2.tme as EndTime, JT.JourneyTime as
[JourneyTime (minutes)]
FROM journeyStart as JS

CROSS APPLY (select top (1) RJe.*
FROM #Journey as RJe
where RJe.JourneyID = JS.JourneyID
AND RJe.[Route] = JS.[Route]
AND RJe.Seq = RJe.maxSeq
order by RJe.JourneyID, RJe.seq) as RJ2

CROSS APPLY (SELECT DateDiff(mi,JS.tme,RJ2.tme) as journeyTime) as JT

WHERE JS.RN = 1;

DROP TABLE #Journey;

The important point is in the first CROSS APPLY: RJe.Seq = RJe.maxSeq, which just says, get me the row with the max seq value.
I added the second CROSS APPLY for the date calculation just to make the code easier to read.
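On SQL 2012 and later the same start and end times can also be pulled out with window functions, without needing the pre-computed minSeq/maxSeq columns at all. A sketch against the same #Journey table:

```sql
-- FIRST_VALUE/LAST_VALUE pick the earliest and latest time within each journey;
-- the ROWS clause is needed because LAST_VALUE's default frame stops at the current row
SELECT DISTINCT
       JourneyID,
       [Route],
       FIRST_VALUE(tme) OVER (PARTITION BY JourneyID ORDER BY tme) as StartTime,
       LAST_VALUE(tme)  OVER (PARTITION BY JourneyID ORDER BY tme
                              ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as EndTime
FROM #Journey;
```

Wrap it in a derived table and add DATEDIFF(mi, StartTime, EndTime) to get the journey time in minutes.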

Using the Apply operator

Posted: October 28, 2015 in Apply, SQL Bits, SQL Video, TSQL

I started this blog to remind myself how I solved certain problems or to dump code snippets. I haven’t been using it much but am going to get it going again. From now on I’m also going to include links to interesting SQL resources (mainly videos) I watch.

To start this off, I watched this one today: Boost your T-SQL with the Apply Operator. Certainly worth a watch if you are doing any T-SQL coding. It’s presented by Itzik Ben-Gan who also wrote the book T-SQL Querying, which I happen to be reading at the moment.

The Problem

I had a table which represented clusters of an incident, and I wanted one row for every instance of that incident. So,

DayName  InstanceCount
Day1     2
Day2     3
Day3     1

Would become:

DayName
Day1
Day1
Day2
Day2
Day2
Day3

The Solution

At first I used a cursor and entered each row into a new table.
I wasn’t happy with the cursor so looked into other answers and came up with this:

DECLARE @ii as int;
DECLARE @i as int = 0;
SELECT @ii = max(InstanceCount) + 10
FROM [dbo].[SourceTable];

DECLARE @NumRows as Table (num int not null);
WHILE @i < @ii
BEGIN
INSERT INTO @NumRows(num)
VALUES (@i);
SET @i = @i + 1;
END;

SELECT J.DayName
FROM [dbo].[SourceTable] as J
JOIN @NumRows as n
ON n.num < J.InstanceCount; -- num is 0-based, so < gives InstanceCount rows

First get the highest instance count and add a few for luck. Then join this numbers table to your base table. This got the results I wanted. (I still had to use a cursor because I needed to randomise an element in each row, but that’s another story).

I watched a video by Steve Stedman (http://stevestedman.com/) and he gave a demo of a numbers table using a CTE. So I changed my code to this:

DECLARE @ii as int;
SELECT @ii = max(InstanceCount) + 10
FROM [dbo].[SourceTable];

;WITH numbers(num) as
(
SELECT 1
UNION ALL
SELECT 1 + num
FROM numbers
WHERE num < @ii
)
SELECT J.DayName
FROM [dbo].[SourceTable] as J
JOIN numbers as n
ON n.num <= J.InstanceCount -- <= because this numbers table starts at 1, not 0
OPTION (MAXRECURSION 0);

Annoyingly, MAXRECURSION won’t accept a variable as a value, else I would have used @ii.
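A cross-join tally avoids both the loop and the recursion limit entirely; a sketch using sys.all_objects as a row source (TOP does accept a variable, unlike MAXRECURSION):

```sql
DECLARE @ii as int;
SELECT @ii = max(InstanceCount) + 10
FROM [dbo].[SourceTable];

-- Generate a sequential numbers table without recursion:
-- cross-joining sys.all_objects to itself yields millions of rows to number
;WITH numbers as
(
    SELECT TOP (@ii) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) as num
    FROM sys.all_objects as a
    CROSS JOIN sys.all_objects as b
)
SELECT J.DayName
FROM [dbo].[SourceTable] as J
JOIN numbers as n
  ON n.num <= J.InstanceCount;
```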

The Problem

The server was on a VM and the system drive was backed up. We had separate drives for SQL Data, SQL Logs and SQL Backups. Only the backups went to tape.
As part of the decommissioning of a server we lost all the drives on our production box.
They were replaced and the system disk restored. The SQL Backups were also restored. The data and log drives were empty.

The Solution

The restore of the system drive gave us the SQL binaries but we lacked the system databases. In our support database I had a record of where the data and log files were stored so I could recreate the directory structures. I also used the SQL Config manager to find out where the error logs went and created those directories too.
SQL wouldn’t start, however, as there was no master db. To get it going I found another instance of SQL which was at the same product level. I copied the master mdf and ldf into the appropriate drives (I had to stop that SQL instance to get the files, as they can’t be copied while SQL is running). I was now able to get SQL to start for a second. It was now failing due to a lack of tempdb.
At first I was a bit confused as Tempdb is created when SQL server starts. Then I remembered that it was created from model. I went back to that other SQL install and took a copy of model mdf and ldf.
I was now able to Start SQL up and attempt to do a restore of the local master db.
At this point I encountered another issue. To restore master you need SQL to be running in single user mode. That’s fine but there was an app which was connecting to this box every half-second and it kept stealing that single session.
To stop the app I used IPsec (http://searchwindowsserver.techtarget.com/feature/Use-IPSec-to-manage-connections) to block the IP address the app was using, which I got from the SQL error logs. Thinking about it now, I might have been able to turn off TCP/IP and use shared memory instead.
With the app blocked I was able to restore master and then msdb. I didn’t restore model, as the copy I had borrowed was from an identically set-up SQL instance. You may want to restore model to be on the safe side.
At this point I could start SQL up normally and from SSMS I did a restore of all the user databases.

Lessons

We are now creating an archive of Master and model mdf and ldf files. We can then use these without stopping another SQL instance.
You could also copy master and model files into the backup directory whenever you do a new build, apply a service pack or create a hot fix (ie whenever the version number changes). You could then pull these from the backup directory.
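That archiving step can be scripted. A sketch with hypothetical paths (adjust to your own data and backup drives), to be run while the instance is stopped, since the files are locked while SQL Server is running:

```shell
rem Copy the master and model data/log files to an archive folder
rem (D:\SQLData and E:\SQLBackups\SystemDBArchive are placeholder paths)
robocopy "D:\SQLData" "E:\SQLBackups\SystemDBArchive" master.mdf mastlog.ldf model.mdf modellog.ldf
```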

The Problem

When I tried to deploy our SQL Maintenance plans to a server nothing happened.
I then checked the server and got the error: The affinity mask specified does not match the CPU mask on this system.

I ran sp_configure and saw that advanced options were not turned on. I wanted to turn them on so I could set the affinity mask properly. When I tried to do this I got the error: The affinity mask specified does not match the CPU mask on this system. I ran sp_configure again and saw that the configured value was set to 1 but the run value was set to 0. This happened even if I used RECONFIGURE WITH OVERRIDE.
I restarted the instance to see if that would help. It didn’t.
I tried to change the affinity mask through Properties on the Instance in SSMS. It was all greyed out so I was unable to change anything.

Solution

I won’t detail all the things I tried which didn’t work, but there were many.
I changed the server start-up parameters to include -f. This starts the server in single-user mode with a minimal configuration.
I also stopped the SQL Agent service as I didn’t want it taking my single connection.
There was an application which connects as soon as SQL starts. It was local, so I couldn’t disable TCP/IP or use IPsec to stop it getting in. The solution was simpler: I disabled the account, as it was using a SQL login.
I then restarted SQL.
In SSMS I created a DAC connection to the box by going to File > New Query > then connected as
ADMIN:Servername
I ran sp_configure and could see that affinity mask had a configured value of 2 but a run value of 0, due to the minimal-configuration start-up. I ran sp_configure and set it to 0. I did the same with the affinity I/O mask.
I then removed the -f start-up parameter and restarted SQL. I re-enabled the SQL login I’d disabled and started up the SQL Agent.
All was now fine.
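As an aside, the DAC connection used above can also be opened from the command line with sqlcmd's -A switch, which is handy when something keeps grabbing the single session before SSMS can:

```shell
sqlcmd -S Servername -A
```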

Why did it happen?

This was a VM with only one CPU. I assume the affinity settings were put in place on a physical box. The machine was then virtualised with one CPU and the problems started.