Sunday, August 18, 2013

Deadlocks
One thing that you will almost certainly face at some time as a DBA is dealing with deadlocks. A deadlock occurs when two processes each hold locks that the other process needs; because the updates were made in a different order, neither transaction can proceed, and SQL Server selects one of the processes as the deadlock victim and rolls back its statements.
For example, two sessions update the same data: session 1 starts a transaction and updates table A. Session 2 then starts a transaction, updates table B, and tries to update the same records in table A, where it blocks waiting on session 1.
Session 1 then tries to update the same records in table B. At this point neither transaction can ever commit, because each is waiting on a lock held by the other, and SQL Server selects one of the processes as the deadlock victim.
To see this in action, you can run the following commands in the Northwind database.

Step 1
--open a query window (1) and run these commands
begin tran
update products set supplierid = 2

Step 2
-- open another query window (2) and run these commands
begin tran
update employees set firstname = 'Bob'
update products set supplierid = 1

Step 3
-- go back to query window (1) and run this command
update employees set firstname = 'Greg'
At this point SQL Server will select one of the processes as a deadlock victim and roll back its statements.

Step 4
--issue this command in query window (1) to undo all of the changes
rollback

Step 5
--go back to query window (2) and run this command to undo its changes
rollback

Steps to capture deadlock information in the error log file
1. Enable the trace flags (3605 sends trace output to the error log; 1204 reports deadlock information)
DBCC TRACEON (3605,1204,-1)
2. Create an event alert for error number 1205 so that a response is sent to the required operator.
3. Capturing deadlocks with Profiler
a. Start -> Run -> Profiler
b. Go to File menu -> New Trace
c. Select Server Name
d. Click on Options
e. Connect to a database = Browse Server
f. Yes
g. Select Northwind (required database)
h. OK
i. Connect
j. Enter Trace Name: Northwind_DeadLocks_Trace
k. Use the Template: Tuning
l. Select checkbox – Save to File -> Save
m. Select "Events Selection" tab
n. Select checkbox – Show all events
o. Under Locks node select Deadlock graph and Deadlock chain
p. Run
q. Go to SSMS -> Run the above queries


r. Stop trace in Profiler
s. Under Event Class click on Deadlock graph
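Step 2 above can also be scripted; below is a sketch using msdb's alert procedures (the alert name and the operator name 'DBA Team' are placeholders for names of your choosing; the operator must already exist):

```sql
USE msdb;
GO

-- Error 1205 is not written to the error log by default; alerts only fire
-- on logged messages, so mark it WITH_LOG first
EXEC sp_altermessage 1205, 'WITH_LOG', 'true';
GO

-- Create an alert that fires whenever error 1205 (deadlock victim) is raised
EXEC dbo.sp_add_alert
    @name = N'Deadlock Alert',
    @message_id = 1205,
    @include_event_description_in = 1;
GO

-- Notify an existing operator by e-mail ('DBA Team' is a placeholder)
EXEC dbo.sp_add_notification
    @alert_name = N'Deadlock Alert',
    @operator_name = N'DBA Team',
    @notification_method = 1;
GO
```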
Resources That Can Deadlock


Each user session might have one or more tasks running on its behalf where each task might acquire or wait to acquire a variety of resources. The following types of resources can cause blocking that could result in a deadlock.
Locks. Waiting to acquire locks on resources, such as objects, pages, rows, metadata, and applications can cause deadlock. For example, transaction T1 has a shared (S) lock on row r1 and is waiting to get an exclusive (X) lock on r2. Transaction T2 has a shared (S) lock on r2 and is waiting to get an exclusive (X) lock on row r1. This results in a lock cycle in which T1 and T2 wait for each other to release the locked resources.
Worker threads. A queued task waiting for an available worker thread can cause deadlock. If the queued task owns resources that are blocking all worker threads, a deadlock will result. For example, session S1 starts a transaction and acquires a shared (S) lock on row r1 and then goes to sleep. Active sessions running on all available worker threads are trying to acquire exclusive (X) locks on row r1. Because session S1 cannot acquire a worker thread, it cannot commit the transaction and release the lock on row r1. This results in a deadlock.
Memory. When concurrent requests are waiting for memory grants that cannot be satisfied with the available memory, a deadlock can occur. For example, two concurrent queries, Q1 and Q2, execute as user-defined functions that acquire 10MB and 20MB of memory respectively. If each query needs 30MB and the total available memory is 20MB, then Q1 and Q2 must wait for each other to release memory, and this results in a deadlock.
Parallel query execution-related resources. Coordinator, producer, or consumer threads associated with an exchange port may block each other, causing a deadlock, usually when at least one other process that is not part of the parallel query is involved. Also, when a parallel query starts execution, SQL Server determines the degree of parallelism, or the number of worker threads, based upon the current workload. If the system workload unexpectedly changes, for example, where new queries start running on the server or the system runs out of worker threads, then a deadlock could occur.
Multiple Active Result Sets (MARS) resources. These resources are used to control interleaving of multiple active requests under MARS.
User resource. When a thread is waiting for a resource that is potentially controlled by a user application, the resource is considered to be an external or user resource and is treated like a lock.
Session mutex. The tasks running in one session are interleaved, meaning that only one task can run under the session at a given time. Before the task can run, it must have exclusive access to the session mutex.
Transaction mutex. All tasks running in one transaction are interleaved, meaning that only one task can run under the transaction at a given time. Before the task can run, it must have exclusive access to the transaction mutex.
Deadlock Detection
All of the resources listed in the section above participate in the Database Engine deadlock detection scheme. Deadlock detection is performed by a lock monitor thread that periodically initiates a search through all of the tasks in an instance of the Database Engine. The following points describe the search process:
The default interval is 5 seconds.
If the lock monitor thread finds deadlocks, the deadlock detection interval will drop from 5 seconds to as low as 100 milliseconds depending on the frequency of deadlocks.
If the lock monitor thread stops finding deadlocks, the Database Engine increases the intervals between searches to 5 seconds.
If a deadlock has just been detected, it is assumed that the next threads that must wait for a lock are entering the deadlock cycle. The first couple of lock waits after a deadlock has been detected will immediately trigger a deadlock search rather than wait for the next deadlock detection interval. For example, if the current interval is 5 seconds, and a deadlock was just detected, the next lock wait will kick off the deadlock detector immediately. If this lock wait is part of a deadlock, it will be detected right away rather than during the next deadlock search.




To help minimize deadlocks:
Access objects in the same order.
Avoid user interaction in transactions.
Keep transactions short and in one batch.
Use a lower isolation level.
Use a row versioning-based isolation level.
Set READ_COMMITTED_SNAPSHOT database option ON to enable read-committed transactions to use row versioning.
Use snapshot isolation.
Use bound connections.
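The row versioning options above can be enabled with ALTER DATABASE; a sketch (the database name Northwind is illustrative, and setting READ_COMMITTED_SNAPSHOT requires that no other connections are active in the database):

```sql
-- Let read-committed transactions use row versioning instead of shared locks
ALTER DATABASE [Northwind] SET READ_COMMITTED_SNAPSHOT ON;

-- Optionally, also allow transactions to request SNAPSHOT isolation explicitly
ALTER DATABASE [Northwind] SET ALLOW_SNAPSHOT_ISOLATION ON;
```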
In stored procedures or application code, handle deadlocks using TRY/CATCH blocks: in the CATCH block, when the error number is 1205, write the logic to run the query again.
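A minimal retry sketch (the UPDATE is illustrative; THROW requires SQL Server 2012 or later, so use RAISERROR on older versions):

```sql
DECLARE @retry INT = 3;  -- maximum number of attempts
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE products SET supplierid = 2;  -- the work that may deadlock
        COMMIT TRAN;
        SET @retry = 0;  -- success: exit the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRAN;
        IF ERROR_NUMBER() = 1205 AND @retry > 1
            SET @retry = @retry - 1;  -- deadlock victim: try again
        ELSE
            THROW;  -- other error, or retries exhausted: re-raise
    END CATCH
END
```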

Fix "Cannot open user default database. Login failed. Login failed for user" SQL Server Error

Introduction

In this article we will take a look at the steps you need to follow when you receive the "Cannot open user default database. Login failed. Login failed for user 'UserName'. (Microsoft SQL Server, Error: 4064)" SQL Server error message.

Error Message:

Cannot open user default database. Login failed.
Login failed for user ‘UserName’. (Microsoft SQL Server, Error: 4064).
You may end up receiving this error when a Windows or SQL Server Login is not able to connect to the default database which is assigned to the user.
If you double click the user (Expand Security -> Logins) you would end up seeing no database being assigned to Default Database as shown in the snippet below.
SQL Server Login Properties Default Database Name is Missing

How to Fix “Cannot open user default database. Login failed. Login failed for user Error Message”

In the Connect to Server dialog in SSMS, click Options, and in the Connection Properties tab specify TempDB as the Connect to database value; then click the Connect button to connect to the SQL Server Instance.
In SSMS Connection Properties Specify Default Database as TempDB and Click Connect
Once you are connected to the SQL Server Instance execute the below TSQL to assign the login a new default database.
USE [master];
GO

ALTER LOGIN [MonitorDB] WITH DEFAULT_DATABASE = [TempDB];
GO


Read more: http://www.mytechmantra.com/LearnSQLServer/Fix-cannot-open-user-default-database-Login-failed-Login-failed-for-user-SQL-Server-Error/#ixzz3h4bqZ900
Follow us: @MyTechMantra on Twitter | MyTechMantra on Facebook

Error Log File

Introduction

SQL Server Error Log is the best place for a Database Administrator to look for informational messages, warnings, critical events, database recovery information, auditing information, user generated messages, etc. SQL Server creates a new error log file every time the SQL Server Database Engine is restarted. This article explains how to recycle the SQL Server Error Log file without restarting the SQL Server Service.
A database administrator can recycle the SQL Server Error Log file without restarting the SQL Server Service by running the DBCC ERRORLOG command or the sp_cycle_errorlog system stored procedure.
Note: Starting with SQL Server 2008 R2 you can also limit the size of the SQL Server Error Log file. For more information see Limit SQL Server Error Log File Size in SQL Server. To increase the number of error log files, see How to Increase Number of SQL Server Error Log Files.

Recycle SQL Server ErrorLog File using DBCC ERRORLOG Command

Execute the below TSQL code to recycle the SQL Server Error Log file. This closes the current error log file and starts a new one without restarting the SQL Server Service; the old file is renamed with an incremented extension number.
USE [master];
GO

DBCC ERRORLOG
GO

Recycle SQL Server Error Log File using SP_CYCLE_ERRORLOG System Stored Procedure

USE [master];
GO

EXEC sp_cycle_errorlog;
GO
Best Practice: It is highly recommended to create an SQL Server Agent Job to recycle SQL Server Error Log once a day or at least once a week.

Conclusion

This article explains how to Recycle SQL Server Error Log file without restarting SQL Server Service.


Read more: http://www.mytechmantra.com/LearnSQLServer/SQL-Server-Recycle-Error-Log-Without-Restarting-Service-DBCC-ErrorLog-or-SP_CYCLE_ERRORLOG/#ixzz3h4gmJhC3



Limit SQL Server Error Log File Size in SQL Server 2012


Sept 12, 2014


Introduction

SQL Server Error Log contains valuable information which can be used by database administrators to troubleshoot issues with SQL Server. A typical Error Log file contains informational messages, warnings, critical events, database recovery information, auditing information, user generated messages, etc.
The SQL Server Error Log file is initialized every time the SQL Server Instance is started. Hence, if SQL Server is not restarted for a while, the error log file can grow very large, and it can be extremely difficult to read the information stored within it during critical times. See the following article to learn How to Recycle SQL Server Error Log file without restarting SQL Server Service.
This article explains how to limit the size of the SQL Server Error Log file in SQL Server 2012 and later versions. As a best practice we highly recommend you back up the registry settings before modifying them. See the following article to learn How to Increase Number of SQL Server Error Log files in SQL Server.
Note: One can also limit the size of the SQL Server Error Log file on SQL Server 2008 R2, in addition to SQL Server 2012 and later versions.

Limiting the size of SQL Server Error Log File in SQL Server 2012 and later versions

Execute the below TSQL code in SQL Server 2012 and later versions to set the maximum file size of individual error log files to 10 MB. SQL Server will create a new file once the size of the current log file reaches 10 MB. This prevents any single error log file from growing enormously large.
USE [master];
GO

EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE'
    ,N'Software\Microsoft\MSSQLServer\MSSQLServer'
    ,N'ErrorLogSizeInKb'
    ,REG_DWORD
    ,10240;
GO
The below screenshot shows the registry key prior to limiting the size of the SQL Server Error Log file, in Windows Server 2008 R2 for SQL Server 2012.
Registry Setting Before Limiting Size of SQL Server Error Log File Size in SQL Server 2012
In the below screenshot you can see a new entry added with the name ErrorLogSizeInKb and a value of 10240 KB.
ErrorLogSizeInKB Registry Setting After Limiting Size of SQL Server Error Log File Size in SQL Server 2012
Since the SQL Server Error Log holds critical information, it is highly recommended to limit its size so that it cannot grow too large.

Conclusion

This article explains how to limit the size of SQL Server Error Log files in SQL Server 2012 and later versions.


Read more: http://www.mytechmantra.com/LearnSQLServer/Limit-SQL-Server-Error-Log-File-Size-in-SQL-Server-2012/#ixzz3h4gx8lKb



Introduction

SQL Server Error Log is the best place for a Database Administrator to look for informational messages, warnings, critical events, database recovery information, auditing information, user generated messages, etc. By default, there are six archived error log files along with the current file named ERRORLOG. This article explains how to increase the number of SQL Server Error Logs from the default value of six.
Whenever SQL Server is restarted, a new ERRORLOG file is created and the previous file is renamed ERRORLOG.1, the second most recent error log is renamed ERRORLOG.2, and the oldest error log will have a name similar to ERRORLOG.n. See the following article to learn How to Recycle SQL Server Error Log file without restarting SQL Server Service.
If you are looking for an option to limit the size of the ErrorLog file, then see the following article for more information: How to Limit SQL Server Error Log File Size in SQL Server 2008 R2 and Later Versions.
Best Practice: It is highly recommended to increase the number of SQL Server Error Log files from the default value of six to a minimum of 30.

How to Increase Number of SQL Server Error Log Files Using SSMS

Follow the below mentioned steps to increase the number of SQL Server Error Log files in SQL Server 2005 and later versions.
1. Open SQL Server Management Studio and then connect to SQL Server Instance
2. In Object Explorer, Expand Management Node and then right click SQL Server Logs and click Configure as shown in the snippet below.
Configure SQL Server Error Logs.png 
3. In the Configure SQL Server Error Logs window enter a value between 6 and 99 for the number of error logs and click OK to save the changes.
How to Increase Number of SQL Server Error Log Files 

How to Increase Number of SQL Server Error Log Files Using TSQL Query

Execute the below mentioned TSQL query in SQL Server to increase the number of SQL Server Error Log files.
USE [master]
GO

EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE'
    ,N'Software\Microsoft\MSSQLServer\MSSQLServer'
    ,N'NumErrorLogs'
    ,REG_DWORD
    ,99
GO
In the below screenshot you can see a new entry added in the registry with the name NumErrorLogs and a value of 99.
NumErrorLogs Registry Entry for Number of SQL Server Error Logs 

Conclusion

This article explains how to increase the number of SQL Server Error Log files in SQL Server 2005 and later versions.

Steps to Connect to SQL Server When all System Administrators are Locked Out

Introduction

SQL Server Security is one of the key responsibilities of a Database Administrator. However, there can be scenarios when a DBA is asked to manage a SQL Server which doesn't have any valid System Administrator (SA) accounts. This usually happens when the DBA who was managing the SQL Server Instance has left the job, the SA account is disabled, no one remembers the SA password, or someone has accidentally removed the SA privileges. This article explains the steps to follow to add a new login with SA privileges. The steps mentioned in this article are applicable to all editions of SQL Server 2005 and later releases.
Let us go through the steps which will help you regain SA privileges on a SQL Server where all System Administrators are locked out.

Starting SQL Server in Single User Mode

1. Click Start -> Programs -> Microsoft SQL Server -> Configuration Tools -> SQL Server Configuration Manager
2. In SQL Server Configuration Manager, Stop the SQL Server Service (Database Engine).
3. Once SQL Server Service is stopped, right click the SQL Server Service on the right side panel and click on Properties.
4. In SQL Server 2008 R2 and earlier versions, within the SQL Server Properties screen, click on the Advanced tab. Click and expand Startup Parameters and enter ";-m" as shown in the snippet below. To save the configuration changes click OK.
Start SQL Server in Single User Mode 
5. In SQL Server 2012 and higher versions the Startup Parameters screen has improved, and it's now much easier to add or remove SQL Server startup options. In the Startup Parameters tab add the "-m" parameter and click OK to save the changes. For more information on this see, Improved Startup Parameters in SQL Server 2012.
Start SQL Server in Single User Mode by adding -m in Startup Parameter Tab in SQL Server 2012 

For more information on adding trace flags for example to identify deadlocks issues see, Identify Deadlocks in SQL Server Using Trace Flag 1222 and 1204.
6. Once you have added “-m” as a startup parameter, SQL Server Service must be restarted to start SQL Server Database Engine in Single-User Mode. For more information on different startup parameters supported in SQL Server, see Database Engine Service Startup Options.
7. Once the SQL Server Service is restarted successfully, SQL Server will start in Single User Mode. This means at this time only one user connection is allowed to the Database Engine. Hence, you must avoid starting SQL Server Agent Service or using SQL Server Object Explorer in SSMS etc.
In case a user connection already exists when the SQL Server Database Engine starts in single user mode, you will see the below mentioned error message:
Login failed for user ''. Reason: Server is in single user mode. Only one administrator can connect at this time. (Microsoft SQL Server, Error: 18461)
For more information on connecting to an SQL Server Instance when SQL Server stops responding to normal connection, see How to Use Dedicated Administrator Connection in SQL Server.

Using SQLCMD connect to SQL Server and create a new user with System Administrator Privileges

Using SQLCMD command line utility, a DBA can create a new login to the SQL Server Instance and grant System Administrative (SA) Privileges.
1. Open Command Prompt and type SQLCMD to open the utility. In SQLCMD you need to specify the below command to establish connection with SQL Server Instance which is running in single user mode.
/* If you have a default instance of SQL Server installed then use the below syntax */
sqlcmd -S <server name> -E

/* If you have a named instance of SQL Server installed then use the below syntax */
sqlcmd -S <server name\instance name> -E

/* To find help with the syntax use the below syntax*/
sqlcmd -?
2. Next step will be to create a new login and grant SA privileges. For this enter the below code and then press ENTER to create the login and grant SA privileges as shown in the snippet below. Type EXIT to come out of SQLCMD window and close the command prompt.
CREATE LOGIN DBA WITH PASSWORD ='Pa44w0rd'
GO

EXEC sp_addsrvrolemember DBA, sysadmin
GO

Using SQLCMD Create New Login and Grant SA Privileges 
3. Final step will be to remove “-m” startup parameter and restart the SQL Server Service. Now open SQL Server Management Studio and connect to SQL Server Instance using the newly created login which has SA privileges.


Read more: http://www.mytechmantra.com/LearnSQLServer/Steps-to-Connect-to-SQL-Server-When-all-System-Administrators-are-Locked-Out/#ixzz3h4bTW5gY

How to Enable Database Users in SQL Server

Introduction

During a recent Production Database refresh we encountered a strange problem where all the Windows Groups/Users along with SQL Server Users within the Database were disabled.

When you connect to SQL Server Using System Administrator (SA) permissions and expand User Database -> Security -> Users you would find all of the Windows Groups/Users and SQL Server Users being disabled. A disabled user will have a small red down arrow icon against the user. However, you cannot enable or disable the database users using SQL Server Management Studio. 
You will receive the below mentioned error message when trying to connect to the database.
Error Message
The database AdventureWorks2012 is not accessible.
This issue generally happens when CONNECT permission to the user is disabled for the database.
Follow the steps mentioned below to identify and fix this issue.
Step 1: Query to list all users within the database
/*
    Query to list all users within the database

    S = SQL_USER
    U = WINDOWS_USER
    G = WINDOWS_GROUP
*/

Use AdventureWorks
GO

SELECT
    principal_id AS [Principal ID]
    ,name AS [Database User Name]
    ,type AS [User Type]
    ,type_desc AS [User Type Description]
FROM sys.database_principals
    WHERE TYPE IN ('G','U','S')
        ORDER BY type ASC
GO
Step 2: Find all users within the database which are disabled.
You can find all the users within the database which are disabled either by using SSMS or using TSQL.
The below query will find all the Windows Groups/users and SQL Users which are in disabled state within the user database.
/*
Find all users within the database which are disabled.

    hasdbaccess = 0 Means User has no access
    hasdbaccess = 1 Means User has access
*/

USE AdventureWorks
GO

SELECT
    SU.NAME
    ,DP.principal_id
    ,dp.type
    ,dp.type_desc
    ,su.hasdbaccess
FROM sys.database_principals DP
INNER JOIN sys.sysusers SU ON dp.principal_id = SU.uid
    WHERE DP.TYPE IN ('G','U','S')
AND SU.hasdbaccess = 0
    ORDER BY DP.TYPE ASC
GO
Find all users within the database which are disabled 
A disabled user will have a red arrow point downwards as seen in the below snippet.
Disabled user will have a red arrow point downwards in SQL Server 

Step 3: Generate Script to enable all Users who are disabled within the user database
/*
Generate Script to Grant Connect to Database Users

S = SQL_USER
U = WINDOWS_USER
G = WINDOWS_GROUP
*/

Use AdventureWorks
GO

SELECT 'GRANT CONNECT TO [' + SU.name + '];' FROM sys.database_principals DP
INNER JOIN sys.sysusers SU ON dp.principal_id = SU.uid
WHERE DP.TYPE IN ('G','U','S')
AND SU.hasdbaccess = 0
GO
Sample Output
GRANT CONNECT TO [Domain\User1];
GRANT CONNECT TO [Domain\AdminGroup];
Open a New Query window and run the above script under the context of the database to GRANT CONNECT to user and to resolve the issue.


Read more: http://www.mytechmantra.com/LearnSQLServer/How-to-Enable-Database-Users-in-SQL-Server/#ixzz3h4bCpsE4

How to Configure TempDB on Local Disk in SQL Server 2012 Failover Cluster to Improve Performance

Introduction

Starting with SQL Server 2012, a Failover Cluster installation supports using a local disk for the TempDB data and log files. This article explains the steps a DBA should follow to configure a SQL Server 2012 Failover Cluster to use a local disk for the TempDB data and log files.

Advantages of TempDB Located on Local Disk in SQL Server Failover Cluster

  • If you are experiencing heavy TempDB usage, then storing the TempDB data and log files on a local disk can help you achieve significant performance benefits.
  • Having the TempDB data and log files located on a local disk will help you reduce the I/O requests to shared storage, and thereby helps improve the performance of the shared storage considerably.
  • The cost of Solid State Disks (SSDs) is falling, and many organizations who wish to obtain better performance are now looking to use SSDs for the TempDB data and log files in Failover Cluster configurations.

How to Configure SQL Server 2012 Failover Cluster TempDB Local Directory During Installation

During SQL Server 2012 Failover Cluster installation the DBA will have to specify the local disk for the TempDB data and log files.
Configure TempDB on Local Disk in SQL Server 2012 Failover Cluster to Improve Performance

Warning Message

You may end up seeing a warning message within the Database Engine Configuration screen as shown in the snippet below.
"You have specified a local directory as the TempDB data or log directory for a SQL Server Cluster. To avoid possible failures during a failover, you must make sure that the same directory exists on each cluster node and grant read/write permission to SQL Server service."
Click OK and continue with rest of the installation on Primary Node of the Failover Cluster.

How to Configure SQL Server 2012 Failover Cluster TempDB Local Directory During Installation on Secondary Node

1. On the Secondary Node of Failover Cluster launch SQL Server 2012 setup and click on Add node to a SQL Server Failover Cluster option on the Installation Page to “Launch wizard to add a node to an existing SQL Server 2012 Failover Cluster”.
2. Continue with the installation by selecting the same settings which were used for configuring the first node. Here, there is no need to specify the path to any disk or directories while installing SQL Server 2012 on the secondary node.
3. Once the Failover Cluster Installation is completed successfully on the Secondary Node then create the same folder structure on Secondary Node where TempDB Data and Log file will reside i.e., M:\MSSQL11\MSSQL\Data (in this example) and grant read/write permission to SQL Server Service.

Action Item

  • Perform SQL Server Failover Cluster validation by initiating a failover from the Primary to the Secondary Node and vice versa. If you haven't created the same folder structure on the secondary node and granted read/write permission, then the SQL Server Service will not start on the secondary node.
  • In Event Viewer look for Error Events such as 5123, 17204 or any other messages related to the SQL Server Resource not coming online. 
  • Learn How to Move TempDB to Another Drive in SQL Server.


Read more: http://www.mytechmantra.com/LearnSQLServer/Configure-TempDB-on-Local-Disk-in-SQL-Server-2012-Failover-Cluster-to-Improve-Performance/#ixzz3h4aLGdUa

Monitoring Transactional Log File Space Utilization in SQL Server

Introduction

As a best practice, a database administrator should always monitor the space usage within the SQL Server Transaction Log file. This will help you quickly understand how much space the Transaction Log file is using during huge data loads or during routine daily database maintenance activities. The space used by the Transaction Log file will depend upon the Database Recovery Model and how frequently Transaction Log backups are performed. This article explains how to monitor SQL Server transaction log file space usage over a period of time with the help of a SQL Server Agent Job.

Overview of DBCC SQLPERF (LOGSPACE)

Using the DBCC SQLPERF (LOGSPACE) command one can view the transaction log space usage statistics for all databases. The command can also be used to reset wait and latch statistics.
Recommendation: Configure Auto Growth for Data and Log Files. For more information see, How to Change SQL Server Database Auto Growth Settings
A sample output of DBCC SQLPERF (LOGSPACE) command is shown below:-
DBCC SQLPERF(LOGSPACE)
Find below the description of each column in the result set:
  • Database Name: Name of the database for which the log statistics are displayed.
  • Log Size (MB): Current size of the transaction log file in MB.
  • Log Space Used (%): Percentage of the log file currently occupied with transaction log information.
  • Status: Status of the log file. Always set to 0.

Permissions Required to Execute DBCC SQLPERF (LOGSPACE)

To execute DBCC SQLPERF (LOGSPACE) a user will require VIEW SERVER STATE permission on the server.
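The permission can be granted as follows (the login name below is a placeholder):

```sql
USE [master];
GO

-- Grant the server-level permission needed to run DBCC SQLPERF (LOGSPACE)
GRANT VIEW SERVER STATE TO [DomainName\UserName];
GO
```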

Let us go through in details the TSQL code which will help you monitor the Transactional Log space usage.

Quick explanation of the TSQL code

1. Creates a new database by the name DBA
2. Creates a new table by the name MonitorTransactionLogFileUsage within DBA database.
3. Creates a stored procedure by the name CaptureTransactionLogFileUsage within DBA Database.
4. Creates a SQL Server Agent Job to execute the CaptureTransactionLogFileUsage stored procedure. Schedule the job to capture transaction log file utilization at least once every 10 minutes.

Step 1:- Create DBA Database

USE master;
GO

IF DB_ID (N'DBA') IS NOT NULL
 DROP DATABASE DBA;
GO

CREATE DATABASE DBA
GO

Step 2:- Create MonitorTransactionLogFileUsage Table in DBA Database

Use DBA
GO

IF EXISTS 
 (
  SELECT *
  FROM sys.objects
  WHERE object_id = OBJECT_ID(N'[dbo].[MonitorTransactionLogFileUsage]')
  AND TYPE IN (N'U')
 )
 DROP TABLE [dbo].MonitorTransactionLogFileUsage
GO

CREATE TABLE dbo.MonitorTransactionLogFileUsage (
 ID INT IDENTITY(1, 1)
 ,DatabaseName SYSNAME
 ,LogSizeInMB DECIMAL(18, 5)
 ,LogSpaceUsedInPercentage DECIMAL(18, 5)
 ,[Status] INT
 ,RecordTime DATETIME DEFAULT GETDATE()
 )
GO

Step 3:- Create CaptureTransactionLogFileUsage Stored Procedure in DBA Database to store DBCC SQLPERF (LOGSPACE) results

Use DBA
GO

CREATE PROCEDURE CaptureTransactionLogFileUsage
AS
BEGIN
 INSERT INTO dbo.MonitorTransactionLogFileUsage
  (
   DatabaseName
  ,LogSizeInMB
  ,LogSpaceUsedInPercentage
  ,[Status]
  )
 EXEC ('DBCC SQLPERF(LOGSPACE)')
END
GO

Step 4:- Capture Transactional Log File Usage

As a best practice a database administrator can create a new SQL Server Agent Job with the below TSQL code and schedule it to run multiple times during the day. However, we recommend you to run the job once every 10 minutes so that you have considerable amount of history to compare the transactional file growth over a certain period of time.
Use DBA
GO

EXEC CaptureTransactionLogFileUsage
GO
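The Agent job in Step 4 can itself be scripted; a sketch using msdb's job procedures (the job name and schedule name are illustrative):

```sql
USE msdb;
GO

-- Create the job shell
EXEC dbo.sp_add_job @job_name = N'Capture TLOG Usage';

-- Single TSQL step that runs the capture procedure in the DBA database
EXEC dbo.sp_add_jobstep
    @job_name = N'Capture TLOG Usage',
    @step_name = N'Run capture proc',
    @subsystem = N'TSQL',
    @database_name = N'DBA',
    @command = N'EXEC CaptureTransactionLogFileUsage;';

-- Run every 10 minutes, all day, every day
EXEC dbo.sp_add_jobschedule
    @job_name = N'Capture TLOG Usage',
    @name = N'Every 10 minutes',
    @freq_type = 4,               -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,        -- units of minutes
    @freq_subday_interval = 10;

-- Target the local server
EXEC dbo.sp_add_jobserver @job_name = N'Capture TLOG Usage';
GO
```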

Analysing Transactional Log File Usage

Execute the below query to review the transaction log file usage of the Staging database (used here as an example) over the last few days.
USE DBA
GO

SELECT   ID
 , DatabaseName
 , LogSizeInMB
 , LogSpaceUsedInPercentage
 , RecordTime
FROM MonitorTransactionLogFileUsage
 WHERE (DatabaseName = 'Staging')
ORDER BY RecordTime DESC
GO
DBCC SQLPERF (LOGSPACE) Output
A DBA's top priority is to ensure databases are up and running all the time. To effectively manage transaction log growth it is very important to monitor it periodically. Using the code mentioned in this article a DBA can monitor the growth of the transaction log file over time. We recommend scheduling the job to run once every 10 minutes, or at even shorter intervals on busy systems, to identify the pattern of transaction log growth.


Read more: http://www.mytechmantra.com/LearnSQLServer/Monitoring-Transactional-Log-File-Space-Utilization-in-SQL-Server/#ixzz3h4Zi6Wen

Identify Database Bottlenecks with Performance Monitor Counters


Performance Monitor is a free tool provided with Microsoft Windows and is very useful for analysing real-time and historical performance information for your database servers and Windows servers. If you do not have a third-party tool and want to monitor the performance of CPU, disk, memory, or the database server, Performance Monitor is one of the best options: it is free, easy to access, and comes bundled with Windows.
Figure 1: Histogram Bar
To open Performance Monitor, navigate to Control Panel –> Administrative Tools –> Performance Monitor, or go to the Start menu, type perfmon.exe in the search box, and press Enter. In the Performance Monitor window, add counters by clicking the green plus symbol (+), or right-click the performance window and select Add Counters… In the Add Counters dialog you can choose and add counters; highlight a counter name and select Show description to understand what that counter actually measures.
Figure 2: Add Counters
Performance counters can be divided into four major categories: CPU, disk, memory, and SQL Server counters. You can display the graph in different view formats, such as Line, Histogram bar, and Report, and press Ctrl+G to switch between views instantly. Below is the Report view of the counters.
Figure 3: Report View
As there are hundreds of counters, we will discuss a few important and commonly used counters for database server monitoring.

Buffer Cache Hit Ratio

This counter is the percentage of pages that were found in the buffer pool without having to incur a read from disk. The ideal value for Buffer Cache Hit Ratio is anything above 98%; above 90% can be considered satisfactory, while a value below 90% may indicate slow database performance and increased I/O. Keep an eye on this counter and, if required, add more memory, as a low value indicates a memory bottleneck.
This counter is available under SQLServer:Buffer Manager

Page Life Expectancy

This counter is the number of seconds a page will stay in the buffer pool without references. The commonly cited ideal value is 300 seconds (300/60 = 5 minutes), meaning pages remain in memory for at least 5 minutes before being flushed to make room for new pages. If this counter drops below 300 seconds, it indicates SQL Server is facing a memory bottleneck; monitor it continuously and increase memory accordingly.
This counter is available under SQLServer:Buffer Manager
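The Buffer Manager counters can also be read from T-SQL through the sys.dm_os_performance_counters DMV, which is handy when you do not have desktop access to the server. Note that on a named instance the object_name is prefixed with MSSQL$<instance> instead of SQLServer, which is why the sketch below uses a LIKE match. Buffer cache hit ratio is exposed as a raw ratio/base pair, so the percentage must be computed from the two rows.

```sql
-- Buffer Manager counters from the DMV; the hit ratio percentage is
-- (Buffer cache hit ratio / Buffer cache hit ratio base) * 100
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio'
                      ,'Buffer cache hit ratio base'
                      ,'Page life expectancy')
```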

Avg. Disk sec/Read

Avg. Disk sec/Read is the average time, in seconds, of a read of data from the disk. It measures how fast a disk responds when data is read from disk. The ideal value for this counter is anything below 10 ms, and latency of up to 30 ms is still considered fine. If you face this bottleneck, consider using faster disk drives, using multiple controllers, and keeping the log file (.ldf) on a separate disk from the primary and secondary data files (.mdf and .ndf files). In some cases you need to review queries and tune them accordingly.
This counter is available under LogicalDisk.

Avg. Disk sec/Write

Avg. Disk sec/Write is the average time, in seconds, of a write of data to the disk. It measures how fast a disk responds when data is written to disk. As with reads, the ideal value is anything below 10 ms, and latency of up to 30 ms is considered fine. The same remedies apply: faster disk drives, multiple controllers, separating the log file (.ldf) from the data files (.mdf and .ndf files), and query tuning where needed. Occasional spikes can be ignored.
This counter is available under LogicalDisk.
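From inside SQL Server, a per-database-file approximation of these latencies is available through sys.dm_io_virtual_file_stats. Keep in mind these figures are cumulative averages since the instance started, not a live view like Performance Monitor, so they smooth out spikes.

```sql
-- Approximate average read/write latency (ms) per database file,
-- cumulative since the SQL Server instance last started
SELECT DB_NAME(vfs.database_id) AS DatabaseName
 , mf.physical_name
 , vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS AvgReadMs
 , vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgWriteMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY AvgReadMs DESC
```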

Disk Transfers/sec

Disk Transfers/sec is the rate of read and write operations on the disk and helps identify the I/Os per second (IOPS) on the logical disk. This counter should not exceed the IOPS capacity of your disk subsystem. Review Avg. Disk sec/Read and Avg. Disk sec/Write along with this counter before concluding that there is an I/O bottleneck.
This counter is available under LogicalDisk.

User Connections

This counter provides the number of users connected to the SQL Server instance. As the number of connections increases, the server is getting busier; however, total user connections may not equal the number of users. A single user may have multiple connections, and multiple users may share a single connection, depending on the scenario. User connections may be three to four times the number of worker threads. Monitor this counter continuously over time to understand the usage of your database server.
This counter is available under SQLServer:General Statistics.

% Processor Time

% Processor Time is the percentage of elapsed time that all process threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. The ideal value for % Processor Time is below 50%; under 80% can be considered satisfactory. If it exceeds 80% for 5 continuous minutes or more, investigate a CPU bottleneck; spikes above 80% for short periods are normal and can be ignored.
This counter is available under Process.

Total Server Memory (KB)

This counter is the total amount of dynamic memory the server is currently consuming. If SQL Server's total memory is high relative to the total physical memory in the server, you may have to add more memory. Before deciding, also check related counters such as Target Server Memory, Buffer cache hit ratio, Page life expectancy, and Free pages.
This counter is available under SQLServer:Memory Manager

Target Server Memory (KB)

This is the ideal amount of memory SQL Server is willing to consume; the value cannot exceed the max server memory setting. The SQL Server Database Engine consumes as much memory as it requires at startup, and if it needs more memory and physical memory is available, it consumes it and the target memory increases without creating a memory bottleneck. If target memory is high relative to total physical memory, you can add more physical memory, but again consider other counters such as Total Server Memory, Buffer cache hit ratio, Page life expectancy, and Free pages before coming to a conclusion.
This counter is available under SQLServer:Memory Manager
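Both memory counters can be compared side by side from T-SQL via the same sys.dm_os_performance_counters DMV used earlier; the LIKE match again allows for named-instance object name prefixes.

```sql
-- Compare total vs. target server memory; a total persistently well
-- below target can suggest memory pressure worth investigating
SELECT counter_name, cntr_value AS ValueKB
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)'
                      ,'Target Server Memory (KB)')
```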

Lock Waits/sec

This counter is the number of lock requests per second that could not be satisfied immediately and required the caller to wait before being granted the lock. For good database performance there should be few lock waits during transactions. You need to understand your database system, monitor it continuously over time, and keep historical records of lock waits so you can compare against a baseline to determine the normal value. If Lock Waits/sec is increasing, it points to resource contention; review the number of concurrent connections, indexing, and so on, and also check Buffer cache hit ratio and Avg. Disk Queue Length. Any sustained non-zero value indicates that some sort of blocking is happening.
This counter is available under SQLServer:Locks
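A quick way to see how much lock waiting the instance has accumulated is the sys.dm_os_wait_stats DMV, where lock waits show up as LCK_M_% wait types. As with the file stats, these numbers are cumulative since the instance started, so they are best used for trend comparison against a baseline.

```sql
-- Cumulative lock waits since the instance started, worst first
SELECT wait_type
 , waiting_tasks_count
 , wait_time_ms
 , wait_time_ms / NULLIF(waiting_tasks_count, 0) AS AvgWaitMs
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK%'
  AND waiting_tasks_count > 0
ORDER BY wait_time_ms DESC
```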

Average Wait Time (ms)

This counter is the average amount of wait time, in milliseconds, for each lock request that resulted in a wait. Locks are used to restrict simultaneous access to resources by other transactions; if a transaction holds an exclusive lock, other transactions cannot access the resource until it is released. For good SQL Server performance you want a minimal amount of lock waiting. Average wait times above 500 ms indicate excessive blocking; keep an eye on this counter and investigate the actual cause. You can also refer to the Lock Waits/sec and Lock Timeouts/sec counters before coming to a conclusion.
This counter is available under SQLServer:Locks
Beyond the counters discussed here, it is worth going through the other performance monitor counters as well, as they are very useful and important for database server monitoring and performance tuning.
After adding the required counters you can switch to the Line graph view, which will look like the figure below.
Figure 4: Line Graph
To reset the counter values, right-click and select Clear. To remove a particular counter, go back to the Add Counters list and remove it there, or select the counter in the list and press the Delete key.
Note: The counter values given above are based on my personal experience and may differ in other environments. A few counter definitions have been taken from the description field of the Add Counters page.