Monday, August 19, 2013

SQL Server Wait Types – Trace and Tune Database Performance Issues

Tracing performance issues and tuning and maintaining a database server effectively is a big challenge for database administrators. I receive many queries regarding database performance issues: some appear after a migration, some are ad hoc, and some show up as the database grows over a period of time. A primary objective for a DBA is to trace and tune performance issues periodically: generate regular reports, compare new statistics with old statistics, and find opportunities for improvement. In some cases tuning is possible, but in other scenarios it is not and you have to upgrade your hardware to resolve the problem. Here we are going to use wait statistics to trace performance issues.
Normally, front-end users submit their requests and expect output instantly, but the scenario is different behind the server. The request goes from the front-end application to the back-end server, which processes data from the database and sends the output to the end user. This request involves processing time, wait time, idle time, and total response time. You can imagine the situation in a large environment where thousands of users are requesting different data from the server at the same time.
So, here we are going to discuss a couple of DMVs which provide historical and real-time information about bottlenecks. SQL Server 2012 has 649 wait types; a few are very useful for finding performance issues, and others are less useful depending on the scenario and situation. There are different categories of dynamic management objects, such as sys.dm_exec_XX, sys.dm_os_XX, sys.dm_tran_XX, sys.dm_db_XX, sys.dm_io_XX, etc., where XX stands for the rest of the DMO's name.
You can check the list of DMVs in Object Explorer –> your database –> Views –> System Views.
DMV
Here we are going to discuss two dynamic management views (DMVs).
– sys.dm_os_wait_stats
– sys.dm_os_waiting_tasks

sys.dm_os_wait_stats

This dynamic management view (DMV) provides historical information about how long tasks have waited on each wait type since the statistics were last reset or the database instance was restarted.
You need to collect statistics over a period of time to draw any conclusions about issues from this historical data.
SELECT * FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC
GO
This DMV returns five columns:
wait_type:
It provides the name of the wait type. Wait types are categorised into resource waits, queue waits, and external waits.
waiting_tasks_count:
It indicates how frequently the wait occurs: it provides the number of waits on this wait type, and the counter value is cumulative. Higher counts help to trace issues easily.
wait_time_ms:
This is the total wait time in milliseconds, including signal_wait_time_ms. As soon as execution stops and the task waits for a resource, this value starts increasing.
max_wait_time_ms:
It gives the maximum wait time in milliseconds observed for the particular wait type.
signal_wait_time_ms:
It indicates the time between when the thread was signalled (the resource became available) and when it actually started running.
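Since wait_time_ms includes signal_wait_time_ms, you can subtract one from the other to separate resource waits from signal (CPU) waits. A minimal sketch:
-- Split total wait time into resource wait and signal (CPU) wait
SELECT wait_type,
       wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY resource_wait_time_ms DESC
GO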
You can reset the wait statistics with the query below.
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)
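Alternatively, rather than resetting, you can periodically snapshot the DMV into a table and compare snapshots later to build the historical baseline mentioned above. This is only a sketch; the table name dbo.WaitStatsBaseline is just an example, and in practice you would schedule the insert as a SQL Server Agent job.
-- Capture a timestamped snapshot of wait stats into a baseline table
IF OBJECT_ID('dbo.WaitStatsBaseline') IS NULL
    SELECT GETDATE() AS capture_time, * INTO dbo.WaitStatsBaseline FROM sys.dm_os_wait_stats;
ELSE
    INSERT INTO dbo.WaitStatsBaseline SELECT GETDATE(), * FROM sys.dm_os_wait_stats;
GO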

sys.dm_os_waiting_tasks

This dynamic management view provides real-time information about tasks that are waiting for a resource to become available, and it is very useful for tracking bottlenecks. When a task waits for a resource, it is moved to the waiter list; when the resource becomes available, the task is moved to the runnable queue, where it waits for its turn to execute.
You can use this DMV when you observe server slowness or when query execution takes an abnormally long time.
SELECT * FROM sys.dm_os_waiting_tasks
ORDER BY wait_duration_ms DESC
GO
Or you can join the above query with sys.dm_exec_sessions and execute it as shown below.
SELECT DES.login_name, DES.program_name, DES.client_interface_name,
       OWT.wait_type, OWT.wait_duration_ms
FROM sys.dm_os_waiting_tasks AS OWT
INNER JOIN sys.dm_exec_sessions AS DES
ON OWT.session_id = DES.session_id
WHERE DES.is_user_process = 1 AND OWT.wait_duration_ms > 10000
The above query uses the sys.dm_os_waiting_tasks and sys.dm_exec_sessions DMVs and lists user tasks which have been waiting for more than 10 seconds.
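If you also want to see what the waiting sessions are executing, you can additionally join sys.dm_exec_requests and fetch the statement text through sys.dm_exec_sql_text. A minimal sketch; the session_id > 50 filter is just a common heuristic to skip system sessions.
SELECT OWT.session_id, OWT.wait_type, OWT.wait_duration_ms, ST.text AS query_text
FROM sys.dm_os_waiting_tasks AS OWT
INNER JOIN sys.dm_exec_requests AS DER
ON OWT.session_id = DER.session_id
CROSS APPLY sys.dm_exec_sql_text(DER.sql_handle) AS ST
WHERE OWT.session_id > 50
ORDER BY OWT.wait_duration_ms DESC
GO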
As I already said, there are 649 wait types in SQL Server 2012, so it is difficult to explain all of them. I am listing here a few wait types which are commonly traced.
CXPACKET:
The CXPACKET wait type is observed when a query uses parallelism. It does not directly indicate a bottleneck: it appears when a session is waiting on synchronization between parallel threads, and it becomes a concern when a query takes longer than normal. You need to check your queries for missing indexes and review WHERE clause conditions.
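While investigating CXPACKET, it can also help to review the server-wide parallelism settings. The sketch below only displays the current values; any change to these options should be tested against your own workload first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Review current parallelism settings
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'cost threshold for parallelism';
GO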
SOS_SCHEDULER_YIELD:
When CPU utilization is high due to SQL Server, you may observe the SOS_SCHEDULER_YIELD wait type, and you need to investigate your queries and other related parameters.
ASYNC_NETWORK_IO:
As the name suggests, it relates to network issues, but it may also point to a slow client application that is not consuming results fast enough, so you need to check queries that return large result sets.
WRITELOG:
This wait type is observed when SQL Server is waiting for a log flush to the transaction log file to complete. It may indicate a disk bottleneck or the need for faster storage, but you need to investigate further before deciding.
Wait statistics are a huge topic and cannot be covered in a single post, so the remaining wait types will be explained in detail over time.

Identify Database Bottlenecks with Performance Monitor Counters

Performance Monitor is a free tool provided with Microsoft Windows and is very useful for analysing real-time and historical performance information of your database servers and Windows servers. So if you don't have a third-party tool and you want to monitor the performance of CPU, disk, memory, or the database server, this is one of the best tools for you, because it is absolutely free and easy to access as it comes bundled with Microsoft Windows.
Histogram Bar
Figure 1: Histogram Bar
To access Performance Monitor, navigate to Control Panel –> Administrative Tools –> Performance Monitor, or access it from the Windows Start Menu –> type perfmon.exe in the Start search box and press Enter. This opens the Performance Monitor window, where you add counters by clicking the green plus symbol (+) or by right-clicking the performance window and selecting Add counters… This pops up the Add Counters window, where you can choose and add counters. You can highlight a counter name and select Show description to understand the actual meaning and purpose of that counter.
Add Counter
Figure 2: Add Counters
We can divide performance counters into four major categories: CPU, disk, memory, and SQL Server counters. You can display the graph in different view formats: Line, Histogram bar, and Report. You can press Ctrl+G to switch between the different views instantly. Below is the Report view of the counters.
Report View
Figure 3: Report View
As there are hundreds of counters, we will discuss a few important and commonly used counters for database server monitoring.

Buffer Cache Hit Ratio

It is the percentage of pages that were found in the buffer pool without having to incur a read from disk. The ideal value for buffer cache hit ratio is anything above 98%; a value above 90% can be considered satisfactory, while a value below 90% may indicate slow database performance and increased I/O. Keep an eye on this counter and, if required, add more memory, as a low value indicates a memory bottleneck.
This counter is available under SQLServer:Buffer Manager.
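You can also read this counter from T-SQL through sys.dm_os_performance_counters; note that the ratio must be divided by its base counter. A minimal sketch:
SELECT (a.cntr_value * 1.0 / NULLIF(b.cntr_value, 0)) * 100.0 AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
INNER JOIN sys.dm_os_performance_counters AS b
ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
AND b.counter_name = 'Buffer cache hit ratio base'
GO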

Page Life Expectancy

It is the number of seconds a page will stay in the buffer pool without being referenced. The commonly quoted minimum for this counter is 300 seconds (5 minutes). If the counter drops below 300 seconds, it indicates SQL Server is facing a memory bottleneck; you have to monitor it continuously and increase memory accordingly.
This counter is available under SQLServer:Buffer Manager.
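The same counter can be queried from T-SQL as well; a minimal sketch (on NUMA hardware each node also reports its own value under the Buffer Node object):
SELECT cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
AND [object_name] LIKE '%Buffer Manager%'
GO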

Avg. Disk sec/Read

Avg. Disk sec/Read is the average time, in seconds, of a read of data from the disk. It measures how fast the disk responds when data is read from it. The ideal value for this counter is anything below 10 ms, and latency up to 30 ms is generally considered acceptable. If you face this bottleneck, consider using faster disk drives, using multiple controllers, and keeping the log file (.ldf) on a separate disk from the primary and secondary data files (.mdf and .ndf files). In some cases you need to review the queries and tune them accordingly.
This counter is available under LogicalDisk.

Avg. Disk sec/Write

Avg. Disk sec/Write is the average time, in seconds, of a write of data to the disk. It measures how fast the disk responds when data is written to it. The ideal value for this counter is anything below 10 ms, and latency up to 30 ms is generally considered acceptable. The same remedies apply: faster disk drives, multiple controllers, and separating the log file from the data files. You can ignore spikes which occur only occasionally.
This counter is available under LogicalDisk.
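To see read and write latency per database file from inside SQL Server, you can use sys.dm_io_virtual_file_stats; keep in mind these numbers are cumulative since the instance last started. A minimal sketch:
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
ORDER BY avg_read_ms DESC
GO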

Disk Transfers/sec

Disk Transfers/sec is the rate of read and write operations on the disk. It helps to identify the I/Os per second (IOPS) on the logical disk. This counter should not exceed the rated IOPS capacity of your disk subsystem. You have to refer to the Avg. Disk sec/Read and Avg. Disk sec/Write counters along with this counter to reach a conclusion about an I/O bottleneck.
This counter is available under LogicalDisk.

User Connections

This counter provides the number of users connected to the SQL Server instance. An increasing number of connections indicates the server is getting busier; however, total user connections may not equal the number of users. A single user may have multiple connections, and multiple users may share a single connection, depending on the scenario; the number of user connections can also be several times the number of worker threads, since worker threads are shared. You have to monitor this counter continuously over time to understand the usage of your database server.
This counter is available under SQLServer:General Statistics.
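This counter too can be read from T-SQL; a minimal sketch:
SELECT cntr_value AS user_connections
FROM sys.dm_os_performance_counters
WHERE counter_name = 'User Connections'
AND [object_name] LIKE '%General Statistics%'
GO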

% Processor Time

% Processor Time is the percentage of elapsed time that all process threads used the processor to execute instructions. An instruction is the basic unit of execution in a computer, a thread is the object that executes instructions, and a process is the object created when a program is run. Code executed to handle some hardware interrupts and trap conditions is included in this count. The ideal value for % Processor Time is below 50% for best performance; under 80% can be considered satisfactory, but if it exceeds 80% for 5 continuous minutes or more, you have to investigate a CPU bottleneck. Keep in mind that spikes above 80% for short periods are normal and can be ignored.
This counter is available under Process.

Total Server Memory (KB)

It is the total amount of dynamic memory the server is currently consuming. If SQL Server's total memory is high relative to the total physical memory in the server, you may have to add more memory. But before reaching a decision, also check other related counters such as Target Server Memory, Buffer cache hit ratio, Page life expectancy, and Free pages.
This counter is available under SQLServer:Memory Manager.

Target Server Memory (KB)

This is the ideal amount of memory SQL Server is willing to consume. This value cannot exceed the maximum server memory setting. The SQL Server Database Engine acquires as much memory as it requires at startup, and if it needs more memory and physical memory is available, it consumes it and target memory increases without creating a memory bottleneck. If target memory is high relative to total physical memory, you can add more physical memory, but again consider other counters such as Total Server Memory, Buffer cache hit ratio, Page life expectancy, and Free pages before reaching a conclusion.
This counter is available under SQLServer:Memory Manager.
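You can compare total and target memory side by side from T-SQL; a minimal sketch (the counters report values in KB):
SELECT counter_name, cntr_value / 1024 AS memory_mb
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)')
GO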

Lock Waits/sec

This is the number of lock requests that could not be satisfied immediately and required the caller to wait before being granted the lock. To improve database performance, there should be fewer lock waits during transactions. You need to understand your database system, monitor continuously over time, and keep historical records of lock waits so that you can compare against a baseline to determine the normal value. If Lock Waits/sec is increasing, it points to resource contention, and you need to review the number of concurrent connections, indexing, etc. Also check Buffer cache hit ratio and Avg. Disk Queue Length. Any sustained value above zero indicates that some sort of blocking is happening.
This counter is available under SQLServer:Locks.
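When this counter suggests blocking, you can identify who is blocking whom in real time with sys.dm_exec_requests; a minimal sketch:
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0
GO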

Average Wait Time (ms)

It can be defined as the average amount of wait time (in milliseconds) for each lock request that resulted in a wait. Locks are used to restrict simultaneous access to a resource by other transactions; if a transaction holds an exclusive lock, other transactions cannot access the resource until it is released. To improve SQL Server performance, you want the minimum number of lock waits. Average wait times above 500 ms indicate excessive blocking; keep an eye on this and investigate the actual cause. You can also refer to the Lock Waits/sec and Lock Timeouts/sec counters before reaching a conclusion.
This counter is available under SQLServer:Locks.
There are hundreds of counters, and it is worth going through all of the Performance Monitor counters, as they are very useful and important for database server monitoring and performance tuning.
Now, after adding the required counters, you can switch to the Line graph; it will look as shown below.
Line Graph
Figure 4: Line Graph
To refresh the counter values, right-click and select Clear. If you want to remove a particular counter, you can go to the Add Counters list and remove it, or delete it directly by selecting it and pressing the Delete key.
Note: The counter values given above are from my personal experience and may differ in other environments; also, a few counter definitions have been taken from the description field of the Add Counters page.


Reference: Manzoor Siddiqui [www.SQLServerLog.com]

Log Shipping Configuration with SQL Server 2012 Step by Step

Log shipping is a low-cost disaster recovery solution which you can set up between a primary server database and one or more secondary server databases. It is basically a three-step process: backup, copy, and restore. Once implemented, it will automatically back up the transaction log on the primary server as per the job schedule, copy the transaction log backup to the secondary server, and finally restore the log backups there. You must have sysadmin rights on the servers to implement log shipping, and read/write permission on the folder that holds the transaction log backups.
Figure 1: SQL Server Log Shipping
It is better to have the same version of SQL Server on both the primary and secondary servers so that you can fail over manually in either direction. You can fail over from a lower version of SQL Server to a higher version, but the reverse is not possible. Also, your database must be in the full or bulk-logged recovery model to configure and implement log shipping.
Now we will see the step-by-step process of configuring log shipping in SQL Server 2012. For demonstration purposes I am going to use the 'Inventory' database.
First of all, verify whether your database recovery model is full or bulk-logged with the query below.
--Check Recovery Model of Database
USE [master]
GO
SELECT name, compatibility_level, recovery_model, recovery_model_desc, state_desc
FROM sys.databases WHERE name = 'Inventory'
GO
If it is in the simple recovery model, then alter it with the query below.
USE [master]
GO
ALTER DATABASE [Inventory] SET RECOVERY FULL WITH NO_WAIT
GO
Now take a full backup of the Inventory database, as we are going to restore it on the secondary server to initialize log shipping. [Click here to know how to backup a database.]
Now go to the secondary server and restore the 'Inventory' database with the recovery state set to RESTORE WITH STANDBY. [Click Here to know how to restore database.]
Restore Database on Secondary Server with Standby
Figure 2: Restore Database on Secondary Server with Standby mode
On primary server, right click on ‘Inventory’ database and click on Properties.
Select the Transaction Log Shipping page and check 'Enable this as a primary database in a log shipping configuration'. [refer Figure-3]
Database Properties
Figure 3: Database Properties
Click on Backup Settings… A Transaction Log Backup Settings window will pop up, where you have to specify the network path to the backup folder (or, if the backup folder is located on the primary server, the local folder path); then click OK. [refer Figure-4]
Transaction Log Backup Settings
Figure 4: Transaction Log Backup Settings
Make sure that you have assigned read and write permission on this folder to the SQL Server service account, then click OK.
In the next step you have to map the secondary server instance and database, so click on the Add button under the secondary databases option and you will get the screen below.
Secondary Database Settings
Figure 5: Secondary Database Settings – Initialize Secondary Database
Now click the Connect… button to map the secondary server instance and database.
On this page, you have to configure the three tabs listed below.
– Initialize Secondary Database
– Copy Files
– Restore Transaction Log
In the Initialize Secondary Database tab, select the option 'No, the secondary database is initialized', because we have already restored the database on the secondary server in the previous step. [refer Figure-5]
Now, in the Copy Files tab, specify the destination folder for the copied files (i.e., the network shared folder path), then select the Restore Transaction Log tab.
Secondary Database Settings - Copy Files
Figure 6: Secondary Database Settings – Copy Files
Choose Standby mode for the database state when restoring backups, and select 'Disconnect users in the database when restoring backups'. Now click the OK button. This will create a job to restore the transaction logs.
Secondary Database Settings - Restore Transaction Log
Figure 7: Secondary Database Settings – Restore Transaction Log
So, finally, we have configured log shipping. I am not setting up a monitor server instance here, as it is optional and I do not have that requirement, but if you need one you can select 'Use a monitor server instance' and do the necessary settings. [refer Figure-8]
Database Properties Configuration Completed
Figure 8: Database Properties Configuration Completed
Once you click OK, the configured settings will be saved and the scheduled jobs will be created.
Figure 9: Save Log Shipping Configuration
You can verify the log shipping configuration details by executing the query below.
Log Shipping Status
Figure 10: Log Shipping Status
USE [master]
GO
EXEC sp_help_log_shipping_primary_database @database = 'Inventory'
GO
After a successful log shipping configuration, multiple jobs are created on the primary and secondary server instances. You can verify the jobs under the SQL Server Agent node.
Jobs Created on Primary and Secondary Server
Figure 11: Jobs Created on Primary and Secondary Servers
Finally, as per the job schedules, the transaction log backups will be copied to and restored on the secondary server database. You can insert some records into the Inventory database tables and verify that they appear on the secondary.
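You can also check the health of the configuration from T-SQL with the built-in monitor procedure, which reports the last backup, copy, and restore activity visible on the server where you run it.
EXEC master.dbo.sp_help_log_shipping_monitor
GO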

Configure Log Shipping by T-SQL

If you want to configure log shipping by T-SQL, you can do so by following the steps below.
– Take a full backup of the database on the primary server.
– Restore the database on the secondary server in standby mode.
– Execute the script below on the primary server (as its header notes, it runs in the context of the msdb database) to set up log shipping and create the required jobs on the primary server.
-- Execute the following statements at the Primary to configure Log Shipping 
-- for the database [Manzoor].[Inventory],
-- The script needs to be run at the Primary in the context of the [msdb] database. 
------------------------------------------------------------------------------------- 
-- Adding the Log Shipping configuration
-- ****** Begin: Script to be run at Primary: [Manzoor] ******
DECLARE @LS_BackupJobId AS uniqueidentifier 
DECLARE @LS_PrimaryId AS uniqueidentifier 
DECLARE @SP_Add_RetCode As int
EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database 
 @database = N'Inventory' 
 ,@backup_directory = N'C:\sidd\LogShip' 
 ,@backup_share = N'\\MANZOOR\LogShip' 
 ,@backup_job_name = N'LSBackup_Inventory' 
 ,@backup_retention_period = 4320
 ,@backup_compression = 2
 ,@backup_threshold = 60 
 ,@threshold_alert_enabled = 1
 ,@history_retention_period = 5760 
 ,@backup_job_id = @LS_BackupJobId OUTPUT 
 ,@primary_id = @LS_PrimaryId OUTPUT 
 ,@overwrite = 1
IF (@@ERROR = 0 AND @SP_Add_RetCode = 0) 
BEGIN
DECLARE @LS_BackUpScheduleUID As uniqueidentifier 
DECLARE @LS_BackUpScheduleID AS int
EXEC msdb.dbo.sp_add_schedule 
 @schedule_name =N'LSBackupSchedule_Manzoor1' 
 ,@enabled = 1 
 ,@freq_type = 4 
 ,@freq_interval = 1 
 ,@freq_subday_type = 4 
 ,@freq_subday_interval = 15 
 ,@freq_recurrence_factor = 0 
 ,@active_start_date = 20150621 
 ,@active_end_date = 99991231 
 ,@active_start_time = 0 
 ,@active_end_time = 235900 
 ,@schedule_uid = @LS_BackUpScheduleUID OUTPUT 
 ,@schedule_id = @LS_BackUpScheduleID OUTPUT
EXEC msdb.dbo.sp_attach_schedule 
 @job_id = @LS_BackupJobId 
 ,@schedule_id = @LS_BackUpScheduleID
EXEC msdb.dbo.sp_update_job 
 @job_id = @LS_BackupJobId 
 ,@enabled = 1
END
EXEC master.dbo.sp_add_log_shipping_alert_job
EXEC master.dbo.sp_add_log_shipping_primary_secondary 
 @primary_database = N'Inventory' 
 ,@secondary_server = N'MANZOOR\SQL2' 
 ,@secondary_database = N'Inventory' 
 ,@overwrite = 1
-- ****** End: Script to be run at Primary: [Manzoor] ******
Now execute the script below on the secondary server (again in the context of the msdb database) to set up log shipping and create the required jobs on the secondary server.
-- Execute the following statements at the Secondary to configure Log Shipping 
-- for the database [MANZOOR\SQL2].[Inventory],
-- the script needs to be run at the Secondary in the context of the [msdb] database. 
------------------------------------------------------------------------------------- 
-- Adding the Log Shipping configuration
-- ****** Begin: Script to be run at Secondary: [MANZOOR\SQL2] ******
DECLARE @LS_Secondary__CopyJobId AS uniqueidentifier 
DECLARE @LS_Secondary__RestoreJobId AS uniqueidentifier 
DECLARE @LS_Secondary__SecondaryId AS uniqueidentifier 
DECLARE @LS_Add_RetCode As int
EXEC @LS_Add_RetCode = master.dbo.sp_add_log_shipping_secondary_primary 
 @primary_server = N'Manzoor' 
 ,@primary_database = N'Inventory' 
 ,@backup_source_directory = N'\\MANZOOR\LogShip' 
 ,@backup_destination_directory = N'\\MANZOOR\LogShip' 
 ,@copy_job_name = N'LSCopy_Manzoor_Inventory' 
 ,@restore_job_name = N'LSRestore_Manzoor_Inventory' 
 ,@file_retention_period = 4320 
 ,@overwrite = 1 
 ,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT 
 ,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT 
 ,@secondary_id = @LS_Secondary__SecondaryId OUTPUT
IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 
BEGIN
DECLARE @LS_SecondaryCopyJobScheduleUID As uniqueidentifier 
DECLARE @LS_SecondaryCopyJobScheduleID AS int
EXEC msdb.dbo.sp_add_schedule 
 @schedule_name =N'DefaultCopyJobSchedule' 
 ,@enabled = 1 
 ,@freq_type = 4 
 ,@freq_interval = 1 
 ,@freq_subday_type = 4 
 ,@freq_subday_interval = 15 
 ,@freq_recurrence_factor = 0 
 ,@active_start_date = 20150621 
 ,@active_end_date = 99991231 
 ,@active_start_time = 0 
 ,@active_end_time = 235900 
 ,@schedule_uid = @LS_SecondaryCopyJobScheduleUID OUTPUT 
 ,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT
EXEC msdb.dbo.sp_attach_schedule 
 @job_id = @LS_Secondary__CopyJobId 
 ,@schedule_id = @LS_SecondaryCopyJobScheduleID
DECLARE @LS_SecondaryRestoreJobScheduleUID As uniqueidentifier 
DECLARE @LS_SecondaryRestoreJobScheduleID AS int
EXEC msdb.dbo.sp_add_schedule 
 @schedule_name =N'DefaultRestoreJobSchedule' 
 ,@enabled = 1 
 ,@freq_type = 4 
 ,@freq_interval = 1 
 ,@freq_subday_type = 4 
 ,@freq_subday_interval = 15 
 ,@freq_recurrence_factor = 0 
 ,@active_start_date = 20150621 
 ,@active_end_date = 99991231 
 ,@active_start_time = 0 
 ,@active_end_time = 235900 
 ,@schedule_uid = @LS_SecondaryRestoreJobScheduleUID OUTPUT 
 ,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT
EXEC msdb.dbo.sp_attach_schedule 
 @job_id = @LS_Secondary__RestoreJobId 
 ,@schedule_id = @LS_SecondaryRestoreJobScheduleID
END
DECLARE @LS_Add_RetCode2 As int
IF (@@ERROR = 0 AND @LS_Add_RetCode = 0) 
BEGIN
EXEC @LS_Add_RetCode2 = master.dbo.sp_add_log_shipping_secondary_database 
 @secondary_database = N'Inventory' 
 ,@primary_server = N'Manzoor' 
 ,@primary_database = N'Inventory' 
 ,@restore_delay = 0 
 ,@restore_mode = 1 
 ,@disconnect_users = 0 
 ,@restore_threshold = 45 
 ,@threshold_alert_enabled = 1 
 ,@history_retention_period = 5760 
 ,@overwrite = 1
END
IF (@@error = 0 AND @LS_Add_RetCode = 0) 
BEGIN
EXEC msdb.dbo.sp_update_job 
 @job_id = @LS_Secondary__CopyJobId 
 ,@enabled = 1
EXEC msdb.dbo.sp_update_job 
 @job_id = @LS_Secondary__RestoreJobId 
 ,@enabled = 1
END
-- ****** End: Script to be run at Secondary: [MANZOOR\SQL2] ******


Reference: Manzoor Siddiqui [www.SQLServerLog.com]




Log Shipping



In my previous post, I gave a short note on log shipping. Some of my readers asked me to write about it in more detail. So, in this article I try to summarize the concept of log shipping and provide hands-on configuration steps along with it. Hope you enjoy this article.

What is it?

Log shipping is the process of automatically backing up the transaction log of a database on one server and restoring the backups on another server. It is supported in the Standard, Web, and Enterprise editions of SQL Server.

Benefits of Log Shipping

It gives us a disaster recovery solution for a single primary database and one or more secondary databases, each on a separate instance of SQL Server.
In the interval between restore jobs, the secondary database supports read-only access (when in standby mode).
It allows a user-specified delay between when the primary server backs up the log of the primary database and when the secondary server must restore the log backup.

Terminology

Before starting with log shipping, we have to understand the terminology mentioned below.

Primary Server

The instance of SQL Server that is your production server.

Primary Database

The database on the primary server that you want to back up to another server.

Secondary Server

The instance of SQL Server that holds the standby copy of the primary database. The secondary database may be in either the RECOVERING state or the STANDBY state, which leaves the database available for limited read-only access.

Monitor Server

An optional instance of SQL Server that tracks all of the details of log shipping.

Backup Job

The SQL Server Agent job that performs the backup operations.

Copy Job

A SQL Server Agent job that copies the backup files from the primary server to a destination on the secondary server and logs history on the secondary server and the monitor server.

Restore Job

A SQL Server Agent job that restores the copied backup files to the secondary database. It logs history on the local server and the monitor server, and deletes old files and old history information.

Alert Job

A SQL Server Agent job that raises alerts for the primary and secondary databases when a backup or restore operation does not complete successfully within a specified threshold.

Log Shipping Operations

It consists of three operations:

1. Back up the transaction log at the primary server instance.

2. Copy the transaction log files to the secondary server instance (there may be multiple).

3. Restore the log backup on the secondary server instance (there may be multiple).

The following figure illustrates the log shipping configurations.




The deployment process for log shipping is outlined below.

We can enable log shipping with the following simple steps.

1. First, choose the servers for the primary, secondary, and optional monitor roles.

2. SQL Server 2008 and later versions support backup compression. When configuring log shipping, we can control the backup compression behavior.

3. Create a file share for the transaction log backups, ideally on a server that is not part of the log shipping configuration. To maximize the availability of the primary server, Microsoft recommends putting the backup share on a separate host computer.

4. Choose the backup schedule for the primary database server.

5. Create the folder on the secondary server into which the transaction log backups will be copied.

6. Configure one or more secondary servers and secondary databases.

7. Optionally, configure the monitor server.

8. You must have sysadmin rights on each server instance to enable log shipping.

Steps to configure Log Shipping

1. The database must be in the full or bulk-logged recovery model.
SELECT name, recovery_model_desc
FROM sys.databases WHERE name = 'My_DB'

USE [master]
GO
ALTER DATABASE [My_DB] SET RECOVERY FULL WITH NO_WAIT
GO
2. In SQL Server Management Studio, right-click the selected database and select Properties. Then select the Transaction Log Shipping page and check "Enable this as a primary database in a log shipping configuration".


3. The next step is to configure and schedule a transaction log backup by clicking "Backup Settings…".
If you are creating backups on a network share, enter the network path; for the local machine you can specify the local folder path. The backup compression feature was introduced in SQL Server 2008 Enterprise edition. While configuring log shipping, we can control the backup compression behavior of log backups by specifying the compression option. When this step is completed, the backup job is created on the primary server.
4. In this step we will configure the secondary instance and database. Click on the Add… button to configure the secondary server instance and database. You can add multiple servers if you want to set up one-to-many log shipping.


Initialize Secondary Database tab

In this step you specify how to initialize the data on the secondary server. There are three options: have log shipping generate a full backup and restore it, use an existing backup and restore it, or do nothing because you have manually restored the database and put it into the correct state to receive additional backups.




Copy Files Tab

In this tab you have to specify the path of the destination shared folder where the log shipping copy job will copy the T-Log backup files. This step creates the copy job on the secondary server.



Restore Transaction Log Tab

Here we have to specify the database restore state information and the restore schedule. This creates the restore job on the secondary server.



5. In this step we will configure log shipping monitoring, which will notify us in case of any failure. Please note that the monitoring configuration is optional. Click on the Settings… button, which will take you to the "Log Shipping Monitor Settings" screen. Click on the Connect… button to set up a monitor server. Monitoring can be done from the source server, the target server, or a separate SQL Server instance. We can configure alerts on the source/destination server if the respective jobs fail. Lastly, we can also configure how long job history records are retained in the msdb database. Please note that you cannot add a monitor instance once log shipping is configured.

6. Click OK to finish.



Hope you like it.


Detailed Description of the Log File in SQL Server

Hi,

In this post I try to discuss the log file architecture in SQL Server.

In the log file, each and every transaction is recorded with a unique number called the Log Sequence Number (LSN).

The log file is split into multiple parts called virtual log files (VLFs). A VLF can be in one of four states:

1. Active VLF
2. Recoverable VLF
3. Reusable VLF
4. Unused VLF

Active VLF: A VLF whose transactions are still waiting to be delivered to another server (due to log shipping or replication) is called an active VLF.

Recoverable VLF: Once the transactions are delivered to the other server, that VLF becomes a recoverable VLF.

Reusable VLF: Once we take a log backup, that VLF becomes a reusable VLF.

Unused VLF: A VLF that has never been used is called an unused VLF.
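You can inspect the VLFs of a database with the undocumented but widely used DBCC LOGINFO command; a minimal sketch (My_DB follows the example used earlier). In the output, a Status of 2 means the VLF is still in use (active or recoverable), while 0 means it is reusable or unused.
USE [My_DB]
GO
-- Returns one row per VLF; check the Status and FSeqNo columns
DBCC LOGINFO
GO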



http://sqldbahut.blogspot.in/2015/01/very-use-full-video-log-shipping-part-1.html
http://sqldbahut.blogspot.in/2015/01/very-use-full-log-shipping-video-part-2.html