Friday, August 16, 2013

SQL Server Query to Find Last Successful SQL Server Agent Job Run Status, Date and Time

Introduction


Sometimes it can be difficult to work out the sequence in which SQL Server Agent jobs executed when multiple jobs run on the same server. In such scenarios the T-SQL script below comes in handy. It displays the last successful run status, date and time of each SQL Server Agent job.

Execute the T-SQL script below to find the last successful SQL Server Agent job run status, date and time.

USE msdb
GO

SELECT
     SJ.name AS [Job Name]
    ,JH.run_status AS [Run Status]
    ,MAX(dbo.agent_datetime(JH.run_date, JH.run_time)) AS [Last Time Job Ran On]
FROM dbo.sysjobs SJ
    LEFT OUTER JOIN dbo.sysjobhistory JH
        ON SJ.job_id = JH.job_id
WHERE JH.step_id = 0
    AND JH.run_status = 1
GROUP BY SJ.name, JH.run_status
ORDER BY [Last Time Job Ran On] DESC
GO
In the snippet below you can see the run status and the last successful execution date and time of each SQL Server Agent job.

Find Last Successful SQL Server Agent Job Run Status, Date and Time


Restore SQL Server System Database master.mdf without Backup

Problem:

One of the biggest mistakes many DBAs make is backing up their user databases but not the system databases. Perhaps they underestimate the importance of the system databases, or SQL Server has simply run smoothly enough that they were never confronted with the need to back them up. Whatever the reason, if no backup of the system databases is maintained, this tip will help you get the master database back even when no backup is available.
Solution:
First of all, why is the master database so important that it must be backed up? This database stores all system-level information for the server: the logins, the linked servers and other server-level objects. It can become inconsistent for a variety of reasons, including hardware or software issues, which may prevent the server from starting up and rendering its services.
In the sections below, I will cover how to restore a damaged master database in the following scenarios:
#: A backup of the master database is available and it has to be restored within the RTO. Restoring this database is considered risky because it holds information about all other databases and is a crucial component of the server start-up process.
To get started, SQL Server must be started in single-user mode. For that, open ‘SQL Server Configuration Manager’, open the SQL Server service properties and go to the ‘Startup Parameters’ tab. In the startup parameter text box, enter the “-m” flag and click the ‘Add’ button. Click Apply and restart SQL Server.

For single-user mode to take effect, the service has to be stopped and restarted.


Now, restoring the database through SQL Server Management Studio is a simple process. Select the system database that has to be restored, right-click it and choose ‘Tasks’ >> ‘Restore’ >> ‘Files and Filegroups…’.

You will be presented with two pages: “General” and “Options”. Go through both and make the relevant changes to the restore settings.
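If you prefer T-SQL over the SSMS dialog, a minimal sketch looks like the following; the backup path is an example only, and the statement must be run from a connection made while the instance is in single-user mode:

RESTORE DATABASE master
FROM DISK = N'D:\Backups\master.bak'
WITH REPLACE;
-- After master is restored, the instance shuts itself down;
-- remove the -m startup flag and restart the service normally.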




#: Rebuild the Master Database and Restore it from Backup. If a workable master database is not available, it has to be created first. If a master database is already available, the scenario and solution shared above will work. There are two methods that can be adopted for rebuilding the database.
1)    Use the Setup for Re-Creating the Master Database
When rebuilding the master database, it is important to know that three databases are re-created as a result: master, model and msdb. For SQL Server 2012, there is no need for the setup DVD/ISO. Run the following command from the bootstrap directory (C:\Program Files\Microsoft SQL Server\<SQL Version>\Setup Bootstrap\<Release>).

Syntax Details:
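The documented general form of the rebuild command is roughly as follows; the instance name, sysadmin accounts and SA password are placeholders to be replaced with your own values, and the collation switch is optional:

setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=<InstanceName>
          /SQLSYSADMINACCOUNTS=<Domain\Account> [/SAPWD=<StrongPassword>]
          [/SQLCOLLATION=<CollationName>]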


The values for these parameters should be replaced accordingly. Before running the command, make sure that a healthy copy of the model and msdb databases is saved to a safe location.

Now that the master database is in a workable state, it can be recovered from the available backup. For this, stop all related services and replace the empty msdb and model databases with the copies saved earlier at the safe location.

2)    Use the Template Master Database
In SQL Server 2012, the system database recovery process is a little different. A template of each system database is created at installation time. When setup.exe is run to rebuild the databases, the template databases are copied over the existing database and transaction log files.
However, if the msdb and model databases are in a working state, it is simpler to just copy the template files over the master database files instead of going through the long procedure of rebuilding the databases and then restoring msdb and model from backup. The template files are located at the following location:
If a master database backup is available, the restore instructions can be found in section 1. If there is no backup available, section 3 describes how to deal with the problem.
#: The master database is unusable and there is no backup available. In this situation the master database can be reconstructed. First, connect to the server using SQL Server Management Studio and the admin account that was used during installation or during the rebuild described above. You will notice that, on expanding it, the Databases tree is empty. This is because the master database stores the location information of all databases, and that information is now lost.
It has to be understood that the user databases are still there; they just have to be re-attached. If backups of the databases exist, the job is easy, but if not, the location of each database's files has to be identified and the databases attached manually. For this, T-SQL code such as the following can be used:
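A minimal sketch; the database name and file paths are examples only and must be replaced with the actual locations of the .mdf and .ldf files:

CREATE DATABASE AdventureWorks
ON (FILENAME = N'C:\Data\AdventureWorks.mdf'),
   (FILENAME = N'C:\Data\AdventureWorks_log.ldf')
FOR ATTACH;
GO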

Note: It is necessary to change the database ownership afterwards, because each re-attached database is owned by the account that attached it.
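A one-line sketch of the ownership change; the database and login names are examples:

ALTER AUTHORIZATION ON DATABASE::AdventureWorks TO sa;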

Nevertheless, nowadays the task of restoring system databases has been simplified to a great extent. MDF file recovery programs that get data back from system databases have made restoration without a backup much easier. The plus point is that, to recover a single database, the others do not have to be involved as in the manual procedures. Most DBAs depend on manual fixes in normal scenarios, but these tools are a useful option for dealing with major issues within a smaller RTO.
System Objects Restoration
On rebuilding the master database or restoring it from the template, it is possible that some information, such as linked servers and login details, is lost. To get some of these objects back, you can use SSMS to generate scripts. Although you won't get the objects back in an exact match, there is a fair chance of getting very close.

Extended Events

Extended Events in SQL Server 2012

Hi friends, today we will discuss Extended Events, the lightweight performance monitoring system that takes center stage in SQL Server 2012. In previous versions we used SQL Trace to capture events and SQL Profiler to create and manage traces. SQL Server Profiler and SQL Trace are deprecated and will be removed in a future version of SQL Server, so start using Extended Events, which are more lightweight.
In this blog we will create an Extended Events session by following the steps below:
  1. Open SSMS and connect to your SQL Server instance.
  2. Select a database name from the available database list. I have selected the TSQL2012 database.
  3. Write the following query in the query window:
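The exact query is not critical; any statement that the filter defined in step 10 will match works. A minimal sketch against the TSQL2012 sample database (the table and column names are assumptions based on that filter):

SELECT PP.Productid, PP.Productname, PP.Unitprice
FROM Production.Products AS PP;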
  4. In SSMS, expand the Management folder, right-click the Sessions folder under Extended Events and select New Session Wizard.
  5. On the Introduction page, read the information and click Next.
  6. On the Set Session Properties page, name the session ‘TestEvent’. Click Next.
  7. On the Choose Template page, select Do Not Use a Template and click Next.
  8. On the Select Events To Capture page, type sql in the Event Library search box and select sql_statement_completed. Move it to the Selected events box and click Next.
  9. On the Capture Global Fields page, select the fields you want to capture in this session and click Next. I have selected a few fields such as cpu_id, database_id and database_name.
  10. On the Set Session Event Filter page, create a filter to limit the data you want to capture. I have created filters on the database name and the SQL text: in the Field column select sqlserver.database_name with operator ‘=’ and value ‘TSQL2012’, and add another filter with Field sqlserver.sql_text, operator ‘like_i_sql_unicode_string’ and value ‘SELECT PP.Productid%’. Click Next.
  11. On the Specify Session Data Storage page, specify how you want to collect the data for analysis. Select the option Work with only the most recent data (ring_buffer_target) and click Next.
  12. On the Summary page, verify the selections you made for this event session. Click Finish.
  13. On the last page, Create Event Session, select both checkboxes: Start the event session immediately after session creation and Watch live data on screen as it is captured. Click Close.
  14. The Extended Events Live Data window opens in a new window.
  15. You can check the new event session under the Sessions folder. Right-click the newly created session ‘TestEvent’.
  16. Execute the query from step 3 in the query window and see the event information in the Live Data window.
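For reference, the wizard ultimately generates a CREATE EVENT SESSION statement. A rough sketch of the T-SQL equivalent of the session configured above (the action list and predicate functions are assumptions based on the wizard choices):

CREATE EVENT SESSION TestEvent ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.database_id, sqlserver.database_name)
    WHERE sqlserver.equal_i_sql_unicode_string(sqlserver.database_name, N'TSQL2012')
      AND sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'SELECT PP.Productid%')
)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION TestEvent ON SERVER STATE = START;
GO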
Extended Events is a more lightweight monitoring system than SQL Trace. So friends, start making use of Extended Events, as SQL Trace will be removed in a future version.
That’s all folks for the day. Hope you like this post.

Locking in SQL Server

Locking hints SQL Server

Locking hints are used with SELECT/INSERT/UPDATE/DELETE statements to enforce or change the default locking behavior. Given below are some of the locking hints available in SQL Server; a short usage example follows the list.
  1. ROWLOCK
The ROWLOCK hint tells the query optimizer to lock rows (KEY-level locks) instead of taking PAGE- or TABLE-level locks when reading or modifying data. A row lock is the lowest level of lock possible in SQL Server. One or more specific rows are locked and the others remain available for concurrent sessions.
  2. PAGLOCK
The PAGLOCK hint tells the query optimizer to take page-level locks. An 8 KB page is locked instead of the table.
  3. TABLOCK
It locks the complete table until the end of the statement.
  4. DBLOCK
It locks the entire database until the end of the statement.
  5. UPDLOCK
With the UPDLOCK query hint the SELECT statement takes update locks until the end of the transaction or statement. The default lock granularity level is ROWLOCK. An exclusive lock is taken if UPDLOCK is combined with TABLOCK, or if a table lock is taken for some other reason.
  6. XLOCK
With the XLOCK query hint the SELECT statement takes exclusive locks until the end of the transaction. The default lock granularity level is ROWLOCK if no granularity level is specified.
  7. HOLDLOCK
With the HOLDLOCK query hint the locks are held until the end of the transaction.
  8. NOLOCK
With the NOLOCK query hint no locks are taken. It gives read-uncommitted isolation behavior and is applicable to SELECT statements only.
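As an illustration, here is a minimal sketch of applying hints in practice; the table and column names are examples only. Taking an update lock while reading a row that is about to be modified prevents a concurrent session from acquiring a conflicting update lock on the same row first.

BEGIN TRAN
    -- Read the row with an update lock so no other session can
    -- take an update/exclusive lock on it before we modify it.
    SELECT Balance
    FROM dbo.Accounts WITH (UPDLOCK, ROWLOCK)
    WHERE AccountId = 42;

    UPDATE dbo.Accounts
    SET Balance = Balance - 100
    WHERE AccountId = 42;
COMMIT TRAN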



NOLOCK is the SQL hint

NOLOCK is the SQL hint that allows SQL Server to ignore the normal locks that are placed and held for the duration of a transaction and lets the query complete without having to wait.
Take a simple example:
In normal SQL Server operation, User-A starts a transaction against Table-A. Now User-B tries to read data from the table (for example with SELECT statements). Under the default behavior this is not possible: User-A must complete the transaction, by committing it or rolling it back, before User-B can read from the table.

Think about a situation where User-A takes 15 to 20 minutes to complete the transaction and User-B doesn't have enough time to wait. In such conditions we use the NOLOCK query hint.
However, you need to be very careful when using NOLOCK. Remember that you can get records that were only partially updated or inserted. It is safe to use NOLOCK on rows that you know are not changing right now, for example records that belong to a user who is running reports but not updates, while other users may be doing updates and inserts at the same time.
The syntax is shown below:
SELECT fname, lname
FROM   student_master WITH (NOLOCK)
WHERE  sroll = 123





Locking or Versioning?

We are looking at how a database handles concurrent use and how it provides high performance and throughput while keeping the data safe.

This collection should give you a good idea about it.
In RDBMS systems there are mainly two different ways of solving this issue:
  • Data locking
  • Versioning
Data locking
Locking data has been SQL Server's preferred way of handling concurrency. While, for example, an update statement executes against some data, that data is locked and no one else can read it (depending on the isolation level). Then, depending on how much data is affected, SQL Server can decide to use a row lock, page lock or table lock.
Above, I mentioned isolation levels. The isolation level determines how long a read lock is held (in SQL Server a write lock is always held to the end of the transaction). The default isolation level in SQL Server is Read Committed: a read operation can only read committed data. If data is being updated while you read, that data is locked and you won't be able to view it until the transaction that updates the data has committed (or rolled back).
BEGIN
    BEGIN TRY
        BEGIN TRAN
            <.....>
        COMMIT TRAN
    END TRY
    BEGIN CATCH
        <.....>
        ROLLBACK TRAN
    END CATCH
END
From a concurrency standpoint this is very good; you are ensured that you can only read correct data. From a throughput perspective it is not that good, because your read operation won't return until the write locks have been released.
In order to enhance the throughput other database vendors are using versioning instead of locking.
Data Versioning
In a system using versioning, data is not locked; instead, read operations happen against an older version of the data being manipulated. It is important to notice that the read operation happens against an earlier committed version of the data; we are not reading uncommitted data. Depending on the isolation level, the read operation then reads either the latest committed data or the data as it was when the read operation started.
Using versioning seems like the ideal solution: we can always read data (no locking), and we are always reading committed data! Yes, that is absolutely true. However, versioning doesn't come without a cost. First of all, when an update operation takes place, the data being touched is copied to a separate storage area. This incurs a performance penalty, and you should also be aware of the impact of the volume of data being copied; i.e., do not use versioning if you are doing large batch updates. Read operations will also incur a performance penalty, as they potentially have to read through several versions of the data.
For developers used to database servers using versioning, SQL Server's locking may seem like a step back. In SQL Server 2005 therefore, Microsoft has included the ability to use versioning in addition to locking!
Versioning in SQL Server 2005 is implemented as two new isolation levels:
  • Read Committed Snapshot (READ_COMMITTED_SNAPSHOT)
  • Transaction Isolation Level Snapshot (ALLOW_SNAPSHOT_ISOLATION)
The difference between these two isolation levels is how reads are handled inside the same transaction. For example, if you have a piece of T-SQL code looking like so:


BEGIN TRAN
SELECT * FROM authors WHERE au_id = '111-111-1111';
--do some other work
--now select from authors again
SELECT * FROM authors WHERE au_id = '111-111-1111';
COMMIT TRAN

...and in between the first and second read from authors the record we're reading has been updated, the behavior would be different.

For read committed snapshot you would see the last committed value, whereas for transaction isolation level snapshot the value would be as it was at the first read.
Having seen an example of this, the question is how do we enable these new isolation levels (they are not on by default)?
You enable them on a database-by-database basis, through the ALTER DATABASE syntax.
Read committed snapshot is enabled like so:

use master;
go
ALTER DATABASE pubs
SET READ_COMMITTED_SNAPSHOT ON
Enabling read committed snapshot replaces the default read committed isolation level with read committed snapshot, so in order to use it you do not have to do anything in particular after enabling it.
Transaction isolation level snapshot is enabled like this:
use master;
go
ALTER DATABASE pubs
SET ALLOW_SNAPSHOT_ISOLATION ON
As opposed to read committed snapshot, transaction isolation level snapshot has to be explicitly enabled for the session or statement where you want to use it. The following code snippet shows how to enable it:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT * FROM authors WHERE au_id = '111-111-1111';
--do some other work
--now select from authors again
SELECT * FROM authors WHERE au_id = '111-111-1111';
COMMIT TRAN
Versioning in SQL Server 2005 gives the application developer new means to create applications with great throughput for read operations. It also gives SQL Server the same capabilities that competing database systems have had for quite a while.
Be aware, however, that versioning doesn't come without a cost. When versioning is enabled in a database and, for example, records are updated, the whole record is copied to TempDb (which is where the version store lives), plus an additional 14 bytes per row. In addition to the increased size of TempDb, read operations will also be slower, as they have to read from TempDb and potentially scan through quite a few versions until they find the version of the record they want.
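A quick way to get a feel for this overhead is to look at the version store while versioning is enabled. A minimal sketch using the sys.dm_tran_version_store DMV (it is server-wide, so it can be run from any database; on busy systems the view can be large):

SELECT database_id, COUNT(*) AS version_records
FROM sys.dm_tran_version_store
GROUP BY database_id;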

Detach or take offline in SQL Server

Detach or take offline: both cause a database to become inaccessible to users. The difference is that detach deletes the database metadata from SQL Server, i.e. the database file information, status information and all the other details that we see in the sys.databases view. On the other hand, taking a database offline retains the database metadata in the SQL Server system views.
Let’s take a database offline.
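A minimal sketch of such a statement; the WITH ROLLBACK IMMEDIATE clause, which closes open connections immediately, is an assumption and can be adjusted:

ALTER DATABASE AdventureWorks2014 SET OFFLINE WITH ROLLBACK IMMEDIATE;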
The above query takes database AdventureWorks2014 offline.
1_Detach or take offline in SQL Server
If we query the sys.databases view, the database information can still be seen, as shown in the above snapshot. Also, Object Explorer shows the database as offline, as shown below.
2_Detach or take offline in SQL Server
The database can be brought online by executing the query below.
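A sketch of the corresponding statement:

ALTER DATABASE AdventureWorks2014 SET ONLINE;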
Let’s now detach a database.
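A sketch of the statements involved; connections are dropped by switching the database to single-user mode, and sp_detach_db then detaches it:

USE master;
GO
ALTER DATABASE AdventureWorks2014 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
EXEC sp_detach_db @dbname = N'AdventureWorks2014';
GO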
The above query first drops existing connections by setting the database to single-user mode and then executes sp_detach_db to detach it. Let's query the sys.databases view for the detached database.
3_Detach or take offline in SQL Server
The database details aren't maintained once the database is detached. The data files can then be copied or accessed directly.
Let's now execute the query below to attach the database again.
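A minimal sketch, assuming the data and log files are still in their original location; the file paths are examples only and must be replaced with the actual paths:

CREATE DATABASE AdventureWorks2014
ON (FILENAME = N'C:\Data\AdventureWorks2014_Data.mdf'),
   (FILENAME = N'C:\Data\AdventureWorks2014_Log.ldf')
FOR ATTACH;
GO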

Regards
Ahmad Osama