Locking hints are used with SELECT/INSERT/UPDATE/DELETE statements to enforce or change the default locking behavior. Given below are some of the locking hints available in SQL Server.
  1. ROWLOCK
The ROWLOCK hint tells the query optimizer to lock rows (KEY-level locks) instead of taking PAGE- or TABLE-level locks when reading or modifying data. A row lock is the lowest level of lock possible in SQL Server. One or more specific rows are locked and the others are available for concurrent sessions.
  2. PAGLOCK
The PAGLOCK hint tells the query optimizer to take page-level locks. An 8 KB page is locked instead of the whole table.
  3. TABLOCK
It locks the complete table until the end of the statement.
  4. DBLOCK
It locks the entire database until the end of the statement.
  5. UPDLOCK
With the UPDLOCK query hint the SELECT statement takes update locks until the end of the transaction or statement. The default lock granularity level is ROWLOCK. An exclusive lock is taken if UPDLOCK is combined with TABLOCK, or if a table lock is taken for some other reason.
  6. XLOCK
With the XLOCK query hint the SELECT statement takes exclusive locks until the end of the transaction. The default lock granularity level is ROWLOCK if no granularity level is specified.
  7. HOLDLOCK
With the HOLDLOCK query hint the locks are held until the end of the transaction (it is equivalent to the SERIALIZABLE hint).
  8. NOLOCK
With the NOLOCK query hint no locks are taken. It gives read-uncommitted isolation behavior and is applicable to SELECT statements only.
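To illustrate, the hints above are applied in a WITH (...) clause after the table name, and several hints can be combined. A minimal sketch, using a hypothetical orders table (not from the original text):

```sql
-- Force row-level locks while updating:
UPDATE orders WITH (ROWLOCK)
SET status = 'shipped'
WHERE order_id = 42;

-- Read with an update lock held to the end of the transaction;
-- combining UPDLOCK with HOLDLOCK avoids the classic
-- read-then-update deadlock between two sessions:
BEGIN TRAN;
SELECT status FROM orders WITH (UPDLOCK, HOLDLOCK) WHERE order_id = 42;
UPDATE orders SET status = 'shipped' WHERE order_id = 42;
COMMIT TRAN;

-- Lock the whole table for the duration of the statement:
SELECT COUNT(*) FROM orders WITH (TABLOCK);
```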



NOLOCK

NOLOCK is a SQL hint that allows SQL Server to ignore the locks that are normally placed and held for a transaction, allowing the query to complete without having to wait.
Take a simple example:
Under SQL Server's normal behavior, User A starts a transaction against Table A. If User B then tries to read data from the table (e.g. with a SELECT statement), this is NOT possible: User A must complete the transaction by committing or rolling it back before User B can read the table.

Think about a situation where User A takes 15 to 20 minutes to complete the transaction and User B does not have enough time to wait. In such conditions we use the NOLOCK query hint.
However, you need to be very careful when using NOLOCK. Remember that you can get records that were only partially updated or inserted. It is safe to use NOLOCK on rows that you know are not changing right now. For example, records that belong to a user who is running reports but not updates, while other users may be doing updates/inserts at the same time.
The syntax is shown below:
SELECT fname, lname
FROM   student_master WITH (NOLOCK)
WHERE  sroll = 123;
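The same dirty-read behavior can also be requested for the whole session instead of per table reference. A sketch of the session-level equivalent, using the same student_master table:

```sql
-- Session-wide equivalent of adding WITH (NOLOCK) to every table:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT fname, lname
FROM   student_master
WHERE  sroll = 123;

-- Restore the default isolation level afterwards:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```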





Locking or Versioning?

We are looking at how a database handles concurrent use, and how it provides high performance and throughput together with better data security.

This collection gives you a good overview of the topic.
In RDBMS systems there are mainly two different ways of solving this issue:
  • Data locking
  • Versioning
Data locking
Locking data has been SQL Server's preferred way of handling concurrency. While, for example, an update statement executes against some data, that data is locked and no one else can read it (depending on the isolation level). Then, depending on how much data is affected, SQL Server can decide to use a row lock, page lock, or table lock.
Above, I mentioned isolation levels. The isolation level determines how long a read lock is held (in SQL Server a write lock is always held to the end of the transaction). The default isolation level in SQL Server is Read Committed: a read operation can only read committed data. If data is being updated while you read, that data is locked and you won't be able to view it until the transaction that updates the data has committed (or rolled back).
BEGIN
            BEGIN TRY
                        BEGIN TRAN
            <.....>
                        COMMIT TRAN
            END TRY
            BEGIN CATCH
                    <.....>
                ROLLBACK TRAN   
            END CATCH
END
From a concurrency standpoint this is very good: you are ensured that you can only read correct data. From a throughput perspective it is not that good, because your read operation won't return until the write locks have been released.
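To see this blocking behavior under read committed, you can try the following in two separate sessions (the accounts table here is hypothetical):

```sql
-- Session 1: update inside an open transaction; the write lock is held.
BEGIN TRAN;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
-- ...transaction deliberately left open...

-- Session 2: under READ COMMITTED this SELECT blocks until
-- session 1 commits or rolls back.
SELECT balance FROM accounts WHERE account_id = 1;

-- Session 1: releasing the lock unblocks session 2.
COMMIT TRAN;
```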
In order to enhance throughput, other database vendors use versioning instead of locking.
Data Versioning
In a system using versioning, data is not locked; instead, read operations happen against an older version of the data being manipulated. It is important to note that the read operation happens against an earlier committed version of the data; we are not reading uncommitted data. Depending on the isolation level, the read operation then reads either the latest committed data or the data as it was when the read operation started.
Using versioning seems like the ideal solution: we can always read data (no locking), and we are always reading committed data! Yes, that is absolutely true. However, versioning doesn't come without a cost. First of all, when an update operation takes place against the data, the data being touched is copied to a separate storage area. This incurs a performance penalty, but also be aware of the impact of the volume of data being copied; i.e., do not use versioning if you are doing large batch updates. Read operations will also incur a performance penalty, as you potentially have to read through different versions of the data.
For developers used to database servers using versioning, SQL Server's locking may seem like a step back. In SQL Server 2005 therefore, Microsoft has included the ability to use versioning in addition to locking!
Versioning in SQL Server 2005 is implemented as two new isolation levels:
  • Read Committed Snapshot (READ_COMMITTED_SNAPSHOT)
  • Transaction Isolation Level Snapshot (ALLOW_SNAPSHOT_ISOLATION)
The difference between these two isolation levels is how reads are handled inside the same transaction. For example, suppose you have a piece of T-SQL code like the following:


BEGIN TRAN
SELECT * FROM authors WHERE au_id = '111-111-1111';
-- do some other work
-- now select from authors again
SELECT * FROM authors WHERE au_id = '111-111-1111';
COMMIT TRAN

...and if, between the first and second read from authors, the record we're reading has been updated, the behavior would differ between the two isolation levels.

Under read committed snapshot you would see the last committed value in the second read, whereas under transaction isolation level snapshot the value would be the same as it was for the first read.
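The difference can be sketched as a timeline. Assume the authors row initially has au_lname = 'White', and a second session updates it mid-transaction (the column values here are illustrative):

```sql
-- Reader session, under snapshot isolation:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT au_lname FROM authors WHERE au_id = '111-111-1111';  -- 'White'

-- Meanwhile, a writer session runs and commits:
--   UPDATE authors SET au_lname = 'Black' WHERE au_id = '111-111-1111';

SELECT au_lname FROM authors WHERE au_id = '111-111-1111';
-- Snapshot isolation: still 'White' (the version as of the
-- start of this transaction).
-- Read committed snapshot would instead return 'Black'
-- (the last committed value at the time of the read).
COMMIT TRAN;
```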
Having seen an example of this, the question is: how do we enable these new isolation levels (they are not on by default)?
You enable them on a database-by-database level, and you do it through the ALTER DATABASE syntax.
Read committed snapshot is enabled like so:

use master;
go
ALTER DATABASE pubs
SET READ_COMMITTED_SNAPSHOT ON
Enabling read committed snapshot replaces the default read committed isolation level with read committed snapshot, so in order to use it you do not have to do anything in particular after enabling it.
Transaction isolation level snapshot is enabled like this:
use master;
go
ALTER DATABASE pubs
SET ALLOW_SNAPSHOT_ISOLATION ON
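You can verify which versioning options are enabled for each database through the sys.databases catalog view:

```sql
-- Per database: whether read committed snapshot is on, and the
-- current state of snapshot isolation (OFF, ON, or a transition state).
SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM   sys.databases;
```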
As opposed to read committed snapshot, transaction isolation level snapshot has to be explicitly enabled for the session or statement where you want to use it. The following code snippet shows how:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN
SELECT * FROM authors WHERE au_id = '111-111-1111';
-- do some other work
-- now select from authors again
SELECT * FROM authors WHERE au_id = '111-111-1111';
COMMIT TRAN
Versioning in SQL Server 2005 gives the application developer new means to create applications with great throughput for read operations. It also gives SQL Server the same capabilities that competing database systems have had for quite a while.
Be aware, however, that versioning doesn't come without a cost. When versioning is enabled in a database and you, for example, update records, the whole record is copied to TempDb (which is where the version store lives), plus an additional 14 bytes. In addition to the increased size of TempDb, read operations will also be slower, as they have to read against TempDb and potentially through quite a few versions until they find the version of the record they want.
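The size of the version store in TempDb can be inspected through a dynamic management view; for example:

```sql
-- Approximate version store size in KB (pages are 8 KB each):
SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM   sys.dm_db_file_space_usage;
```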