Wednesday, November 30, 2011

How to check whether the full-text search feature is installed on the SQL Server

Since SQL Server 2008, full-text indexing is always enabled on user databases and cannot be disabled. However, to make use of it, we must create a full-text index.

Also, the MSSQLFDLauncher service must be installed on the host. The service is instance-specific; the actual executable is fdlauncher.exe.

We can use SERVERPROPERTY to determine whether the full-text search feature is installed on the SQL Server.

  select SERVERPROPERTY('IsFullTextInstalled')
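To actually use full-text search, a full-text catalog and index must exist. A minimal sketch, assuming the AdventureWorks Production.ProductDescription table and its primary key index name (the catalog name ftCatalog is arbitrary):

```sql
USE AdventureWorks;
GO
-- Create a default full-text catalog.
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT;
GO
-- A full-text index requires a unique, single-column, non-nullable key index.
CREATE FULLTEXT INDEX ON Production.ProductDescription([Description])
    KEY INDEX PK_ProductDescription_ProductDescriptionID
    ON ftCatalog;
GO
```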


TVF and APPLY

TVF stands for table-valued function: a user-defined function that returns a table. It is also an alternative to a view. While views are limited to a single SELECT statement, a TVF can contain additional statements, allowing more powerful logic than is possible in views.

Create View sample Code

  use AdventureWorks;
  GO
  CREATE VIEW vSalesCount(ProductID,CountProduct)
  AS
  SELECT ProductID,COUNT(ProductID) FROM sales.SalesOrderDetail GROUP BY ProductID having ProductID > 800
  GO

Table Value Function

  use AdventureWorks;
  -- N'IF' = inline table-valued function (the type this CREATE produces)
  IF OBJECT_ID(N'dbo.ufnGetSalesCount', N'IF') IS NOT NULL
      DROP FUNCTION dbo.ufnGetSalesCount;
  GO
  CREATE FUNCTION dbo.ufnGetSalesCount(@checkProductID int)
  RETURNS TABLE
  AS
  RETURN (
         SELECT ProductID,COUNT(ProductID) AS NumCount FROM sales.SalesOrderDetail
         GROUP BY ProductID HAVING ProductID > @checkProductID
      );
  GO
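Since the heading mentions APPLY, here is a hedged sketch of calling the function per row with CROSS APPLY (the join back on ProductID is just one way to correlate the results):

```sql
USE AdventureWorks;
GO
-- For each product, pull its sales count from the TVF's result set.
SELECT p.ProductID, f.NumCount
FROM Production.Product AS p
CROSS APPLY dbo.ufnGetSalesCount(800) AS f
WHERE f.ProductID = p.ProductID;
```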

Tuesday, November 29, 2011



We can use TABLESAMPLE to limit the returned result set. The difference between TABLESAMPLE and TOP is that TOP always retrieves the top N rows, while TABLESAMPLE returns a random sampling of the data.

TABLESAMPLE works at the 8K-page level: it samples whole pages and returns every row on the sampled pages. That being said, if the table has 20 pages but only one page holds all the table's data, it is possible no rows come back.

  USE AdventureWorks;
  SELECT *
  FROM person.address TABLESAMPLE (10 PERCENT)


  USE AdventureWorks;
  SELECT *
  FROM person.address TABLESAMPLE (1000 ROWS)


Limiting Result Sets by Using TABLESAMPLE

CTE, VIEW and Derived Table


A view is a database object stored in the database. Once the view is created, it can be accessed from different sessions.

  use AdventureWorks;
  GO
  CREATE VIEW vSalesCount(ProductID,CountProduct)
  AS
  SELECT ProductID,COUNT(ProductID) FROM sales.SalesOrderDetail GROUP BY ProductID
  GO
  SELECT * FROM vSalesCount

It is also possible to create an index on the view (an indexed view) to help performance.
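For comparison, the same aggregation written as a CTE and as a derived table; unlike the view, both exist only for the duration of the statement that defines them:

```sql
-- CTE version
WITH SalesCount (ProductID, CountProduct) AS
(
    SELECT ProductID, COUNT(ProductID)
    FROM sales.SalesOrderDetail
    GROUP BY ProductID
)
SELECT * FROM SalesCount;

-- Derived table version
SELECT * FROM
(
    SELECT ProductID, COUNT(ProductID) AS CountProduct
    FROM sales.SalesOrderDetail
    GROUP BY ProductID
) AS dt;
```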


Monday, November 28, 2011

Something about INSERT

OUTPUT syntax

Create base table for testing

  USE AdventureWorks;
  GO
  CREATE TABLE tOUTPUT
  (SID int IDENTITY, EName varchar(256), ModifiedDate datetime)
  GO

Use INSERTED to retrieve the newly inserted data; suser_name() returns who is running the insert.

  INSERT INTO tOUTPUT(EName, ModifiedDate)
      OUTPUT INSERTED.SID, suser_name(), INSERTED.ModifiedDate
  VALUES('Jack', getdate())


Sunday, November 27, 2011

Web UI Automation using PowerShell

Recently, I came across the need to simulate user browser behavior: going to a particular URL, filling in a form, submitting the request, and even downloading a file.

There definitely will be more than one way to do it. I intend to focus on solutions using the languages I am familiar with: Perl and PowerShell.

Perl: LWP::UserAgent seems to be the way to go; it can send HTTP POST requests. During my very short period of testing, it did not work for me, probably because I do not have the proxy server set up (my environment has proxy server/firewall restrictions).

Resource Database & System View

Since the SQL Server 2005 release, MSFT has discontinued the traditional system tables such as sysdatabases, which were originally inherited from Sybase. They are replaced by system views. The document here illustrates the mapping between the old-style system tables and the new system views.

Along with the system view changes, Microsoft also said there would be a new hidden database shipped with SQL Server. Of course, for some of us, that statement is not enough to satisfy us; we like to poke around to see what it is. The default databases upon SQL Server installation are master, msdb, tempdb, model, and the resource database.

The resource database stores the system objects. Whenever SQL Server is upgraded or a fix pack or service pack is applied, the resource database is upgraded. Some say we can roll back a fix pack by replacing the resource database files with the older versions; however, I personally have not tried it myself, and I have not found much information on the internet from anyone who has.

The resource database cannot be viewed from SSMS or sp_helpdb, and even sys.databases has been explicitly designed to exclude it.


There are a few ways we can get our hands on it.
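One easy starting point is SERVERPROPERTY, which reports the resource database version and last update time:

```sql
SELECT SERVERPROPERTY('ResourceVersion')            AS ResourceVersion,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdate;
```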

How to fix the 262 SHOWPLAN permission denied error

One common technique for troubleshooting a permission issue is looking at the query plan. As a system DBA, I may want to give my developers permission to check the query plan instead of doing everything for them. If they do not possess the proper permission, they get error 262.


A simple GRANT statement can fix the issue.
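A sketch of the grant, assuming a hypothetical database user named DevUser:

```sql
USE AdventureWorks;
GRANT SHOWPLAN TO [DevUser];  -- DevUser is a placeholder name
```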


Saturday, November 19, 2011

Endpoints

After SQL Server installation, MSSQL creates the five endpoints below by default.

  • Dedicated Admin Connection
  • TSQL Local Machine: Shared memory
  • TSQL Named Pipes: Named Pipes
  • TSQL Default TCP: TCP/IP
  • TSQL Default VIA: VIA
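The configured endpoints can be listed from sys.endpoints:

```sql
SELECT name, protocol_desc, type_desc, state_desc
FROM sys.endpoints;
```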

These endpoints are based on TDS (Tabular Data Stream), an application-layer protocol used to transfer data between a database server and a client. It was initially designed and developed by Sybase; when MSFT adopted the Sybase server code, it was ported over to MS SQL Server, and MSFT has made many changes to it since. You can also find open-source implementations based on the TDS spec, such as FreeTDS and jTDS.

How to solve Error: 18456, Severity: 14, State: 12. "Login-based server access validation failed with an infrastructure error. Check for previous errors."

Recently, I encountered this login failure for my SQL Server account.
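State 12 typically means the login authenticated but lacks server access. A hedged sketch of one common fix, assuming a hypothetical login name:

```sql
GRANT CONNECT SQL TO [MyDomain\MyLogin];  -- placeholder login name
```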


Friday, November 18, 2011

Transparent Data Encryption (TDE)

TDE provides encryption of the database and log files themselves. Even if others get their hands on your database file or backup file, without the key they still cannot get to the information stored in the database.

Once TDE is enabled, the SQL Server engine performs real-time I/O encryption and decryption of the data and log files. The application does not require any changes.
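A minimal sketch of enabling TDE; the certificate name and password are placeholders, and the certificate should be backed up before relying on this:

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE AdventureWorks;
-- The database encryption key is protected by the server certificate.
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE AdventureWorks SET ENCRYPTION ON;
```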

Thursday, November 17, 2011

Migrating Logins between SQL Servers

MSFT does not ship a stored procedure to move logins from one SQL Server instance to another. However, the steps are described in KB 918992 here.

The stored procedure is called sp_help_revlogin.

The steps described in the KB article create two stored procedures: sp_hexadecimal and sp_help_revlogin.

sp_hexadecimal converts a numeric value to hexadecimal; it is called inside sp_help_revlogin.
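Once both procedures are created in master, running the outer one prints CREATE LOGIN statements (with the original SIDs and password hashes) that can be replayed on the target instance:

```sql
EXEC sp_help_revlogin;
```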

Find out Backup & Restore History from MSDB

We can find backup and restore history in the two msdb tables below.

  • backupset: backup history
  • restorehistory: restore history

The query below lists all database backup history.

  SELECT a.name, b.type, b.backup_finish_date AS LastSuccessfulBackup
  FROM master..sysdatabases a
  LEFT OUTER JOIN msdb..backupset b ON a.name = b.database_name
  ORDER BY a.name, b.type

Transaction log space

  • The log file is always used sequentially; therefore, putting multiple log files on different disk drives will not improve performance or a WRITELOG wait issue.
  • Log file growths must be zeroed out first and cannot use instant file initialization. Therefore, shrinking the log file too often is not good, because the file will just have to grow again.
  • BACKUP LOG only releases the free VLFs.


How big should the log file be?

As big as it needs to be. Here are some guidelines:

  • Single largest transaction
  • Largest index or index rebuild

Because the number of VLFs can affect performance, we should set the initial log size to a reasonable value and set the growth increment to a fixed size, so the VLFs come out at a consistent size.

For example, set the initial size to 4GB (256MB per VLF) and extend by another 4GB after that.


  • chunks up to 64MB = 4 VLFs
  • chunks larger than 64MB and up to 1GB = 8 VLFs
  • chunks larger than 1GB = 16 VLFs
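To see how many VLFs a log currently has, DBCC LOGINFO returns one row per VLF:

```sql
USE AdventureWorks;
DBCC LOGINFO;  -- row count = number of VLFs
```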


Check log space usage

To see the log file space usage, we can use DBCC SQLPERF; see the example below.
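For example:

```sql
DBCC SQLPERF(LOGSPACE);  -- log size and percent used for every database
```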



Or you can use the query below to check the space usage for a single database.
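A sketch using FILEPROPERTY, run inside the database in question:

```sql
SELECT name,
       size/128.0 AS LogSizeMB,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS LogSpaceUsedMB
FROM sys.database_files
WHERE type_desc = 'LOG';
```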

Tuesday, November 15, 2011

Database compression

Data compression is a SQL Server 2008 Enterprise-only feature. MSSQL offers two types of data compression: row compression and page compression.

To estimate how much space you can save, run sp_estimate_data_compression_savings. The stored procedure takes five parameters: schema name, object name, index ID, partition number, and compression type (NONE, ROW, or PAGE).
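For example, to estimate PAGE compression savings for every index and partition of a table:

```sql
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'Sales',
    @object_name      = 'SalesOrderDetail',
    @index_id         = NULL,   -- NULL = all indexes
    @partition_number = NULL,   -- NULL = all partitions
    @data_compression = 'PAGE';
```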

Saturday, November 12, 2011

Ranking Functions: ROW_NUMBER, RANK, DENSE_RANK and NTILE


The syntax is similar for all four ranking functions.

  [ROW_NUMBER() | RANK() | DENSE_RANK() | NTILE()] OVER ( [ <partition_by_clause> ] <order_by_clause> )

PARTITION BY: used to group the result set.

  • ROW_NUMBER: returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition.
  • RANK: returns the rank of each row within the partition of a result set. The rank of a row is one plus the number of ranks that come before the row in question.
  • DENSE_RANK: similar to RANK but without gaps in the ranking values.
  • NTILE: distributes the rows in an ordered partition into a specified number of groups. The groups are numbered starting at one; for each row, NTILE returns the number of the group to which the row belongs.
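All four side by side over the same ordering:

```sql
SELECT ProductID, OrderQty,
       ROW_NUMBER() OVER (ORDER BY OrderQty DESC) AS RowNum,
       RANK()       OVER (ORDER BY OrderQty DESC) AS Rnk,
       DENSE_RANK() OVER (ORDER BY OrderQty DESC) AS DenseRnk,
       NTILE(4)     OVER (ORDER BY OrderQty DESC) AS Quartile
FROM sales.SalesOrderDetail;
```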

Friday, November 11, 2011

Notes about DDL Triggers

Get the trigger definition.

  SELECT st.name, definition
  FROM sys.sql_modules sm join sys.triggers st
  on sm.object_id = st.object_id

  • RAISERROR(... WITH LOG) can write information to the event log.
  • PRINT outputs a string to the console.

Log the trigger event to a table.
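A hedged sketch of one way to do it: a hypothetical audit table plus a database-scoped DDL trigger that stores the EVENTDATA() XML:

```sql
-- dbo.DDLLog and trgLogDDL are placeholder names.
CREATE TABLE dbo.DDLLog
(EventTime datetime DEFAULT getdate(), EventData xml);
GO
CREATE TRIGGER trgLogDDL ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
    INSERT INTO dbo.DDLLog (EventData) VALUES (EVENTDATA());
GO
```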

Partition Table Example


Prepare the sample database for testing.

In the real world, you would put the partitions in different filegroups and on different drives for more performance and flexibility.

In this example, I am going to create two tables, TransactionHistory and TransactionHistoryArchive. TransactionHistory will have 13 partitions and TransactionHistoryArchive will have 2 partitions. After the data is loaded into the tables, I am going to move one of the TransactionHistory partitions to TransactionHistoryArchive.
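A hedged sketch of the moving pieces; the names and boundary values are illustrative, and a real setup would map partitions to separate filegroups:

```sql
CREATE PARTITION FUNCTION pfMonthly (datetime)
    AS RANGE RIGHT FOR VALUES ('20110101', '20110201', '20110301');
GO
CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly ALL TO ([PRIMARY]);
GO
-- After loading, move a partition between tables as a metadata-only operation:
ALTER TABLE TransactionHistory
    SWITCH PARTITION 2 TO TransactionHistoryArchive PARTITION 2;
```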

Tuesday, November 8, 2011

Shrink Tempdb

Tempdb has played a heavier role since SQL Server 2005. If we do not control or monitor it well, it may grow and eventually fill up the disk.
Tempdb may not shrink if an internal object is still using it. As Kalen Delaney suggests in "Microsoft SQL Server 2008 Internals", it is better to shrink the individual files instead of shrinking tempdb as a whole. She also suggests in the book that it is better to use ALTER DATABASE to change the tempdb size and bounce MSSQL.

  ALTER DATABASE tempdb MODIFY FILE
     (NAME = 'tempdev', SIZE = target_size_in_MB)

ALTER DATABASE won't change the current tempdb size, only its initial size; tempdb is recreated at the initial size after the server is bounced.

Monday, November 7, 2011

Shrink the database file: DBCC SHRINKFILE

Recently I came across the need to shrink a database file to avoid filling up the file system.

The first thought that comes to mind is DBCC SHRINKFILE.

DBCC SHRINKFILE with the TRUNCATEONLY option only removes the free pages after the last allocated extent. It does not move any pages, so it is generally harmless and very quick.
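For example:

```sql
DBCC SHRINKFILE (AdventureWorks2008R2_Data, TRUNCATEONLY);
```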

If that is not enough, we can use DBCC SHRINKFILE(filename, target_size).

  DBCC SHRINKFILE (AdventureWorks2008R2_Data,300)

This performs page movement, so it is slower and takes much longer. It can shrink the file smaller than its initial size, as long as there is free space within the file.

To check the free space within the file, we can run

  SELECT name ,size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB,size/128.0
  FROM sys.database_files;
  go


This information is different from what you get from sp_spaceused: sp_spaceused gives you the total size of log and data combined, while sys.database_files breaks it down by database file.


The DBCC SHRINKFILE process can be stopped at any time; any work that has completed is retained.

In many cases you may wonder how long it will take to complete; we can query sys.dm_exec_requests.

  SELECT session_id,start_time,status,command,DB_NAME(Database_id),wait_type
  ,percent_complete,DATEADD(ms,estimated_completion_time,GETDATE()) AS EstimatedEndTime,
  (estimated_completion_time/1000/60) AS EstimatedMinutesToEnd
  FROM sys.dm_exec_requests

DBCC SHRINKFILE does not preserve index fragmentation and can make it worse. It is suggested to check index fragmentation after you shrink the file and decide whether indexes need to be rebuilt or reorganized.

The query below checks index fragmentation for a given database and filegroup.

  SELECT i.data_space_id, OBJECT_NAME(dmi.object_id) AS TableName, dmi.index_id, AS IndexName,
         index_type_desc, alloc_unit_type_desc, avg_fragmentation_in_percent
  FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL, NULL) dmi
  JOIN sys.indexes i on dmi.object_id = i.object_id and dmi.index_id = i.index_id

Sunday, November 6, 2011

Performance Tuning: DMV 101

DMVs live in memory: they capture runtime information only and keep no historical data. Once the server is bounced, the information is lost and starts over.

DMV type

  1. dm_db_*: database and index information
  2. dm_exec_*: runtime activity, queries and query plans
  3. dm_io_*: I/O statistics
  4. dm_os_*: OS and hardware information



Evaluate whether an index is needed (the missing-index DMVs):

  SELECT *
  FROM
  (SELECT user_seeks * avg_total_user_cost * (avg_user_impact * 0.01) AS
  index_advantage, migs.* FROM sys.dm_db_missing_index_group_stats migs) AS migs_adv
  INNER JOIN sys.dm_db_missing_index_groups AS mig ON migs_adv.group_handle = mig.index_group_handle
  INNER JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle
  ORDER BY migs_adv.index_advantage
  GO



To get the query plan of the currently running SPIDs in XML format:

  SELECT * FROM sys.dm_exec_requests CROSS APPLY sys.dm_exec_query_plan(plan_handle)
      CROSS APPLY sys.dm_exec_sql_text(sql_handle)

To get query plans for past queries from the plan cache:

  SELECT * FROM sys.dm_exec_query_stats CROSS APPLY sys.dm_exec_query_plan(plan_handle)
      CROSS APPLY sys.dm_exec_sql_text(sql_handle)



Get the file-level I/O statistics for a given database:

  SELECT * FROM sys.dm_io_virtual_file_stats(DB_ID(N'AdventureWorks2008R2'),NULL);

The size columns (such as num_of_bytes_read and size_on_disk_bytes) are returned in bytes; divide by 8192 to convert to 8K pages.

Get the pending I/O requests:

  select * from sys.dm_io_pending_io_requests



sys.dm_os_windows_info shows OS information such as the service pack level. It is only available starting with SQL Server 2008 R2 SP1.

Show the server-wide wait stats (sys.dm_os_wait_stats is cumulative, not per SPID):

  select * from sys.dm_os_wait_stats


Wait stats accumulate since the last server restart or since they were last manually cleared; we can use DBCC SQLPERF to clear them.
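For example:

```sql
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
```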


Show the current worker processes and how long they have been running:

  SELECT
      t1.session_id,
      CONVERT(varchar(10), t1.status) AS status,
      CONVERT(varchar(15), t1.command) AS command,
      CONVERT(varchar(10), t2.state) AS worker_state,
      w_suspended =
        CASE t2.wait_started_ms_ticks
          WHEN 0 THEN 0
          ELSE
            t3.ms_ticks - t2.wait_started_ms_ticks
        END,
      w_runnable =
        CASE t2.wait_resumed_ms_ticks
          WHEN 0 THEN 0
          ELSE
            t3.ms_ticks - t2.wait_resumed_ms_ticks
        END
    FROM sys.dm_exec_requests AS t1
    INNER JOIN sys.dm_os_workers AS t2
      ON t2.task_address = t1.task_address
    CROSS JOIN sys.dm_os_sys_info AS t3
    WHERE t1.scheduler_id IS NOT NULL;


Show OS feedback information (ring buffers):

  select * from sys.dm_os_ring_buffers


Something about the page

SQL Server data page with row offsets

SQL Server uses 8K pages (8192 bytes); only 8060 bytes of each page can be used to store rows.

Each table's pages belong either to a clustered index or to a heap.

The clustered index's index id is 1; a heap's index id is 0.


The index-id option of DBCC IND can be one of the following:

  • index ID of a non-clustered index (values greater than 1)
  • 1 = clustered index
  • 0 = in-row data pages and in-row IAM pages (heap)
  • -1 = all pages of all indexes, including LOB (large object) pages and row-overflow pages
  • -2 = all IAM pages


  DBCC IND ('AdventureWorks','dbo.AWBuildVersion',1)


When the IAM info is NULL, the page is the IAM page itself. Page type 10 also means it is an IAM page.

Page type:

  • 1: data page
  • 2: index page
  • 3, 4: text pages
  • 8: GAM page
  • 9: SGAM page
  • 10: IAM page
  • 11: PFS page

To examine in detail what is inside a page, use DBCC PAGE.


There are 4 print options:

  • 0: just the page header
  • 1: page header + per-row hex dump + slot array
  • 2: page header + whole-page hex dump
  • 3: page header + per-row data

  DBCC TRACEON(3604);
  DBCC PAGE ('AdventureWorks',1,719,0)

Option 3 is the most human-readable format. Trace flag 3604 sends the DBCC output to the console instead of writing it to the error log.



Inside the Storage Engine: Using DBCC PAGE and DBCC IND to find out if page splits ever roll back

Saturday, November 5, 2011

Database Mirroring

The database must be in the FULL recovery model and cannot have a FILESTREAM filegroup.

Database mirroring requires Standard or Enterprise Edition on the principal and mirror (asynchronous high-performance mode is Enterprise only). The witness server can be Express.

The default port for the SQL Server mirroring endpoint is 5022 but it can be changed to any unused port.

A mirroring session that includes a witness

The example below sets up database mirroring in high-safety mode with automatic failover, also called synchronous database mirroring (high-safety mode).

  • SG1 is the principal SQL Server host.
  • SG21 is the Mirror SQL Server host.
  • SG13 is the Witness SQL Server host.

Restore Database

Restore the database backup from the principal onto the mirror WITH NORECOVERY, then restore a log backup from the principal onto the mirror WITH NORECOVERY.

  RESTORE DATABASE [AdventureWorks] from DISK = '\\SG11\C$\DUMP\AdventureWorks_20111105_3.db' with replace,norecovery
  RESTORE LOG [AdventureWorks] from DISK = '\\SG11\C$\DUMP\AdventureWorks_20111105_3.log' with norecovery

Create the endpoints on the principal, mirror, and witness servers

You will need to change the port numbers if they are on the same server. However, it is recommended to put them on different servers (for high availability, of course).

Create end point on both Principal and Mirror server

  -- the endpoint name (here: mirror) is arbitrary
  CREATE ENDPOINT mirror
      STATE = STARTED
      AS TCP ( LISTENER_PORT = 7051 )
      FOR DATABASE_MIRRORING (ROLE = PARTNER);
  GO

Create the end point on the witness server

  CREATE ENDPOINT mirror
      STATE = STARTED
      AS TCP ( LISTENER_PORT = 7051 )
      FOR DATABASE_MIRRORING (ROLE = WITNESS);
  GO


Set up the mirror database

Run the following on the mirror server.

  ALTER DATABASE AdventureWorks
     SET PARTNER = 'TCP://SG11:7051'

Set up the principal and witness server

Run the following on the principal server.

  ALTER DATABASE AdventureWorks
     SET PARTNER = 'TCP://SG21:7051'
  ALTER DATABASE AdventureWorks
     SET WITNESS = 'TCP://SG13:7051'

A SQL Agent job will be created on the mirror server; we can use msdb.dbo.sp_help_job to review the job details.

Use the views below to check the mirroring database and endpoint configuration. They can be run on the principal, mirror, or witness server.

  use master;
  SELECT name, port FROM sys.tcp_endpoints;
  select * from sys.database_mirroring_endpoints;
  SELECT role_desc, state_desc FROM sys.database_mirroring_endpoints;
  select * from sys.database_mirroring_witnesses;
  select * from sys.database_mirroring;


Access data on the mirror database

Since the mirror database is in the restoring/mirroring state, we cannot access its data directly. As an alternative, we can create a database snapshot and query the snapshot.

  CREATE DATABASE [S_AdventureWorks_20111105]
  ON (NAME='AdventureWorks_data' ,FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSG21\MSSQL\DATA\')
  AS SNAPSHOT of  [AdventureWorks]

Manual failover

In synchronous database mirroring (high-safety mode) with a witness, failover happens automatically when the principal database becomes unavailable. If you set up mirroring in high-performance mode, failover must be done manually using FORCE_SERVICE_ALLOW_DATA_LOSS.

To manually fail over in high-safety mode, run the following on the principal server.

  Alter database [AdventureWorks] set partner failover

Once it is done, selecting from sys.database_mirroring shows that the original principal database has become the "Mirror" and the old mirror has become the "Principal".

Remove the witness

At any time during mirroring, we can disable the witness by running the command below.
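The command, run on either partner:

```sql
ALTER DATABASE AdventureWorks SET WITNESS OFF;
```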


Remove the database mirroring

  • Remove the mirroring database session

Run this on either partner server:
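The session is removed with SET PARTNER OFF:

```sql
ALTER DATABASE AdventureWorks SET PARTNER OFF;
```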

  • Recover the mirroring database (optional)

If any snapshots were created, they need to be dropped before we can recover the database.
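After dropping the snapshots, the former mirror can be brought online:

```sql
RESTORE DATABASE AdventureWorks WITH RECOVERY;
```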

  • Remove the mirroring monitor agent job (optional)

  msdb.dbo.sp_delete_job @job_name='Database Mirroring Monitor Job'

  • Remove the endpoint (optional)

  DROP ENDPOINT mirror

Database Mirroring Monitor

We can use the Database Mirroring Monitor to check the database mirroring status.



A FILESTREAM filegroup cannot be created on the principal server. Database mirroring cannot be configured for a database that contains FILESTREAM filegroups.


Database Mirroring Overview

Log Shipping

Configuration showing backup, copy, & restore jobs

  • Log shipping will be deprecated in a future MSSQL Server release; users should consider Database Mirroring as a replacement.
  • Log shipping is not supported in SQL Express.
  • A database set up for log shipping must use the FULL or BULK-LOGGED recovery model.

Log shipping high-level overview

  1. Restore a full backup of the primary database to the secondary. Two modes can be chosen for the secondary database: standby or norecovery. In standby mode the secondary database can serve reporting purposes; however, standby mode cannot be used as a high-availability solution.
  2. Back up the log on the primary database.
  3. Restore the log to the secondary database.

Log shipping involves 4 jobs:

  1. Backup job: backs up the log on the primary database.
  2. Copy job: copies the transaction log dump file to the share.
  3. Restore job: applies the transaction log dump to the secondary database.
  4. Alert job: only created when a monitor server is configured.

SSMS can be used to generate the Script.

Setup the log backup jobs

The example below sets up the log backup job to run every 5 minutes.

  DECLARE @LS_BackupJobId    AS uniqueidentifier
  DECLARE @LS_PrimaryId    AS uniqueidentifier
  DECLARE @SP_Add_RetCode    As int

  EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database
          @database = N'AdventureWorks'
          ,@backup_directory = N'C:\ls_backup'
          ,@backup_share = N'\\Sg11\ls_backup'
          ,@backup_job_name = N'LSBackup_AdventureWorks'
          ,@backup_retention_period = 4320
          ,@backup_compression = 2
          ,@backup_threshold = 60
          ,@threshold_alert_enabled = 1
          ,@history_retention_period = 5760
          ,@backup_job_id = @LS_BackupJobId OUTPUT
          ,@primary_id = @LS_PrimaryId OUTPUT
          ,@overwrite = 1

  IF (@@ERROR = 0 AND @SP_Add_RetCode = 0)
  BEGIN

  DECLARE @LS_BackUpScheduleUID    As uniqueidentifier
  DECLARE @LS_BackUpScheduleID    AS int

  EXEC msdb.dbo.sp_add_schedule
          @schedule_name =N'LSBackupSchedule_sg11\MSSQL2008R21'
          ,@enabled = 1
          ,@freq_type = 4
          ,@freq_interval = 1
          ,@freq_subday_type = 4
          ,@freq_subday_interval = 5
          ,@freq_recurrence_factor = 0
          ,@active_start_date = 20111104
          ,@active_end_date = 99991231
          ,@active_start_time = 0
          ,@active_end_time = 235900
          ,@schedule_uid = @LS_BackUpScheduleUID OUTPUT
          ,@schedule_id = @LS_BackUpScheduleID OUTPUT

  EXEC msdb.dbo.sp_attach_schedule
          @job_id = @LS_BackupJobId
          ,@schedule_id = @LS_BackUpScheduleID

  EXEC msdb.dbo.sp_update_job
          @job_id = @LS_BackupJobId
          ,@enabled = 1

  END

We can use msdb.dbo.sp_help_job to review the job details.

Setup the alert job

  EXEC master.dbo.sp_add_log_shipping_alert_job

Enable the Log shipping on the primary database

  EXEC master.dbo.sp_add_log_shipping_primary_secondary
          @primary_database = N'AdventureWorks'
          ,@secondary_server = N'SG13\SG13MSSQL'
          ,@secondary_database = N'AdventureWorks'
          ,@overwrite = 1

Create the copy and restore job on the secondary database

  DECLARE @LS_Secondary__CopyJobId    AS uniqueidentifier
  DECLARE @LS_Secondary__RestoreJobId    AS uniqueidentifier
  DECLARE @LS_Secondary__SecondaryId    AS uniqueidentifier
  DECLARE @LS_Add_RetCode    As int

  EXEC @LS_Add_RetCode = master.dbo.sp_add_log_shipping_secondary_primary
          @primary_server = N'sg11\MSSQL2008R2'
          ,@primary_database = N'AdventureWorks'
          ,@backup_source_directory = N'\\Sg11\ls_backup'
          ,@backup_destination_directory = N'C:\LS_COPY'
          ,@copy_job_name = N'LSCopy_sg11\MSSQL2008R2_AdventureWorks'
          ,@restore_job_name = N'LSRestore_sg11\MSSQL2008R2_AdventureWorks'
          ,@file_retention_period = 4320
          ,@overwrite = 1
          ,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT
          ,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT
          ,@secondary_id = @LS_Secondary__SecondaryId OUTPUT

  IF (@@ERROR = 0 AND @LS_Add_RetCode = 0)
  BEGIN

  DECLARE @LS_SecondaryCopyJobScheduleUID    As uniqueidentifier
  DECLARE @LS_SecondaryCopyJobScheduleID    AS int

  EXEC msdb.dbo.sp_add_schedule
          @schedule_name =N'DefaultCopyJobSchedule'
          ,@enabled = 1
          ,@freq_type = 4
          ,@freq_interval = 1
          ,@freq_subday_type = 4
          ,@freq_subday_interval = 15
          ,@freq_recurrence_factor = 0
          ,@active_start_date = 20111104
          ,@active_end_date = 99991231
          ,@active_start_time = 0
          ,@active_end_time = 235900
          ,@schedule_uid = @LS_SecondaryCopyJobScheduleUID OUTPUT
          ,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT

  EXEC msdb.dbo.sp_attach_schedule
          @job_id = @LS_Secondary__CopyJobId
          ,@schedule_id = @LS_SecondaryCopyJobScheduleID

  DECLARE @LS_SecondaryRestoreJobScheduleUID    As uniqueidentifier
  DECLARE @LS_SecondaryRestoreJobScheduleID    AS int

  EXEC msdb.dbo.sp_add_schedule
          @schedule_name =N'DefaultRestoreJobSchedule'
          ,@enabled = 1
          ,@freq_type = 4
          ,@freq_interval = 1
          ,@freq_subday_type = 4
          ,@freq_subday_interval = 3
          ,@freq_recurrence_factor = 0
          ,@active_start_date = 20111104
          ,@active_end_date = 99991231
          ,@active_start_time = 0
          ,@active_end_time = 235900
          ,@schedule_uid = @LS_SecondaryRestoreJobScheduleUID OUTPUT
          ,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT

  EXEC msdb.dbo.sp_attach_schedule
          @job_id = @LS_Secondary__RestoreJobId
          ,@schedule_id = @LS_SecondaryRestoreJobScheduleID

  END

  DECLARE @LS_Add_RetCode2    As int

  IF (@@ERROR = 0 AND @LS_Add_RetCode = 0)
  BEGIN

  EXEC @LS_Add_RetCode2 = master.dbo.sp_add_log_shipping_secondary_database
          @secondary_database = N'AdventureWorks'
          ,@primary_server = N'sg11\MSSQL2008R2'
          ,@primary_database = N'AdventureWorks'
          ,@restore_delay = 0
          ,@restore_mode = 0
          ,@disconnect_users = 0
          ,@restore_threshold = 6
          ,@threshold_alert_enabled = 1
          ,@history_retention_period = 5760
          ,@overwrite = 1

  END

  IF (@@error = 0 AND @LS_Add_RetCode = 0)
  BEGIN

  EXEC msdb.dbo.sp_update_job
          @job_id = @LS_Secondary__CopyJobId
          ,@enabled = 1

  EXEC msdb.dbo.sp_update_job
          @job_id = @LS_Secondary__RestoreJobId
          ,@enabled = 1

  END

Before the log shipping jobs actually start working, we need to restore the database from the primary to the secondary with the NORECOVERY option.

How to check which databases have log shipping set up?

Query msdb.dbo.log_shipping_primary_databases, or run sp_help_log_shipping_primary_database:

  Use msdb;
  select * from log_shipping_primary_databases
  go

  use master
  go
  sp_help_log_shipping_primary_database  @database=N'AdventureWorks'


More stored procedures and system tables can be found here.


Remove Log shipping

On the primary server

On the primary server, use sp_help_log_shipping_primary_secondary to examine the log shipping secondary database setup, and execute sp_delete_log_shipping_primary_secondary to delete the information about the secondary database from the primary server.

  use master;
  go
  sp_help_log_shipping_primary_secondary [AdventureWorks]
  go
  sp_delete_log_shipping_primary_secondary @primary_database= [AdventureWorks] , @secondary_database = [AdventureWorks],@secondary_server = [SG13\SG13MSSQL]
  go

Use sp_help_log_shipping_primary_database to examine the primary database setup, and use sp_delete_log_shipping_primary_database to remove the primary database from log shipping. This also removes the log backup job.

  use master;
  go
  sp_help_log_shipping_primary_database [AdventureWorks]
  go
  sp_delete_log_shipping_primary_database [AdventureWorks]

On the secondary server, use sp_help_log_shipping_secondary_database to examine the secondary setup and sp_delete_log_shipping_secondary_database to remove it. This also removes the restore job and copy job.

  sp_help_log_shipping_secondary_database [AdventureWorks]
  go
  sp_delete_log_shipping_secondary_database @secondary_database = [AdventureWorks]


Why can't I find the log shipping option in the database properties within SSMS?

Log shipping and database mirroring are only supported in non-Express editions; you won't find the option in SQL Express SSMS.

From SQL Express SSMS

From SQL Server Enterprise SSMS




Log Shipping Overview


SQL Panda Copyright © 2011 - |- Template created by O Pregador - |- Powered by Blogger Templates