>Exchange Server 2003 Remote Wipe/MobileAdmin problem

>When you deploy Windows Mobile in an Exchange environment, one of the things that you may want to configure is remote wipe. This enables administrators to wipe a device over the air, which is very helpful in cases where a Windows Mobile device is lost or stolen and you simply want to protect company-related data like emails and attachments. While I was testing a device against an Exchange installation, I encountered an error on the site which does the remote wipe.

Server Error in '/MobileAdmin' Application.

The remote server returned an error: (403) Forbidden.
Description: An unhandled exception occurred during the execution of the
current web request. Please review the stack trace for more information
about the error and where it originated in the code.

Exception Details: System.Net.WebException: The remote server returned an
error: (403) Forbidden.

Now, from the looks of it, this seems like a permission/security error on the site: either a web application permission problem or something else. I found a site which solved my problem. It basically recommends that you uncheck the “Require secure channel (SSL)” option on the Exadmin virtual directory. This did the trick. After that, I was able to do a remote wipe on the device I was testing. The only worry I have is that this web application is no longer secured unless you block access from the public Internet.

>It’s officially unofficial – DBCC PAGE

>If you search for information about the DBCC PAGE command, the only official Microsoft document you’ll find refers to a SQL Server version older than v7. One blog post I found was from Paul Randal of the SQL Server Storage Engine Team, which explains how to use DBCC PAGE in conjunction with DBCC CHECKDB. DBCC PAGE is used to obtain the data contained in a SQL Server page structure. Before you can really take advantage of this tool, you need to run DBCC TRACEON(3604) so that the output is sent to the client.
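Since the command is undocumented, its syntax is worth sketching. The database, file and page numbers below are only illustrative:

```sql
-- redirect DBCC PAGE output to the client session instead of the error log
DBCC TRACEON(3604)

-- DBCC PAGE ('database' | database_id, file_id, page_id, print_option)
-- print option 3 dumps the page header plus a per-row interpretation
DBCC PAGE('Northwind', 1, 100, 3)
```

Being undocumented, the command and its output format can change between versions, so treat it as a troubleshooting aid rather than something to build processes on.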

>Your poor-man’s SQL Server Log Shipping

Transaction log shipping is a disaster recovery (sometimes called data recovery) option in SQL Server where you generate transaction log backups on the source database, copy them over to a remote SQL Server instance and restore them in read-only, standby mode. This feature is available in the Enterprise Edition of SQL Server 2000, while SQL Server 2005 has it even in the Standard and Workgroup Editions. Those using other editions have no option except to do it outside of the “supported” scenarios. It is important to understand what transaction log shipping does so that we can come up with a process which implements the technology. Notice that I said “process,” as this is more important than the technology itself. Transaction log shipping consists of three steps:

  1. Backup the transaction log within a specified interval, say every 15 minutes
  2. Copy the transaction log backup from the primary server to the standby server
  3. Restore the copied transaction log backup while setting the database on read-only, standby mode

Understanding these steps will help us create an automated job which covers all of these processes. The first thing that you need to do is create a full backup of the database which you will be configuring for log shipping, and restore that backup on the standby SQL Server. Make sure that the database is restored in read-only, standby mode (WITH STANDBY), which ensures that we can restore additional transaction logs later in the process. After we have restored the database backup on the standby server, we are ready to configure log shipping. Here is a list of what we need for this process:

  1. ROBOCOPY – this is a command-line file replication tool available in the Windows Resource Kit tools. This copies files from a source to a destination and resumes interrupted replications. It also copies NTFS permissions.
  2. Shared folder – you should share the folder which will contain the transaction log backups and make sure that you have at least read-only access.
  3. Domain which will have the primary and standby servers as member servers
  4. Domain account which has dbo permissions on the database which you will be configuring for log shipping. We will use this account to copy and restore the transaction log backups from the primary server to the standby server.
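The initial synchronization described above can be sketched as follows (the file paths and names here are illustrative):

```sql
-- on the primary server: take a full backup of the database
BACKUP DATABASE Northwind
TO DISK = N'E:\LogShipFolder\Northwind\Northwind_Full.bak'

-- on the standby server: restore the full backup in read-only, standby mode
-- the undo file allows further transaction log backups to be applied later
RESTORE DATABASE Northwind
FROM DISK = N'E:\LogShipFolder\Northwind\Northwind_Full.bak'
WITH STANDBY = N'E:\LogShipFolder\UNDO\UNDO_Northwind.DAT'
```

Once this restore completes, the database stays readable on the standby while still being able to accept the log restores generated later in this process.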

Now, we’re ready to configure the database for log shipping. On the primary server, the first thing you need to do is create a database maintenance plan that generates transaction log backups. Make sure that you do not have any other transaction log backups running, as this will break the log sequence which is necessary to restore the transaction logs. This is as simple as going through the database maintenance plan wizard to create transaction log backups (unless you have MSDE to configure for log shipping, in which case we need to create a BACKUP LOG script specifically to do this). Next, we share the folder which will contain the transaction log backups. This should be accessible from the standby server, either via IP address or DNS name. On the standby server, copy the robocopy.exe file to a local folder. Then we’ll create a batch file that calls the robocopy.exe utility, passing the corresponding parameters. Let’s call the batch file LogShipping.bat. The batch file will contain the following command:

ROBOCOPY \\primary_server\shared_folder E:\LogShipFolder /COPY:DATSO /MIR

I have discussed ROBOCOPY in more detail in this blog post. This line is responsible for copying the transaction log backups from the primary server to the standby server and should be run on the standby server. Next, we add another line to the batch file to call a VBScript which is responsible for restoring the transaction log backups on the standby server based on time stamps. I have created a VBScript which accepts two parameters: the folder into which the transaction log backups were copied, and the name of the database. Here is the command to call the VBScript, which we will insert after the ROBOCOPY command.

RESTORE_LOG.vbs E:\LogShipFolder databaseName

The contents of the RESTORE_LOG.vbs script are shown below:

'This script does a custom log shipping job using ROBOCOPY and VBScript.
'FileSystemObject queries the specified folder for files created
'within the past 15 minutes or less, and the script generates a TSQL
'RESTORE LOG command which is executed after the ROBOCOPY step.
'Syntax: RESTORE_LOG.vbs folder databaseName

On Error Resume Next

Dim fso, folder, files, itemFiles, sFolder, strParentFolder, strDatabaseName
Dim objShell, objMyFile, a, b, str1, str2, str3, str4

'Read the two command-line parameters: backup folder and database name
strParentFolder = WScript.Arguments(0)
strDatabaseName = WScript.Arguments(1)

Set fso = CreateObject("Scripting.FileSystemObject")
Set objShell = CreateObject("WScript.Shell")

sFolder = strParentFolder & "\" & strDatabaseName

Set folder = fso.GetFolder(sFolder)
Set files = folder.Files

For Each itemFiles In files

    a = sFolder & "\" & itemFiles.Name

    'retrieve the file extension
    b = fso.GetExtensionName(a)

    'check if the file extension is TRN
    If UCase(b) = "TRN" Then

        'check the DateCreated attribute of the file against the current date/time
        If (DateDiff("n", itemFiles.DateCreated, Now) <= 15) Then

            'create the file to contain the generated script
            If (fso.FileExists("E:\LogShipFolder\scripts\SQL\" & strDatabaseName & ".sql")) Then
                fso.DeleteFile ("E:\LogShipFolder\scripts\SQL\" & strDatabaseName & ".sql")
            End If

            Set objMyFile = fso.CreateTextFile("E:\LogShipFolder\scripts\SQL\" & strDatabaseName & ".sql", True)

            str1 = "RESTORE LOG " & strDatabaseName
            str2 = "FROM DISK='" & a & "'"
            str3 = "WITH STANDBY='E:\LogShipFolder\UNDO\UNDO_" & strDatabaseName & "_ARCHIVE.DAT',"
            str4 = "DBO_ONLY"

            objMyFile.WriteLine (str1)
            objMyFile.WriteLine (str2)
            objMyFile.WriteLine (str3)
            objMyFile.WriteLine (str4)
            objMyFile.Close
            Set objMyFile = Nothing

            'run an OSQL command that executes the generated RESTORE LOG WITH STANDBY, DBO_ONLY script
            objShell.Run ("osql -SinstanceName -E -iE:\LogShipFolder\scripts\SQL\" & strDatabaseName & ".sql -oE:\LogShipFolder\scripts\SQL\" & strDatabaseName & "_results.txt")

        End If
    End If
Next

Set folder = Nothing
Set fso = Nothing
Set objShell = Nothing

The script is fairly self-explanatory with the comments. I am passing the folder location into which the transaction log backups are being copied, and the database name. Notice that I structure my file system in such a way as to keep everything in order:

  • E:\LogShipFolder – location of the transaction log backups, separated by database name
  • E:\LogShipFolder\UNDO – location of the undo files which I specify as part of the RESTORE LOG command. Each UNDO file is identified by the database name as its prefix
  • E:\LogShipFolder\scripts – location of all the VBScripts, batch files and EXE files I am using for this process. RESTORE_LOG.vbs is stored in this folder
  • E:\LogShipFolder\scripts\SQL – location of the SQL script files which RESTORE_LOG.vbs generates containing the RESTORE LOG command. Each SQL script file is executed by osql.exe (or sqlcmd.exe for SQL Server 2005), the command-line utility for SQL Server. This folder also stores the results file which I generate using the osql.exe command-line utility. The results file is simply for record-keeping and/or troubleshooting purposes

After creating this VBScript and adding a line in the batch file to call it, you are ready to create a Scheduled Task in Windows to automate the process (you can also do this in SQL Server Agent as a job). Create a Scheduled Task in Windows which calls the LogShipping.bat batch file. Now, this is the tricky part. You need to make sure that the batch file executes after the transaction log backup has been generated on the primary server but before the next one starts. We also have to monitor this after implementation, as the transaction log backup time on the primary server may increase due to increased transactions, which means increased file size, resulting in increased file transfer time. To illustrate: if the transaction log backup on the primary server runs every 15 minutes starting from 12:00AM, the log backups will be generated in a 15-minute sequence (12:00AM, 12:15AM, 12:30AM, etc.). On the standby server, the scheduled task that calls the LogShipping.bat batch file should also run every 15 minutes, but offset after 12:00AM. To be safe, schedule it about two-thirds into the interval, say 12:10AM. This should be ample time for a medium-sized database with an average-to-high number of transactions to generate the log backups, leaving five minutes to copy and restore them; finding the appropriate timing will probably be a trial-and-error process. So, when the log backup job runs on the primary at 12:00AM, your Scheduled Task on the standby server should execute at 12:10AM. When you create the Scheduled Task, make sure that the domain account you use has the appropriate rights to copy from the shared folder and dbo privileges to restore the log backups. You also need to take into account password changes on this account, as they will cause the job to fail.

That’s about it. You have successfully implemented “unsupported” transaction log shipping, which you can do in all editions of SQL Server. But since it is “unsupported,” you do not get the easy-to-use features like the Log Shipping Monitor, threshold settings, etc. You also need to worry about the “what-if” time constraints, like what if the transaction log backup job for one database takes more than 15 minutes to complete due to large transactions (and of course the corresponding copy job will also take longer). The Log Shipping Monitor can be replaced by a TSQL query that retrieves the backup history for a specified database, although you need to write your own script to do this. A sample script to query the msdb database is given below:

SELECT [Name], Backup_Start_Date, Backup_Finish_Date, [Description], First_LSN, Last_LSN, *
FROM msdb.dbo.backupset AS a
JOIN msdb.dbo.backupmediafamily AS b ON a.media_set_id = b.media_set_id
WHERE database_name = 'databaseName'
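For the restore side, a similar query on the standby server against msdb.dbo.restorehistory can confirm that the log restores are succeeding. This is a sketch; substitute your own database name:

```sql
-- most recent restores applied on the standby server
SELECT destination_database_name, restore_date, restore_type
FROM msdb.dbo.restorehistory
WHERE destination_database_name = 'databaseName'
ORDER BY restore_date DESC
```

Comparing the latest restore_date against the expected 15-minute schedule is a crude but workable substitute for the Log Shipping Monitor.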

>I wish this was available years back – SQL Server 2005 Always On Technologies Part 2

>One of the most underestimated features of SQL Server is the FULL recovery model in the database options. Database recovery models determine how much data loss is acceptable in case of a failure and what types of backup and restore operations are allowed. This option can be traced back to earlier versions of SQL Server. We will not go into detail on the different recovery models, but it is best to know that a mission-critical database should be configured to use the FULL recovery model. A full description of the different recovery models for SQL Server 2000 (which is also applicable to 2005) can be found in this MSDN article. Let’s look at the same scenario we had in the first part of this series. The [Order Details] table in the Northwind database, for example, can be dropped successfully at any time even though referential integrity is enforced in the database (try running the DROP TABLE [Order Details] command and see what happens). Now, as I have mentioned, a mission-critical database should be configured with the FULL recovery model combined with regular backups, either full database backups or a combination of full with differential and/or transaction log backups. So, I am assuming that you have at least a full database backup taken on a regular basis.

  1. Contain the disaster
    The first thing that you need to do when a user accidentally drops a table (or any object) in your database is to contain the disaster. You can do this by setting the database option so that it can only be accessed by the dbo, either through SQL Server Management Studio or by running the ALTER DATABASE command:
    ALTER DATABASE Northwind
    SET RESTRICTED_USER
    WITH ROLLBACK IMMEDIATE

    The WITH ROLLBACK IMMEDIATE option in the ALTER DATABASE command specifies to rollback all active transactions immediately while setting the access mode to RESTRICTED_USER, which in this case is the dbo. This will prevent any other user from connecting to the database, of course, with the goal of getting the database back online with the least amount of downtime.

  2. Backup the transaction log
    Since you have configured your database to be in the FULL recovery model, you can back up what we call the “tail of the log,” or the active portion of the transaction log. This gives you the option to do a point-in-time restore to a time before the DROP TABLE statement was executed. Since the database is technically offline at this point, you do not need to issue the WITH NO_TRUNCATE option in your BACKUP LOG command:
    BACKUP LOG Northwind
    TO DISK = N'D:\NorthwindLOG.TRN'
    WITH NAME = N'Transaction Log Backup', STATS = 10

    You will need both your full database backup and this transaction log backup to recover the dropped table.

  3. Restore the database in RESTRICTED_USER mode from the full database backup and the transaction log backup
    Since you already have a full database backup and the tail-of-the-log backup, you can now restore your database to a point before the table was dropped. This is going to be a bit tricky, as you do not know exactly when the table was dropped (even if the developer called you up when it happened, it would probably take minutes to log on to the server, open up your favorite query tool, be it Query Analyzer or Management Studio, and go through steps 1 and 2 above). To make it a bit faster, just restore to a known good point in time to get the database back online as soon as possible, and worry about finding exactly when the table was dropped later.
    USE master

    RESTORE DATABASE Northwind
    FROM DISK = N'D:\NorthwindFull.bak'
    WITH NORECOVERY, RESTRICTED_USER

    RESTORE LOG Northwind
    FROM DISK = N'D:\NorthwindLOG.TRN'
    WITH RESTRICTED_USER,
    -- use a "known good" point in time
    STOPAT = '2007-09-12 10:13:03.420', RECOVERY

    We are assuming here that the [Order Details] table was dropped before September 12, 2007 10:14AM, which is why we are taking 10:13:03.420 as the point-in-time restore point. Remember that if we restore to a point where the table has already been dropped, we have to start the restore process again. But we are not done yet. We don’t know how many transactions were committed between 10:13:03.420 and 10:14:00.000. On highly transactional databases, say averaging around 500 transactions per second or higher, that comes out to around 28,500 transactions within that span of time. If a single transaction is worth about $0.05, you’ve lost about $1,425 within that span of time (and now you know why database engineers and administrators get paid well, especially in financial institutions). Notice that the database was restored with the RESTRICTED_USER option. This means that the database is still inaccessible to external users, because we are not done yet.

  4. Create a snapshot of the restored database and create a gap to store the recovered records later
    We create a snapshot of the restored database which we will use for analyzing the records that need to be restored after getting our database back online. Remember, we only restored the database to a good point in time, probably well before the table was dropped. That small difference is what we need to restore once the database is online. To create the database snapshot, we use the CREATE DATABASE command:
    CREATE DATABASE Northwind_RestorePointSnapshot
    ON ( NAME = N'Northwind',
    FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\Northwind_RestorePointSnapshot.mdf_snap')
    AS SNAPSHOT OF [Northwind]

    The snapshot should be in the same instance as the reference database. I will not be digging deeper into database snapshots in this entry but will probably spend some time in the future discussing database snapshots in SQL Server 2005. To create a gap in the [Order Details] table to store the recovered records later, we need to execute a DBCC CHECKIDENT command on the [Order Details] table. This is assuming we have an IDENTITY column defined in the table, which is usually the case (although the [Order Details] table in the Northwind database does not have an identity column, let’s assume it does. Or better yet, add one; I added an identity column named OrdDetID just for this purpose).
    USE Northwind
    -- returns the current identity value of the table
    DBCC CHECKIDENT ('[Order Details]')
    -- we will use a value big enough to insert the recovered records later
    DBCC CHECKIDENT ('[Order Details]', RESEED, 4200000)

    The RESEED option specifies that the current identity value should be changed. Let’s say we get a value of 4000000 from the first DBCC CHECKIDENT command; reseeding to 4200000 creates a gap of about 200000 identity values for inserting the recovered records later. We don’t really care how big the gap is; what’s more important is that we have enough space to insert the records and we can get the database back online. The second DBCC CHECKIDENT command is executed on the original Northwind database, as this is where we need to create the gap. Once we’re done with this, we can get the database back online.

  5. Get the database back online
    By now, we are ready to get the database back online. We can run the ALTER DATABASE command or use SQL Server Management Studio to do this.
    USE master

    ALTER DATABASE Northwind
    SET MULTI_USER

    Now that the database is back online, we can start investigating the damaged data.

  6. Restore the database with a different name for investigation
    Since the production database is up and running, we can now go back and spend all the time we have investigating the gap between what is and what should be in production. Remember that we only restored the production database to a known good point in time. We have to identify the records that are still in the tail of the log that need to be brought back to the production database. To restore the database backup under a different name:
    USE master

    -- the MOVE clauses must use the logical file names of the Northwind backup
    RESTORE DATABASE [Northwind_Investigate]
    FROM DISK = N'D:\Investigate\NorthwindBackup.bak'
    WITH MOVE N'Northwind' TO N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\NorthwindData_Investigate.mdf',
    MOVE N'Northwind_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\NorthwindLog_Investigate.ldf',
    STANDBY = N'D:\Investigate\Northwind_UNDO.bak',
    STOPAT = '2007-09-12 10:13:03.420', STATS

    RESTORE LOG [Northwind_Investigate]
    FROM DISK = N'D:\Investigate\NorthwindBackup.TRN'
    WITH STANDBY = N'D:\Investigate\Northwind_UNDO.bak',
    STOPAT = '2007-09-12 10:13:03.420', STATS

    This is exactly what we did in Step #3, except that we are now restoring to a different database. We run the RESTORE LOG command until we come to a point in time just before the table was dropped. This is tricky, as it requires patience and attention to detail. If you go beyond the point in time where the table was dropped, you will have to start over. Play around with the value in the STOPAT option, moving forward in per-second, per-minute or five-minute increments; you decide on the value. The ultimate test of whether or not the table has already been dropped is to execute a SELECT COUNT(*) FROM [Order Details] command. If this returns an error stating that the object does not exist, then you have restored to a point in time after the table was dropped and you have to repeat the process. Once you know you’re done, you’re ready to extract those records and push them to the production database.
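    One iteration of that search might look like the following sketch (the STOPAT time is illustrative):

    ```sql
    -- try a slightly later stop point; WITH STANDBY keeps the database
    -- readable so we can check the table between restores
    RESTORE LOG [Northwind_Investigate]
    FROM DISK = N'D:\Investigate\NorthwindBackup.TRN'
    WITH STANDBY = N'D:\Investigate\Northwind_UNDO.bak',
    STOPAT = '2007-09-12 10:13:30.000'

    -- if this errors because the object no longer exists, we went too far
    SELECT COUNT(*) FROM [Northwind_Investigate].dbo.[Order Details]
    ```

    Repeat, nudging STOPAT forward; once the SELECT fails, you have gone past the drop and, as noted above, have to redo the restore sequence up to the last good stop point.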

  7. Identify the records missing in the production database
    Since we have created a gap in the production database using the RESEED option, we can identify how many records are in the investigate database but not in production. We’ll use the value we specified in Step #4, 4200000.

    -- identify the number of records in the production database before the RESEED
    SELECT COUNT(*)
    FROM Northwind.dbo.[Order Details] WHERE OrdDetID < 4200000

    -- identify the number of records in the investigate database which are not in production
    -- if we get 4150000 from the first query, we use this value in the WHERE clause
    SELECT COUNT(*)
    FROM Northwind_Investigate.dbo.[Order Details] WHERE OrdDetID > 4150000

    We’ll get an idea of the number of records missing in the production database using this approach. There are a lot of ways to push these records back into the production database, and we’ll look at some of them.

  8. Push the records back from the investigate database to the production database
    One way is to do an INSERT..SELECT command from the investigate database to the production database. A sample query to do this is shown below.

    -- recover the missing rows
    SET IDENTITY_INSERT Northwind.dbo.[Order Details] ON

    -- an explicit column list is required while IDENTITY_INSERT is ON
    INSERT Northwind.dbo.[Order Details]
    (OrderID, ProductID, UnitPrice, Quantity, Discount, OrdDetID)
    SELECT OrderID, ProductID, UnitPrice, Quantity, Discount, OrdDetID
    FROM [Northwind_Investigate].dbo.[Order Details] AS R
    WHERE R.OrdDetID > 4150000

    SET IDENTITY_INSERT Northwind.dbo.[Order Details] OFF

    The problem with this approach is that you are assuming only INSERT statements were executed within the gap. In reality, we don’t really know. It could have been a mix of INSERTs, UPDATEs and DELETEs occurring within this span of time, which limits this approach. A more appropriate approach is to use the tablediff.exe utility. This utility is used to compare the data in two tables for non-convergence, and is particularly useful for troubleshooting non-convergence in a replication topology. Now you might be thinking otherwise, as we do not have a replicated environment here. Well, we can think of it as one so we can use this utility. It can generate the corresponding INSERT, UPDATE and DELETE statements to execute to bring replicated databases in sync. A full description of this utility is available on MSDN. Now, there are some limitations here. We don’t know exactly how many tables were affected in a specific disaster. In this particular scenario, we only dealt with a single table being dropped by a user. But what if more than one table was affected, which could easily be the case? Since the tablediff utility generates the SQL statements for only a single table, we can create a dynamic SQL statement which queries the sysobjects table (or sys.objects in SQL Server 2005) for the names of the tables and generates a tablediff command for each of them. This would be an appropriate approach to synchronize the investigate database and the production database. There is also a GUI tool created by the guys from SQLTeam.com for those who use this tool often.
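    As a sketch, a tablediff call for this scenario might look like the following. The server names and the output path are placeholders, and only the basic switches are shown:

    ```shell
    REM generate a T-SQL script that makes the destination (production) table
    REM match the source (investigate) table
    tablediff -sourceserver "STANDBYSERVER" -sourcedatabase "Northwind_Investigate" -sourcetable "Order Details" -destinationserver "PRODSERVER" -destinationdatabase "Northwind" -destinationtable "Order Details" -f "E:\Investigate\OrderDetails_fix.sql"
    ```

    Review the generated script before running it against production: since it converges the destination to the source, it can also emit DELETE statements for rows that were legitimately inserted after the database came back online.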

You have seen how to take disaster recovery (sometimes known as data recovery) to another level in this article. As I always emphasize, the process is more important than the technology, which is why I highlighted the process in detail. The technology portion can be extracted from just about anywhere. Having a disaster recovery process is very important. Document your process and hold a regular DR drill exercise to prepare you and your team for when disaster strikes. In the next article in this series, we will look at other disaster recovery options available in SQL Server 2005.

>I wish this was available years back – SQL Server 2005 Always On Technologies Part 1

>In response to requests made after my presentation at TechEd Asia 2007 on SQL Server 2005 Always On Technologies, I’ve made the content available on my blog.

As much as we want them, there are not many options in SQL Server 2000 to address disaster recovery caused by human error. And when we think about disaster recovery and high availability, we are often tempted to think about hardware solutions. With SQL Server 2005, we have a lot of options. This will be a series of blog entries discussing the high availability options in SQL Server 2005. I would like to highlight, though, that disaster recovery is more a process than a technology. We may have the technologies at our disposal, but if we do not have the right process, they are practically useless. Prevention is one option. Let’s take a look at a simple scenario. Imagine a developer accidentally connecting to a production database and dropping a table. If other tables have foreign key relationships referencing this table, the DROP TABLE command would be prohibited due to referential integrity. But what about a table that nothing references? Let’s say the Employees table references the Department table through a foreign key. We won’t be able to drop the Department table because Employees references it, but nobody is stopping us from dropping the Employees table. The best way to prevent this scenario in SQL Server 2005 is to implement DDL (data definition language) triggers. We are all familiar with DML triggers from previous versions of SQL Server. DDL triggers are used to audit and/or prevent DDL statements at the server level or at the database level. You can prevent accidental data deletion (i.e., trapping and rolling back an attempt to drop a table) and unexpected schema changes. A benefit of using a DDL trigger is that it forms part of the implicit transaction which caused it to fire and can, when necessary, roll back that statement.
The data related to the event can be trapped even if the statement that caused the trigger to fire has been rolled back. This is beneficial if we want to audit who did what at either the database or server level. A complete description of SQL Server 2005 DDL triggers can be found on the MSDN site.

Let me walk through a simple example of using a DDL trigger on the very familiar Northwind database (Northwind does not come with the default installation of SQL Server 2005; you can download it from the Microsoft Download Center). First, create a table which will contain the metadata of the DDL operation. We will use this table to store that information for audit purposes:

USE Northwind;

CREATE TABLE AuditDDLOperations (
LoginName sysname NOT NULL,
UserName sysname NOT NULL,
PostTime datetime NOT NULL,
EventType nvarchar(100) NOT NULL,
DDLOp nvarchar(2000) NOT NULL);

Then, we will create a database-level DDL trigger (the trigger name here is arbitrary).

CREATE TRIGGER trgAuditDDL
ON DATABASE
WITH ENCRYPTION
FOR DDL_DATABASE_LEVEL_EVENTS
AS
DECLARE @data xml
SET @data = EVENTDATA()

RAISERROR (N'DDL Operations are prohibited on this production database. Please contact IT Division for proper change control procedures.', 16, -1)

-- roll back the DDL statement, then record it; the INSERT runs after the
-- ROLLBACK so the audit row is not undone along with the DDL statement
ROLLBACK

INSERT AuditDDLOperations (LoginName, UserName, PostTime, EventType, DDLOp)
VALUES (SYSTEM_USER, CURRENT_USER, GETDATE(),
@data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
@data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'))

The CREATE TRIGGER statement consists of (1) the scope, which in this case is the database level, (2) the WITH ENCRYPTION option, which simply hides the trigger definition, (3) the event the trigger is associated with, in this case all DDL events at the database level, (4) the EVENTDATA() function, which returns information about server or database events, and (5) an INSERT statement to store the details from EVENTDATA() in the table we created in the previous script. Now, to test the trigger, simply run any DDL statement on the Northwind database:

CREATE TABLE Foo(col1 int);
DROP TABLE AuditDDLOperations;

The previous statements are very simple DDL statements: one to create a table and one to drop a table. Both will raise the error defined in the DDL trigger. To investigate the DDL statements and who executed them, we run a SELECT query against the audit table we created:

SELECT * FROM AuditDDLOperations;

The key thing here is implementing DDL triggers as part of your database implementation strategy to minimize disasters caused by human error. In the next part of this series, I’ll talk about how you can recover from a dropped table in cases where you don’t have a DDL trigger defined in your database.