Preloading View Model Data into a Knockout Model (Without the extra JSON call) in ASP.Net MVC

I’ve recently been playing around with Knockout and working it into my MVC apps.  It dramatically simplifies some of the complexities of keeping an interface up to date when making changes with JavaScript.

One of the very first issues I ran into was: how do I load some initial data from C#?  The advertised (and I assume easiest) way is to just make an extra GET request for JSON.  But what if I just need a few values to populate a dropdown, say a list of states?  Personally I think an extra JSON call is a waste when the data can just be embedded in the page itself.

So let’s take a look at the C# view model:

public class Viewmodel
{
     public IEnumerable<State> States {get; set;}
}

public class State
{
     public string Abbr {get; set;}
     public string Name {get; set;}
}

And our knockout model:

function AppViewModel() {
    var self = this;
    self.states = ko.observableArray();
}

ko.applyBindings(new AppViewModel());

Our goal is to populate the self.states member with the list from the C# view model.  Sure, you could write the following code:

function AppViewModel() {
    var self = this;
    self.states = ko.observableArray();
}

var model = new AppViewModel();
ko.applyBindings(model);

// url that gets the state list in json format
var url = 'some url';
$.getJSON(url, function(results) {
    $.each(results, function(index, item) {
        model.states.push(item);
    });
});
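
For reference, ‘some url’ would point at a controller action that returns the states as JSON.  Something along these lines works (GetStates() here is just a stand-in for however you load the list):

public JsonResult States()
{
    // GetStates() is a placeholder for however you load your State objects
    IEnumerable<State> states = GetStates();

    // project to the same { id, name } shape used in the Razor example later on
    return Json(
        states.Select(a => new { id = a.Abbr, name = a.Name }),
        JsonRequestBehavior.AllowGet);  // read-only data, so allow HTTP GET
}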

But wouldn’t it be nice if we could just initialize the array without having to go back to the server, especially for such a small list?  There are times when the above code makes sense, but for a small static list, consider the following:

var states = @Html.Raw(new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(Model.States.Select(a => new { id = a.Abbr, name = a.Name })));

function AppViewModel() {
    var self = this;
    self.states = ko.observableArray(states);
}

ko.applyBindings(new AppViewModel());

(Yes, it’s using Razor syntax, but it works just as easily with the older syntax.)  That first line takes the array of states and emits a raw JavaScript array, as if it were hard coded in the page, which in turn can be loaded straight into an observable array.  It works fantastically with small lists (it can work with large lists too, but consider page load times before deciding which way to go).

That’s about it.

Windows Azure Backups with Bacpac – Automating Exports with Data Sync enabled (Part 4)

Here comes the final installment of my Azure backup series.  This time we’ll be looking at doing backups on databases with Data Sync enabled.

Since I originally wrote the first post and planned out the four posts in the series, Microsoft has fixed the issue with restoring a database that has Data Sync enabled.  Best I can tell, there was an issue with the merge statements added as triggers to all the tables.  There are various reasons why you’d want to keep all of the Data Sync data in your backups, but the main one is that if you restore the database you’ll have a full set of data available (even if it’s stale) and won’t have to wait for Data Sync to repopulate it.

The solution presented below strips out all of the Data Sync data, so the resulting backup contains only your production data and not the data that is most likely already stored on your on-premises servers.  Removing the data can potentially make the backup/restore faster (it really depends on the amount of data) and, more importantly, makes the Data Sync re-provisioning a lot quicker.  In my tests with a bit less than 1 GB of data, performing the initial Data Sync against the stale data took more than 8 hours and still did not complete.  In comparison, a database with the Data Sync data stripped out resulted in a restore of less than an hour.  In the event of a catastrophic failure, an extra hour of delay in exchange for up-to-date information is my preference.

Backup With Data Sync

The function itself is quite long (roughly 150 lines of code), but essentially it goes through and removes the tables, triggers, types and schema that are added when you provision a database for Data Sync.  The link below will take you right to the line where the function starts.

https://github.com/anlai/AzureDbBackup/blob/master/AzureBackup/AzureStorageService.cs#L236
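
I won’t reproduce all ~150 lines here, but the approach is roughly the following sketch.  The “DataSync” schema and the “_dss_” trigger naming are assumptions on my part, so treat the linked source as the authority:

// Rough sketch only -- see the linked AzureStorageService.cs for the real RemoveDataSync.
// Requires System.Data.SqlClient and System.Collections.Generic.
private void RemoveDataSyncObjects(string connectionString)
{
    // drop order matters: triggers first, then tables and types, and finally the schema itself
    var discoveryQueries = new[]
    {
        "select 'drop trigger [' + object_schema_name(object_id) + '].[' + name + ']' from sys.triggers where name like '%_dss_%'",
        "select 'drop table [DataSync].[' + name + ']' from sys.tables where schema_id = schema_id('DataSync')",
        "select 'drop type [DataSync].[' + name + ']' from sys.types where schema_id = schema_id('DataSync') and is_user_defined = 1",
        "select 'drop schema [DataSync]' where exists (select * from sys.schemas where name = 'DataSync')"
    };

    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        foreach (var query in discoveryQueries)
        {
            // build the drop statements for this type of object
            var drops = new List<string>();
            using (var cmd = new SqlCommand(query, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) drops.Add(reader.GetString(0));
            }

            // then execute them one at a time
            foreach (var drop in drops)
            {
                using (var dropCmd = new SqlCommand(drop, conn))
                {
                    dropCmd.ExecuteNonQuery();
                }
            }
        }
    }
}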

In the service I’ve written, the following function creates a copy of the database, removes the Data Sync objects, exports the copy to blob storage and then drops the temporary copy.  Doing this by hand is tedious, since you have to sit there and wait for the copy and export operations, which can take a while.

public string BackupDataSync(string database, out string filename)
{
    var srcDatabase = database;
    var tmpDatabase = database + "Backup";

    var masterDbConnectionString = string.Format(ConnString, _serverName, "master", _sqlUsername, _sqlPassword);
    var tmpDbConnectionString = string.Format(ConnString, _serverName, tmpDatabase, _sqlUsername, _sqlPassword);

    // make a copy that we can alter
    CreateCopy(srcDatabase, tmpDatabase, masterDbConnectionString);

    // remove the datasync tables/triggers/sprocs
    RemoveDataSync(tmpDbConnectionString);

    // export the copy to blob storage
    var reqId = Backup(tmpDatabase, null, out filename);

    // keep checking until the export is complete
    do
    {
        var response = GetStatus(reqId);

        if (response == "Completed") break;

        // wait 30 seconds before checking again
        Thread.Sleep(30000);

    } while (true);

    // drop the temporary database copy
    CleanupTmp(tmpDatabase, masterDbConnectionString);

    return reqId;
}

The last bit of getting this to work is to initialize the class, call the export and then perform the blob cleanup.  You will want to change the DacServiceUrl parameter to match the data center you’ll be using.  The current URLs are coded into the sample under the DacServiceUrls class, so you can use any of the following options (so long as the URLs don’t change):

  • DacServiceUrls.NorthCentralUs
  • DacServiceUrls.SouthCentralUs
  • DacServiceUrls.NorthEurope
  • DacServiceUrls.WestEurope
  • DacServiceUrls.EastAsia
  • DacServiceUrls.SoutheastAsia

var service = new AzureStorageService(
    ConfigurationManager.AppSettings["AzureServerName"],
    ConfigurationManager.AppSettings["AzureUserName"],
    ConfigurationManager.AppSettings["AzurePassword"],
    ConfigurationManager.AppSettings["AzureStorageAccountName"],
    ConfigurationManager.AppSettings["AzureStorageKey"],
    ConfigurationManager.AppSettings["AzureBlobContainer"],
    DacServiceUrls.NorthCentralUs	
    );

// perform the backup
string filename;    // filename generated for the bacpac backup
service.BackupDataSync("DatabaseName", out filename);

// cleanup the storage container
service.BlobCleanup();

Azure Backup Sample Application

I’ve written a sample application, a basic console app for testing purposes.  You can download the source from my GitHub project AzureDbBackup.  It is, however, missing the App.config file (mostly because I didn’t trust myself not to check in the credentials for my Azure account).  So you’ll need to add it yourself; you can copy and paste from below and fill in your information:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="AzureServerName" value="server.database.windows.net" />
    <add key="AzureUserName" value="server user name" />
    <add key="AzurePassword" value="password" />
    <add key="AzureStorageAccountName" value="blank" />
    <add key="AzureStorageKey" value="blank" />
    <add key="AzureBlobContainer" value="blank" />
  </appSettings>
</configuration>
  • AzureServerName – The address to your Azure SQL instance.
  • AzureUserName – Username to log into your Azure SQL instance.
  • AzurePassword – Password to log into your Azure SQL instance.
  • AzureStorageAccountName – Name of the storage account where the bacpac backups will be stored.
  • AzureStorageKey – Storage key (primary or secondary) to access storage account.
  • AzureBlobContainer – Container inside the above storage account.

My recommendation is that you give the console app a run first before integrating it into your application.  If you have any suggestions/fixes, I’m happy to integrate them.

Integrating into Your Application

Sure, you can run the above console application on an on-premises server and use Windows Task Scheduler to run it every night.  Or you can integrate it into something running on Azure that runs the code on a schedule for you.  If you are running your application on more than one instance, you probably don’t want to put it in the web application itself, but rather in a worker role.  Either way, essentially the same integration steps apply.

  1. Take the AzureStorageService.cs file and drop it into your application.  Or you can download the solution, build the class library and pull the DLL into your project (doing it this way will also give you the DAC service URLs for the data centers).
  2. Next you’ll have to set up a scheduler to perform the backups on a regular interval.  We used the Quartz.NET job scheduler to run it as a nightly backup job.  Below is a sample setup of a scheduled job using the Quartz scheduler.
  3. It should do its magic every night, and you can collect the kudos.

public class DatabaseBackup : Job
{
    public static void Schedule()
    {
        var jobDetails = JobBuilder.Create().Build();

        var storageAccountName = ConfigurationManager.AppSettings["AzureStorageAccountName"];
        var serverName = ConfigurationManager.AppSettings["AzureServerName"];
        var username = ConfigurationManager.AppSettings["AzureUserName"];
        var password = ConfigurationManager.AppSettings["AzurePassword"];
        var storageKey = ConfigurationManager.AppSettings["AzureStorageKey"];
        var blobContainer = ConfigurationManager.AppSettings["AzureBlobContainer"];

        var nightly =
            TriggerBuilder.Create()
                          .ForJob(jobDetails)
                          .WithSchedule(CronScheduleBuilder.DailyAtHourAndMinute(1, 30).InPacificTimeZone())
                          .UsingJobData("StorageAccountName", storageAccountName)
                          .UsingJobData("AzureServerName", serverName)
                          .UsingJobData("AzureUserName", username)
                          .UsingJobData("AzurePassword", password)
                          .UsingJobData("AzureStorageKey", storageKey)
                          .UsingJobData("AzureBlobContainer", blobContainer)
                          .StartNow()
                          .Build();

        var sched = StdSchedulerFactory.GetDefaultScheduler();
        sched.ScheduleJob(jobDetails, nightly);
        sched.Start();
    }

    public override void ExecuteJob(IJobExecutionContext context)
    {
        var storageAccountName = context.MergedJobDataMap["StorageAccountName"] as string;
        var serverName = context.MergedJobDataMap["AzureServerName"] as string;
        var username = context.MergedJobDataMap["AzureUserName"] as string;
        var password = context.MergedJobDataMap["AzurePassword"] as string;
        var storageKey = context.MergedJobDataMap["AzureStorageKey"] as string;
        var blobContainer = context.MergedJobDataMap["AzureBlobContainer"] as string;

        // initialize the service
        var azureService = new AzureStorageService(serverName, username, password, storageAccountName, storageKey, blobContainer, DacServiceUrls.NorthCentralUs);

        // make the commands for backup
        string filename;
        var reqId = azureService.BackupDataSync("PrePurchasing", out filename);

        // clean up the blob
        azureService.BlobCleanup();
    }
}
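
Schedule() just needs to be called once at startup.  Exactly where depends on your app, but something like this in Global.asax (or a worker role’s OnStart) does it:

// Hypothetical wiring: register the nightly job when the application starts.
protected void Application_Start()
{
    // ... routes, bundles and other startup code ...

    DatabaseBackup.Schedule();   // registers the 1:30 AM backup job and starts the Quartz scheduler
}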

Windows Azure Backups with Bacpac – Automating Cleanup (Part 3)

Welcome to part 3 of my Azure backup series.  This time we’ll be looking at automating the cleanup of your blob storage.  If you are using this method of backing up to blob storage (taking snapshots of the database every XX hours/days), eventually you’ll have a large build-up of old, outdated backups.  So rather than logging in to the Azure console every few days, why not automate the cleanup of your outdated bacpac files?

The following is a list of settings that are needed to get the cleanup working.

  • _storageAccountName – The name of the storage service account.
  • _storageKey – One of the access keys for the storage account (primary or secondary, it doesn’t matter).  Just make sure you don’t check it into a repository!
  • _storageContainer – Name of the storage container you’d like your bacpac files to be put into.
  • _cleanupThreshold – Number of days before a backup is considered outdated and deleted.  This needs to be a negative number; I do the conversion in the constructor to ensure it’s negative (see the sketch below).
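
The conversion itself is nothing fancy; a sketch of it (the parameter name is mine, and the real constructor takes the rest of the settings too):

// Sketch of the threshold conversion described above.
public AzureStorageService(/* server, credentials, storage settings, ... */ int cleanupThreshold)
{
    // callers pass "days to keep" as a positive number; store it negative so it
    // can be fed straight into DateTime.AddDays() when filtering old blobs
    _cleanupThreshold = -Math.Abs(cleanupThreshold);
}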

To do the overall cleanup, I take advantage of the managed Azure Storage APIs to get a list of all the blobs in the container and then delete the old ones.

First we want to set up a client.  It’s fairly straightforward: we just pass in a connection string and it does the rest.

var storageAccount = CloudStorageAccount.Parse(
     string.Format(CloudStorageconnectionString, _storageAccountName, _storageKey));
var client = storageAccount.CreateCloudBlobClient();

Next, we need to tell the client which container we want to look at and get a list of all its contents.  Then we filter down the list so that it only contains blobs that are past the threshold (the ones we want to delete).

var container = client.GetContainerReference(_storageContainer);
var blobs = container.ListBlobs(null, true);
var filtered = blobs.Where(a => a is CloudBlockBlob && 
     ((CloudBlockBlob)a).Properties.LastModified < DateTime.Now.AddDays(_cleanupThreshold))
     .ToList();

Finally, we make the calls to do the deletions.  Below is the basic call used to delete the files; in my full code I also return a list of the blobs that were deleted.

foreach(var item in filtered)
{
     var blob = (CloudBlockBlob)item;
     deleted.Add(blob.Name);
     blob.Delete();
}

That’s fairly straightforward, right?  The full class does a bit more, but the pieces above are all that’s necessary to do the cleanup.
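
Pieced together, the whole method looks roughly like this (the connection string constant and the returned list of names are my reconstruction from the description above, so check the full source for the exact code):

// Reassembled from the snippets above; requires the Windows Azure Storage client library
// (Microsoft.WindowsAzure.Storage and Microsoft.WindowsAzure.Storage.Blob namespaces).
private const string CloudStorageconnectionString =
    "DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}";   // assumed format

public List<string> BlobCleanup()
{
    var deleted = new List<string>();

    // connect to the storage account and get a handle on the container
    var storageAccount = CloudStorageAccount.Parse(
        string.Format(CloudStorageconnectionString, _storageAccountName, _storageKey));
    var client = storageAccount.CreateCloudBlobClient();
    var container = client.GetContainerReference(_storageContainer);

    // list everything in the container and keep only blobs older than the threshold
    var blobs = container.ListBlobs(null, true);
    var filtered = blobs.Where(a => a is CloudBlockBlob &&
         ((CloudBlockBlob)a).Properties.LastModified < DateTime.Now.AddDays(_cleanupThreshold))
         .ToList();

    // delete the outdated backups, keeping track of what was removed
    foreach (var item in filtered)
    {
         var blob = (CloudBlockBlob)item;
         deleted.Add(blob.Name);
         blob.Delete();
    }

    return deleted;
}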

Now sit back and pretend like you still have to do your cleanups every day to save the company money 🙂

Windows Azure Backups with Bacpac–Automating Exports (Part 2)

This is part 2 of my Azure backups “series”.  This post goes into exporting an Azure database to a bacpac file in blob storage.  Part 1 just went over a couple of options for backups and how to do it manually.

Today we’ll be looking at the simple case for exporting your databases.  By simple I mean you just have your database, with no Data Sync enabled on it.  If you are in a situation where you are using Data Sync, I’ve got a solution; it’s ugly but it works (you’ll have to wait for Part 4).

The basic idea is that we’re going to use the API to command Azure to export the database to a storage container.

I based my code on the following blog by Colin Farr:

http://www.britishdeveloper.co.uk/2012/05/export-and-back-up-your-sql-azure.html

It’s a great article and goes over how to do it.  Honestly, my code doesn’t change a whole lot, so a lot of the credit goes to Colin for his excellent example.  I do add the option of doing a selective export, but that’s very minor.

The function for performing the backup is part of a whole class (AzureStorageService in my AzureDbBackup project) that also handles the cleanup.

You basically pass in the name of the database and (optionally) a list of tables you’d like to export, and it gives you back a status id and the filename that the backup was saved to.  All of the credentials are set up through the constructor (not shown), but you could easily hard code them into the function if you wanted.
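
Calling it looks a lot like the Data Sync variant covered in Part 4.  A rough usage sketch, assuming GetStatus is exposed for polling and that the second parameter is the optional table list:

// Rough usage sketch; "service" is an initialized AzureStorageService (see Part 4 for the constructor).
string filename;    // filename the bacpac is saved to in blob storage

// null = export everything; pass a list of table names for a selective export
var reqId = service.Backup("DatabaseName", null, out filename);

// poll the DAC service until the export has finished
while (service.GetStatus(reqId) != "Completed")
{
    Thread.Sleep(30000);    // check every 30 seconds
}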

That’s it for now.

Windows Azure Backups with Bacpac (Part 1)

Backing up or snapshotting a Windows Azure database seems like it should be a pretty straightforward affair, and in reality it is, unless you have a more complicated situation.  Whether you just want a copy of your data locally or snapshots in time, it shouldn’t cost you an arm and a leg to maintain.

There are several solutions floating around out there as to how to accomplish backups.  Here are just a few that you can look into if you care to (these are just the ones that don’t involve paying a 3rd party vendor to implement):

  1. Database Copy
  2. Azure Data Sync
  3. Export to Bacpac

You can read up on Microsoft’s official suggestions here.

Personally, I have a bit of a problem with options 1 and 2.  They both require you to have another SQL server running (either an Azure or local database); the bottom line is that if you are using an Azure database, you are essentially doubling your costs.

That leaves us with exporting to a bacpac file.  I like this option since it relies on relatively cheap blob storage and gives you the option to just download the file and save it locally.

Manual Method

The simple way to export your database to a bacpac file is to open up SQL Server Management Studio and follow these steps:

  1. Connect to your Windows Azure database server
  2. Right click on the database
  3. Select “Tasks”
  4. Select “Export to Data Tier Application”
  5. Under the Export Settings, select Windows Azure and fill out the credentials
  6. Celebrate!

You can also perform the export from the Azure Management Console:

  1. Log in to the Azure Portal http://windowsazure.com
  2. Select Database
  3. Click on Export, fill out credentials.

That was easy, right?  But who wants to actually do this manually every day or every 6 hours?

DataSync and Bacpac

There is one case that has been driving me up a wall.  Part of our database uses Data Sync to get some on-premises lookup data into our Azure database.  This brings up a problem, since Data Sync adds some miscellaneous tables, stored procedures and triggers.  The triggers added to your tables are incompatible with bacpac and cause an error on import.  It doesn’t give an error on export, but the second you try to import, it throws an error about missing semi-colons.

Microsoft seems to be aware of this issue, but I’m not seeing any traction on a fix.  There is an open ticket about this.

Update 1/11/2013:  It would appear the issue with restoring a bacpac file that previously had Data Sync enabled has been resolved.  However, when you try to re-enable Data Sync on the restored database it can be extremely slow, depending on the amount of data being synced.  With approximately 1 GB of sync data, it took more than 4 hours before I gave up.  It really depends on your recovery situation.  If you must have the database restored ASAP and are not concerned about a little stale data, you can do a plain restore and the original tables with their data will still be there.  However, I prefer to have the latest data from Data Sync available, so in Part 4 I have a solution for that.

Automating Bacpac Export

Doing the exporting manually just isn’t an option for most of us. So in the coming posts I’ll be showing the following:

  • Part 1 : Basic Introduction to Azure Backups
  • Part 2 : Automating Exports
  • Part 3 : Automating Cleanup
  • Part 4 : Automating Exports with Data Sync enabled

SQL Merge Replication over SSL

I’ve spent the last few days pulling my hair out trying to get SQL merge replication over SSL working on SQL Server 2008 R2.  The situation is basically this: I have a SQL Server behind our corporate firewall with some data.  We are setting up a SQL VM on Azure and need to continuously replicate data out to the Azure server.  The only built-in option in SQL Server is to perform merge replication with the help of an IIS server.

To understand the setup you can refer to this MSDN article; it does a decent job, so I’m not going to retype it here.

There are a few catches and gotchas that drove me crazy, and I had to scour the web to find answers on why things weren’t working.

These are the things that I used; it may not be best practice, but it worked out in the end:

  • SQL Server 2008 R2 (with source data, inside corporate firewall) (Publisher)
  • SQL Server 2008 R2 Azure VM (server to have data replicated to) (Subscriber)
  • Windows 2008 R2 with IIS installed (WebServer)
  • SSL Cert for above web server (not self-signed)
  • Domain Account, used to access the publication
  • Local Account, on WebServer for IIS application pool
  • Publicly available DNS name (I’ll use https://replication.domain.com)
  • SQL Management Studio (SSMS)

Setup of WebServer

All the instructions I read really only covered configuring the website in IIS for replication, but no one really described what you needed from scratch.  I started with a fresh install of Windows 2008 R2.

  1. Add the “Web Server” role with the following features (in addition to default features):
    • Application Development > ISAPI Extensions, ISAPI Filters
    • Security > Basic Authentication
    • Management Tools > IIS Management Console, IIS Management Service
  2. Configure IIS for Web Synchronization; there is a great step-by-step guide at MSDN.  While going through that guide, the two accounts it mentions are the ones listed above in the requirements.  The domain credentials are the ones you will use at the end to test the replisapi.dll connection, once Basic Authentication is enabled.

There were, however, two issues when following the MSDN article.

  • If you are on a 64-bit machine and you are receiving an “HTTP 500.0 – Internal Server Error”, the replisapi.dll that was copied may not be the correct version.  The two versions are different sizes and I believe are compiled for 64-bit and 32-bit.  Even if you follow the instructions and use the wizard, it still copied the wrong version in my case; I believe I had to copy the version from the x86 directory.
    • C:\Program Files\Microsoft SQL Server\100\COM\
    • C:\Program Files (x86)\Microsoft SQL Server\100\COM\
  • If you get a plain page with “Access Denied” when going to http://replication.domain.com/replisapi.dll?diag, it is most likely because the credentials you are using need to be an Administrator on the WebServer.  In my case, I was using the domain credentials, so I had to add them to the local Administrators group.

Hopefully at this point you can see the replisapi.dll?diag page and we can move on to configuring the replication.

Configuring Replication

Part of the trick with this step is that you apparently have to set up and do the initial replication locally; you cannot set up replication over SSL until after you have set up both the Publisher and Subscriber.  Once they are set up, you simply change over the settings and theoretically it’s good to go.

This part of the setup is fairly straightforward.  On the publisher, open up replication in SSMS and add a new “Local Publication”, being sure to select “Merge Replication”.  If you select the default snapshot location, make sure you share the directory and call the share “ReplicationSnapshots”:

Snapshot Directory: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\repldata\unc\[server_snapshot name]

Share Directory: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\repldata

Share UNC Path: \\[servername]\ReplicationSnapshots

Be sure to grant your domain account read-only access in both the directory security and the share permissions.

On the subscriber, if it’s off site (e.g. a SQL Azure VM), you’ll need to connect into your corporate network using VPN to do the initial configuration.  In SSMS, open up replication and add a new “Local Subscription”, pointing it to the publisher and selecting the merge publication.  Once you finish the wizard, it will fail the first time because the default directory it looks at for the snapshot is inaccessible.

For some reason or another it sets it up using the local path instead of the UNC path.  Simple fix: right click on the subscription and click on Properties, scroll down to the “Snapshot” section, change “Snapshot location” from default to “Alternate Folder” and set “Snapshot folder” to the UNC path shared above.  Once this is complete, open up the “Synchronization Monitor” and run it; it should replicate the data to your subscriber.

Configure Web Synchronization

If you’ve made it this far, then you are almost done!  Now that merge replication is working, we just need to point it to our WebServer and have it make its data requests through that server.

First we’ll configure the publisher.  In SSMS, open up the replication folder, right click on the Local Publication you created earlier and select “Properties”.  Check “Allow Subscribers to synchronize…” and fill in the address of the web server.

[Screenshot: Publication Properties – Web Synchronization settings]

Next we’ll configure the subscriber; it’s just as simple as the publisher.  Open up a connection in SSMS to the subscriber and go to the replication folder.  Right click on the subscription created before and select “Properties”.  Scroll down to the “Web Synchronization” section, change “Use Web Synchronization” to true, fill in the web server address, and change the Web Server Connection to use your domain credentials (it should change to “Basic Authentication” as seen below).  You should be able to run the synchronization from the subscriber without VPN now.

[Screenshot: Subscription Properties – Web Synchronization settings]

Sit back and let the kudos roll in

Theoretically, if nothing funky goes wrong, you should be ready to go.

Update: It appears I overlooked one important detail.  In my case, my subscriber is a SQL Azure VM (using SQL authentication) and my publisher is a local VM using integrated authentication.  The subscriber is not part of the domain; the publisher is.  And if you’ve used SSMS in the past, you’ll know that it doesn’t let you enter domain credentials when selecting integrated authentication.  So how do we solve this problem?  I found this great little solution posted here.

The basic solution is this:

runas /netonly /user:DOMAIN\username ssms.exe

The /netonly switch starts SSMS under your local account but presents the supplied domain credentials for any network connections, so you can authenticate against the publisher even though your machine isn’t joined to the domain.

Exchange Calendar Syncing with Exchange Web Services (EWS) API

I’ve been working with the Exchange Web Services (EWS) Managed API on and off for the past year, and surprisingly there isn’t all that much documentation.  Sure, there is the standard Microsoft documentation (here), but it only gives basic examples.  As it turns out, that worked for the majority of the functions I needed for my project, but there is one problem the API couldn’t handle.

The project I’ve been working on is an appointment scheduling application.  The basics are that the application maintains a master calendar for a number of people, and they each specify when they are available for appointments and for what type of appointment.  These appointments are displayed in the program but are also written out to their Exchange calendars.  Keeping these appointments in order would be all fine and dandy, except the appointments aren’t read-only.

EWS does have a function that is supposed to help synchronize the calendar to the database.  The function, “SyncFolderItems”, takes a sync state (which it returns to you when you run the function), and using this sync state EWS returns any changes to the calendar since that sync.  However, it doesn’t seem to work all that well and appears to be quite fragile (it is probably worth noting that I am working against an Exchange 2007 SP1 server and not 2010).  The sync might work at first, but after some amount of time or some condition it just stops returning any changes at all.  And it’s hard to debug, since EWS just tells me no changes happened.

This is the MyAppointment class that is used in the program.

public class MyAppointment
{
    public int Id { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }

    public string ExchangeId { get; set; }
}
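
Both snippets below call an InitializeService helper that I haven’t shown; a minimal version looks something like this (the credentials, Exchange version and impersonation setup are placeholders you’d swap for your own):

// Minimal sketch of InitializeService; credentials and version are placeholders.
private ExchangeService InitializeService(string mailboxId)
{
    var service = new ExchangeService(ExchangeVersion.Exchange2007_SP1)
    {
        Credentials = new WebCredentials("serviceAccount", "password", "domain")
    };

    // resolve the EWS endpoint and impersonate the mailbox being synced
    service.AutodiscoverUrl(mailboxId);
    service.ImpersonatedUserId = new ImpersonatedUserId(ConnectingIdType.SmtpAddress, mailboxId);

    return service;
}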

The following is the code for using the EWS sync function.

public string SyncChanges(string mailboxId, IEnumerable<MyAppointment> appointments, string syncState)
{
    var service = InitializeService(mailboxId);

    var changeLog = service.SyncFolderItems(
        new FolderId(WellKnownFolderName.Calendar, mailboxId),
        PropertySet.FirstClassProperties, null, 512,
        SyncFolderItemsScope.NormalItems, syncState);

    foreach (var changedItem in changeLog)
    {
        var appt = appointments.Where(a => a.ExchangeId == changedItem.ItemId.UniqueId).FirstOrDefault();

        if (appt != null)
        {
            switch(changedItem.ChangeType)
            {
                case ChangeType.Update:
                    var appointment = (Appointment) changedItem.Item;
                    appointment.Start = appt.Start;
                    appointment.End = appt.End;
                    appointment.Subject = appt.Subject;
                    appointment.Body = appt.Body;

                    // write the change back to exchange

                    break;
                case ChangeType.Delete:

                    var newAppointment = new Appointment(service);

                    newAppointment.Start = appt.Start;
                    newAppointment.End = appt.End;
                    newAppointment.Subject = appt.Subject;
                    newAppointment.Body = appt.Body;

                    // write the change back to exchange

                    break;
                default:
                    break;
            }
        }
    }

    return changeLog.SyncState;
}

However, the above code doesn’t work for whatever reason.  Instead I wrote the following function that essentially does the same thing, but only syncs data from one day back to two months ahead.  It takes a few more lines of code, but LINQ makes the comparisons easy.

public void SyncChanges(string mailboxId, IEnumerable<MyAppointment> myAppointments)
{
    // get appointments from one day back to two months out
    var searchFilter = new SearchFilter.SearchFilterCollection(LogicalOperator.And);
    searchFilter.Add(
        new SearchFilter.IsGreaterThanOrEqualTo(AppointmentSchema.Start, DateTime.Now.AddDays(-1)));
    searchFilter.Add(
        new SearchFilter.IsLessThanOrEqualTo(AppointmentSchema.End, DateTime.Now.AddMonths(2)));
    searchFilter.Add(
        new SearchFilter.IsEqualTo(ItemSchema.ItemClass, "IPM.Appointment"));
    // get back 512 results max
    var view = new ItemView(512);
    // make the remote call
    var service = InitializeService(mailboxId);
    var results = service.FindItems(
        new FolderId(WellKnownFolderName.Calendar, mailboxId), searchFilter, view);

    // get the distinct ids from exchange
    var exchangeIds = results.Select(a => a.Id.UniqueId);

    // get the distinct ones we have
    var dbIds = myAppointments.Select(a => a.ExchangeId);

    // find the ids that are in the db but not in exchange
    var missing = dbIds.Where(a => !exchangeIds.Contains(a));

    var newAppts = myAppointments.Where(a => missing.Contains(a.ExchangeId)).Select(
                        a => new Appointment(service) {Start = a.Start
                                                     , End = a.End
                                                     , Subject = a.Subject
                                                     , Body = a.Body});

    // get the exchange objects we do have in the db
    var appts = results.Where(a => !missing.Contains(a.Id.UniqueId) && dbIds.Contains(a.Id.UniqueId))
                       .Select(a => (Appointment)a);

    var changedAppts = new List<Item>();

    // find the changed appointments
    foreach (var appointment in appts)
    {
        // get the db appointment object
        var appt = myAppointments.Where(a => a.ExchangeId == appointment.Id.UniqueId).FirstOrDefault();

        // compare the time stamps and determine if we need to make a change
        if (appt != null)
        {
            if (appt.Start != appointment.Start || appt.End != appointment.End)
            {
                // make the changes
                appointment.Start = appt.Start;
                appointment.End = appt.End;
                appointment.Subject = appt.Subject;
                appointment.Body = appt.Body;

                // add it to the list of ones needed to change
                changedAppts.Add((Item)appointment);
            }
        }
    }

    // write the new and updated objects to exchange
}
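
The last comment in the function glosses over the actual write-back.  Inside the method it boils down to a couple of EWS calls, roughly like the following (the SendInvitationsMode and ConflictResolutionMode options here are just sensible defaults, not the exact ones from my code):

// Sketch of the write-back step at the end of SyncChanges.
foreach (var appointment in newAppts)
{
    // recreate the missing appointments on the calendar without sending invitations
    appointment.Save(new FolderId(WellKnownFolderName.Calendar, mailboxId), SendInvitationsMode.SendToNone);
}

foreach (Appointment appointment in changedAppts)
{
    // push the program's values over whatever was changed in Outlook
    appointment.Update(ConflictResolutionMode.AlwaysOverwrite, SendInvitationsOrCancellationsMode.SendToNone);
}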

The end result is that if a user moves or deletes an appointment managed by the program, the program can periodically go back and sync the calendar.