Editing SharePoint Pages Using Visual Studio And WebDAV

I blog in a few different places.  I have my personal blog, my professional blog, and I maintain a blog at work to inform and educate co-workers.  At work, the blog is hosted on the company’s SharePoint server, which is fine.  I am still able to use Windows Live Writer and with it, the Insert Code plug-in.  My other two blogs use WordPress.

The Insert Code plug-in is invaluable to me because it does nice color coding of the text.  As part of that feature, it inserts a CSS style block into your post.  SharePoint doesn’t play well with this.  It tries to, but fails.  The intent is good.  SharePoint wraps your whole post in a div and gives it a class with a random name, then it rewrites the CSS styles so the classes will be scoped to only that containing div class.  Pretty smart way of encapsulating the styles.

Unfortunately, it fails on two points.  First, the containing div’s class is not just class=”123456789abcd”; SharePoint always prefixes the class name with “ExternalClass”, so you get class=”ExternalClass123456789abcd”.  The rewritten CSS, however, makes no mention of “ExternalClass”.

The second mistake is in the rewritten CSS.  Your post will have a style block rewritten similar to:

123456789abcd h1 {color: red;}
123456789abcd h3 {color: green;}
123456789abcd .bold {font-weight: bold;}

Do you see the problem?  The class on the div is 123456789abcd (actually ExternalClass123456789abcd), but the stylesheet doesn’t scope the rules to that class at all.  Those are element selectors – the stylesheet is looking for HTML tags of <123456789abcd>.
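For comparison, here is roughly what the rewritten rules would need to look like to actually work (the class name is the example one from above, not a real generated value):

```css
/* broken: these read as element selectors, matching tags like <123456789abcd> */
123456789abcd h1 {color: red;}

/* working: class selectors carrying the full "ExternalClass" prefix */
.ExternalClass123456789abcd h1 {color: red;}
.ExternalClass123456789abcd h3 {color: green;}
.ExternalClass123456789abcd .bold {font-weight: bold;}
```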

So, what can you do about this?  My solution was to put the stylesheet right in the template page so all the posts will be able to use those classes – that’s why I created this post.  The problem is, I couldn’t find anything in the SharePoint control panel to add a custom stylesheet (unlike WordPress, right?).  There was an option to edit the page using SharePoint Designer, so I installed Designer, only to find out the administrator didn’t allow editing using Designer. 

Other places on the Internet suggested adding a Content Editor web part and putting the stylesheet in there.  I tried it half-heartedly and gave up because it seemed way too “hack-y”.  But while doing so, I was reminded of something I used to know about SharePoint: you can browse the site’s files using WebDAV (assuming you have permissions).

So, what I did was map a network drive (a command found in many places in Windows Explorer) and gave it the URL of my SharePoint site.  Right away, I got an Explorer window with the template files.  I edited Default.aspx and Post.aspx and added my stylesheet.  The formatting was immediately applied.  Then I edited all my previous posts and removed my inline style code blocks to save space and reduce complexity.  Everything works now.

Windows 10 Groove Music – Zune On


With the release of Windows 10 comes a new music application, Groove Music.  Groove Music has Zune DNA, except that it loses any Windows Media Player (WMP) or Zune syncing capability.  The assumption is that the mobile phone is the new MP3 player and file copy is the preferred method of syncing.  For better or worse, this is the new normal.

Groove is much closer to the aesthetics of Zune than of WMP, and aside from the lost syncing capability and the toned-down Now Playing screen, it’s a reasonable Zune replacement – as a music player.  Syncing, well… not so much.  You have your usual views: Artist, Album, Song, Playlists, plus Albums for an Artist.  Genre view is missing.  Typing will expand the hamburger menu and put the text in the search box, providing immediate search.  Of course you have the Marketplace, to purchase and download more content.

Technical Details

Groove is a successor to Zune, although the outward branding does not hint at it.  The code library is called ZuneMusic and is found at %userprofile%\AppData\Local\Packages\Microsoft.ZuneMusic_8wekyb3d8bbwe.  In the subfolder LocalState you will find plenty of runtime details.  LocalState has a folder for the database, which is in ESENT format.  There are the ImageCache and imageStore folders that hold album artwork and artist photos from the Zune web services.

As far as the database is concerned, it seems to be similar, if not identical, to the old Zune database, which was in SQL Server Compact format.  The most relevant tables are tblAudioAlbum, tblPerson, tblGenre, and tblTrack, which hold the music metadata, and tblFolder and tblFile, which hold the physical file references.

There are utilities and libraries to work with ESENT databases.  One is called ESENT Workbench.  If you do want to play around with the database, you may need to do a repair on it because it may not have shut down cleanly.  You can run the command “esentutl.exe /p EntClientDb.edb” to clean up the files for reading.

The Groove Music app also uses a couple of other packages extensively: Microsoft.Windows.CloudExperienceHost and Microsoft.Windows.ContentDeliveryManager, but probably not for primary functions.  The majority of data is likely pulled from the Zune web services, since the entries in the metadata database tables have references to GUIDs that, when used with the web services, retrieve the proper artist or album info.

The database for Groove Music appears to sync between computers, which makes a lot of sense for cloud-based music, but may get hairy when different machines have different local files.

Extension Ideas

What can be done with having access to the music database?  My impetus for this research was trying to change the Now Playing slideshow to use all the artist pictures like Zune did, instead of a single album picture.  I haven’t gotten that far yet.  But some ideas for apps would be:

  • Statistic app showing most played artists, albums, songs, genres
  • Smart playlist generator based on statistics
  • Statistics on files: sizes, bitrates, dates, and something intriguing called FingerprintData
  • Utility to clean, purge, delete, export library
  • Post Now Playing, Recently Played to social media
  • Create a smart sync utility that utilizes the library’s metadata with file copy

MSBuild error MSB4057: The target “Package” does not exist in the project.

Every three weeks we release an update to our websites and web services.  To make this release easier, I created a batch file that would build the projects and deploy each one to our four web servers.  The last few times I tried this, my batch file failed running this command:

c:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe C:\Projects\Portal\Portal1.vbproj /nologo /verbosity:minimal /t:Package /p:Configuration=Release

C:\Projects\Portal\Portal1.vbproj : error MSB4057: The target "Package" does not exist in the project.

So then each time, I would have to manually deploy the web sites with One-Click Publish.  Today, I decided to resolve this problem.

Because the Internet has a long memory and because the way you deploy from Visual Studio has changed frequently and recently, it was very difficult to determine the current best way to automate the task I wanted.

The first promising solution was to install a project called “CommunityTasks” and import it into your project.  Did that.  Didn’t work.  Read further and learned I needed to install the Azure SDK (This would haunt me for a long time).  Still, none of the example command lines worked.

Then I learned that some publishing settings had been moved from the project file to the publishing profiles.  Fine, I could handle that.  I created a new publishing profile that created a package.  However, I couldn’t figure out how to execute that publishing profile from the command line.

In the end, I decided I would create the deployment packages manually in VS with One-Click Publish, then execute a batch file that would run the package’s deploy.cmd files for each project to each server.  This would actually result in a faster deployment because I wouldn’t have to wait for each project to build in the deployment batch file.  And using the /k switch, I could launch multiple deployments at once.  For example:

start cmd /k "Portal1.bat"
start cmd /k "Portal2.bat"
start cmd /k "Portal3.bat"
start cmd /k "Portal4.bat"

And each batch file for the project would install to each server:

c:\Build\Package\portal1.deploy.cmd /Y /M:http://server01/MSDeployAgentService /U:deploy /P:deploy -enableRule:DoNotDeleteRule
c:\Build\Package\portal1.deploy.cmd /Y /M:http://server02/MSDeployAgentService /U:deploy /P:deploy -enableRule:DoNotDeleteRule
c:\Build\Package\portal1.deploy.cmd /Y /M:http://server03/MSDeployAgentService /U:deploy /P:deploy -enableRule:DoNotDeleteRule
c:\Build\Package\portal1.deploy.cmd /Y /M:http://server04/MSDeployAgentService /U:deploy /P:deploy -enableRule:DoNotDeleteRule

Still Not Giving In

I’m still using MS Money.  And I’ve come across a couple of instances of it beginning to lose compatibility with modern systems.  So now, I’ve actually started creating workarounds for them.

I’ve used a variety of online accounts in my many years.  I’ve used HSBC, Capital One (the non-360 variant), Sallie Mae, and most recently, Ally.  At this point, I’ve decided Ally is getting all my business and I’ve been in the process of moving accounts into new Ally subaccounts, which is very easily done on their part.  Just today, I discovered the transaction download feature.  There’s no MS Money OFX option, but I don’t think Money existed anymore when Ally came on the scene.  Anyway, there is a Quicken download, so that is what I use.

MS Money is awesome in that it supports QFX files; however, the standard format of the file must have moved on over time, so now Money throws up when it tries to process the file.  After a bunch of trial and error, I discovered that the reason for the error is a node in each transaction entry for the check number: <CHECKNUM>0</CHECKNUM>.  Once you strip that node out, the file imports just fine.

In another case, my 401k provider, Transamerica, recently revamped their transaction download, and their QFX files have a different problem.  The file headers look like:

OFXHEADER: 100
DATA: OFXSGML
VERSION: 102

But there is a space after each colon, which causes MS Money to report that the file is corrupt.  The headers should look like:

OFXHEADER:100
DATA:OFXSGML
VERSION:102
So I made a script that will alter the QFX file and then launch the Money importer.  All you have to do is drag the QFX file onto the VBS file and you’re good to go.  If you want to get clever, you can put the script in your SendTo folder or map it as a default application.

Without further ado, this is the content of the script:

dim fso,f,s,shell

set fso=CreateObject("scripting.filesystemobject")

set f=fso.OpenTextFile(WScript.Arguments(0),1)
s=f.ReadAll
f.Close
set f=nothing

set f=fso.OpenTextFile(WScript.Arguments(0),2)
f.Write Replace(Replace(s,"<CHECKNUM>0</CHECKNUM>",""),": ",":")
f.Close
set f=nothing

set fso=nothing

Set shell = CreateObject("Shell.Application")
shell.ShellExecute "C:\Program Files (x86)\Microsoft Money Plus\MNYCoreFiles\mnyimprt.exe",  WScript.Arguments(0)
set shell=nothing

And then, you can import QFX files from Ally or Transamerica (and maybe some others that have the same problems) into MS Money without any errors.
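If you’d rather not use VBScript, the same cleanup is trivial in any language.  Here is a rough Python equivalent of the fix-up portion (the helper name is mine, and like the VBScript, the colon fix is a blunt whole-file replace, so a memo field containing ": " would be altered too):

```python
import sys

def clean_qfx(text):
    """Apply the two fixes MS Money needs: drop the empty check number
    nodes and remove the space after the colon in the file headers."""
    text = text.replace("<CHECKNUM>0</CHECKNUM>", "")
    # turns headers like "OFXHEADER: 100" into "OFXHEADER:100"
    return text.replace(": ", ":")

if __name__ == "__main__" and len(sys.argv) > 1:
    # rewrite the QFX file passed on the command line, in place
    path = sys.argv[1]
    with open(path) as fh:
        data = fh.read()
    with open(path, "w") as fh:
        fh.write(clean_qfx(data))
```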

Two-Factor Authentication Primer

I recently implemented two-factor authentication into a web app and since it was a new concept for me, I thought it would be good to explain the highest conceptual level of this process.  As with a lot of new things, there’s some terminology to learn and there’s a need to understand how all the pieces fit together.

First, what does it take to integrate this with an existing profile login?  You need a new database field and a bit of extra code for opting in and out of two-factor authentication.  Ideally, you’ll also want a library for generating a QR code.

Before I get too much into it, these are some of the elements of the process.  There are three pieces of data involved:

  • Shared Secret: This element is stored in the database with the user profile and is never exposed outside your application.
  • Secret Key: This is an encoded (typically Base32) version of the Shared Secret.  It is given to the user by your application and the user enters it into their authenticator application.
  • Code: The numeric value generated by the authenticator application.  This changes every 30 seconds by default.

In brief, your application and the authenticator application both use the current time plus the Secret Key to generate a Code.  If they match, the user is authenticated.
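For the curious, here is roughly what that generation step looks like, following the TOTP algorithm from RFC 6238 (this is my own sketch, not code from any particular authenticator):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_key, for_time=None, step=30, digits=6):
    """Derive the Code from the Base32 Secret Key and the current time."""
    key = base64.b32decode(secret_key.upper())   # recover the raw Shared Secret
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                     # both sides land in the same window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)
```

Your application would call this with the user’s Secret Key and compare the result to what the user typed; in practice you also accept the adjacent time windows to tolerate clock drift.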

To implement this, you would modify your user profile page to provide a button to enable two-factor.  When the button is clicked, you create a random Shared Secret and save it to their profile.  You use that Shared Secret to generate and return the Secret Key.  The user puts that Secret Key in their authenticator app and the opt-in is complete.

When the user logs in to your application, if they have a Shared Secret set in their profile, they are prompted to enter the Code from their authenticator app.  Your application compares that Code to the Code it generates itself, using the Secret Key (built from the Shared Secret).  If it is the same, the user is logged in.

It really is simple.  The only thing that isn’t clear, but can be found with some moderate Internet searching, is the URL to embed in the QR code.  That URL is: otpauth://totp/{0}?secret={1}, where {0} is the name of the profile to use (either your application name or the user’s username or both) and {1} is the Secret Key.  Authenticator apps allow manual entry of Secret Keys, so if you don’t provide a QR code, it’s still workable, just a bit tedious.
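Building that URL is one line; the only subtlety is URL-encoding the profile name (the names and secret below are examples):

```python
from urllib.parse import quote

def otpauth_url(label, secret_key):
    """Build the otpauth URL that gets embedded in the QR code."""
    return "otpauth://totp/{0}?secret={1}".format(quote(label), secret_key)
```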

Some of the other pieces you’d need are functions to reset the Shared Secret or clear it, if the user wanted to opt-out.  This is simple user account maintenance.  With a simple implementation, you could blank out the Shared Secret on a “forgot password” action.  With more sensitive data, you may want a second code to allow a password reset.  The big concern is users who have lost their phone or wiped out their authenticator application entries.

Because two-factor authentication is so simple and has such a low impact on existing user profile data structures (relative to OAuth), plus the fact that it can be opt-in, it’s really a no-brainer to add it to your applications.

SSRS ReportViewer NullReferenceException on Dispose

I recently assisted on troubleshooting an error in a utility application where an exception was being thrown on the dispose of a Microsoft.Reporting.WebForms.ReportViewer.  The environmental conditions were pretty specific, so it’s possible you’d never see something like this in your environment.  But if you do, here’s how you can work around it.

The specific condition is that we have a shared library of code for both desktop and web applications.  One of the functions in that library takes some parameters for an SSRS report and returns a byte array for a rendered PDF of the report.  Because the library initially was used exclusively by the website, the WebForms version of the ReportViewer was used.  As time went on, the library was used by desktop apps and windows services.  That’s when the trouble began.

So, if you are using a WebForms.ReportViewer in a desktop application, you may get this exception when disposing the instance.  Digging into the decompiled code for the ReportViewer control suggested it was because there was no HttpContext available.  For us, the long-term fix was clear: use the WinForms version of the ReportViewer.  In the short term though, adding this line of code resolved the error:

If HttpContext.Current Is Nothing Then HttpContext.Current = New HttpContext(New HttpRequest(IO.Path.GetRandomFileName, "http://www.google.com", ""), New HttpResponse(IO.TextWriter.Null))

This created an HttpContext where there was none before, and the ReportViewer instance was able to be disposed without an error.

In Defense of Whitespace

There is a trend that I’ve been seeing recently that I find somewhat disturbing.  It primarily manifests itself with C# programmers, who also tend to be really indignant when questioned about it.  The issue is whitespace in source code.
When I see a big block of C# code and all the text is crammed into as little space as possible, it is very difficult to read.  This means that it takes longer to figure out what the code does.  This means that it takes me longer to do my job, making me more expensive to my employer or client.  Where is the efficiency in that?  Further, although unrelated to this post, is the use of significant coding shortcut expressions in C#, which the developers praise as efficient and elegant, but which make the code barely intelligible.
Whenever I ask one of the coders about this, their answer is that whitespace is for people and compilers don’t need whitespace.  That response baffles me because it sounds like they are arguing my point.  The whole point of source code is to be human-readable.  But, somehow in their mind, it sounds like whitespace slows the application down.
Years ago, I read an excellent book, Developing User Interfaces for Microsoft Windows.  Although it’s rather outdated now, it had a lot of good advice in it, and one of the tips was to make use of whitespace for code clarity.  Up until that time, I didn’t pay much attention to blank lines and I had a different indenting scheme than what was standard.  But then I changed both of these and my code immediately became more readable.
Although it may sound a bit obvious, I demand whitespace in my code because I am a writer and an avid reader.  I need paragraph breaks to indicate to me when a topic is changing or a new thought is starting.  If you treat writing a program like writing a story, your code will be much easier to understand; and to echo the C# developers, the compiler won’t care.
Aside from the line breaks between methods and between logical code blocks within methods, I like to put all my variable declarations at the beginning of the method.  It introduces you to all the characters in the chapter and gives you an idea of how complex the plotline of the chapter is.  This is also out of fashion with the current declare-just-before-use style.
One of my other structural designs that goes against the current fashion is to put my properties at the beginning of the class instead of at the end.  This is the same structure as a UML diagram, so I’m not sure why that design practice changed.  With methods, I try to put all my event handlers first, then order the methods by their access level (public, friend, protected, private).  Finally, I put methods that are called by other methods later in the class, so if you need to reference a called method, you almost always scroll down instead of up to find the called method.  This is made easier since private methods are last in the class.

Reading this post without whitespace is what it is like to read source code without whitespace.  It sucks.

They Thought They Could Stop Me.

I was working with a DataGrid that had a ButtonColumn in it.  I had a need to set the CommandArgument for this button.  Did you know there is no way to set a CommandArgument for a ButtonColumn?


I was all prepared to grab that control and set that property in the ItemDataBound event, but it doesn’t seem to exist.  Most people would resort to a template column, stick a button in it and work on that control.  Problem was, I was doing everything in code with no markup.  That adds a little complexity to that alternative.

Setting a simple breakpoint in the ItemDataBound event, I looked a little closer at what I had to work with in the Immediate window.

? e.Item.Controls(3)
     System.Web.UI.WebControls.TableCell: {System.Web.UI.WebControls.TableCell}
? e.Item.Controls(3).Controls.Count
1
? e.Item.Controls(3).Controls(0)
     System.Web.UI.WebControls.DataGridLinkButton: {Text = "Edit"}

Hmm, it’s a DataGridLinkButton.  And it does have a CommandArgument property.  So let’s find that control and cast to that type and set that property.


I see.  So this type is not user-accessible.  It doesn’t even show up in the Object Browser.  However, it does show up in Reflector, and I can see that it inherits from LinkButton, which is public.  Let’s whip up a quick function to find that control and return a LinkButton for setting the CommandArgument.

Whoa, slow down a bit.  This is a ButtonColumn and it can be a link button, command button, or an image button.  If we have a function specifically for LinkButton, it’s potentially going to error out.  In the typical, excellent design of the .NET framework, these three button types are all related through the IButtonControl interface, which has properties for CommandName and CommandArgument.  So by using the interface instead of the exact type, we’re being safe and future-proofing ourselves against other button types.

Private Function GetButtonColumnButton(row As DataGridItem, commandName As String) As IButtonControl
    Return RecurseRowControls(row, commandName)
End Function

Private Function RecurseRowControls(ctl As Control, commandName As String) As IButtonControl
    Dim btn As IButtonControl

    ' loop through embedded controls
    For Each c As Control In ctl.Controls
        btn = TryCast(c, IButtonControl)

        ' if it is a button and the command name matches, return it
        If btn IsNot Nothing AndAlso String.Compare(btn.CommandName, commandName, True) = 0 Then
            Return btn
        End If

        ' if the control has child controls, search them for the button
        If c.HasControls Then
            btn = RecurseRowControls(c, commandName)
            If btn IsNot Nothing Then Return btn
        End If
    Next

    ' no button found
    Return Nothing

End Function

And just like that, we can now have access to the button’s properties like Text, CommandName, CommandArgument, and CausesValidation.  That’s some great stuff there.

Private Sub Grid_ItemDataBound(sender As Object, e As DataGridItemEventArgs) Handles Me.ItemDataBound
    Dim btn As IButtonControl

    If e.Item.ItemType = ListItemType.AlternatingItem Or e.Item.ItemType = ListItemType.Item Then
        btn = GetButtonColumnButton(e.Item, "Edit")
        btn.CommandArgument = "something like an ID"
        btn.Text = "specific text label"
    End If
End Sub


Saving Objects–Simple, Not Difficult

XML can be a wonderful thing.  Storing XML in SQL Server can be wonderful, too.  It gives you a place to store a lot of data in one field and is especially useful if that data is considered one unit of data.  A good example of this would be storing user preferences, for example, a stripped-down table like:

create table UserPreferences (
    UserID int not null,
    Preferences xml
)
So that’s one field: preferences.  Now let’s say that we have some items we want to store, like a few general preferences and some dialog boxes where the user checked the “Do not show this message again” option:

Public Class UserPreferences

    Private _UserID As Integer
    Private _General As GeneralPreferences
    Private _DoNotShowMessages As DoNotShowMessagesPreferences

    Public Property General As GeneralPreferences
        Get
            If _General Is Nothing Then _General = New GeneralPreferences
            Return _General
        End Get
        Set(ByVal value As GeneralPreferences)
            _General = value
        End Set
    End Property

    Public Property DoNotShowMessages As DoNotShowMessagesPreferences
        Get
            If _DoNotShowMessages Is Nothing Then _DoNotShowMessages = New DoNotShowMessagesPreferences
            Return _DoNotShowMessages
        End Get
        Set(ByVal value As DoNotShowMessagesPreferences)
            _DoNotShowMessages = value
        End Set
    End Property

    Public Sub New()

    End Sub

    Private Sub New(ByVal userID As Integer)
        _UserID = userID
    End Sub

    Public Class GeneralPreferences
        Public Property ShowSplashScreen As Boolean
        Public Property UseAlternateColorScheme As Boolean
        Public Property NumberOfItemsInGrids As Integer
    End Class

    Public Class DoNotShowMessagesPreferences
        Public Property HideNoResultsMessage As Boolean
        Public Property HideCloseConfirmation As Boolean
    End Class
End Class

This gives us two nested classes that store our values in nice groups, held in a class that allows us to access those nested classes and set the values.  Now we want to create an XML document that we can save and load in SQL.  So saving would be something like:

    Public Sub SaveXML()
        Dim doc As Xml.XmlDocument
        Dim parentNode As XmlNode
        Dim childNode As XmlNode
        Dim parms As New Generic.List(Of SqlClient.SqlParameter)

        doc = New XmlDocument
        doc.LoadXml("<UserPreferences />")

        parentNode = doc.DocumentElement.AppendChild(doc.CreateNode(XmlNodeType.Element, "General", doc.NamespaceURI))

        childNode = parentNode.AppendChild(doc.CreateNode(XmlNodeType.Element, "ShowSplashScreen", doc.NamespaceURI))
        childNode.InnerText = Me.General.ShowSplashScreen.ToString

        childNode = parentNode.AppendChild(doc.CreateNode(XmlNodeType.Element, "UseAlternateColorScheme", doc.NamespaceURI))
        childNode.InnerText = Me.General.UseAlternateColorScheme.ToString

        childNode = parentNode.AppendChild(doc.CreateNode(XmlNodeType.Element, "NumberOfItemsInGrids", doc.NamespaceURI))
        childNode.InnerText = Me.General.NumberOfItemsInGrids.ToString

        parentNode = doc.DocumentElement.AppendChild(doc.CreateNode(XmlNodeType.Element, "DoNotShowMessages", doc.NamespaceURI))

        childNode = parentNode.AppendChild(doc.CreateNode(XmlNodeType.Element, "HideNoResultsMessage", doc.NamespaceURI))
        childNode.InnerText = Me.DoNotShowMessages.HideNoResultsMessage.ToString

        childNode = parentNode.AppendChild(doc.CreateNode(XmlNodeType.Element, "HideCloseConfirmation", doc.NamespaceURI))
        childNode.InnerText = Me.DoNotShowMessages.HideCloseConfirmation.ToString

        With parms
            .Add(New SqlClient.SqlParameter("@UserID", _UserID))
            .Add(New SqlClient.SqlParameter("@Preferences", doc.OuterXml))
        End With

        SqlHelper.ExecuteNonQuery(CONN_STRING, CommandType.Text, _
            "insert userpreferences(userid,preferences) values(@UserID,@Preferences)", parms.ToArray)

    End Sub

and would give us an XML document saved to the server like:

<UserPreferences>
  <General>
    <ShowSplashScreen>False</ShowSplashScreen>
    <UseAlternateColorScheme>False</UseAlternateColorScheme>
    <NumberOfItemsInGrids>0</NumberOfItemsInGrids>
  </General>
  <DoNotShowMessages>
    <HideNoResultsMessage>False</HideNoResultsMessage>
    <HideCloseConfirmation>False</HideCloseConfirmation>
  </DoNotShowMessages>
</UserPreferences>
We’d also need a corresponding LoadXML method to read and parse out the XML and set the internal values.  That seems pretty good and it prevents us from having to modify the database table every time we add a new preference.

But, in a more critical and more annoying way, we have to modify the SaveXML and LoadXML methods every time we add a new preference.  Not only that, but we have to compensate for previously-saved versions of the XML, saved before new preferences were added, otherwise we’ll get errors when we try to read nodes that don’t exist.  This is a path of misery and spaghetti.

Don’t be discouraged.  There is an easy way.  In fact, it’s so easy it’s nearly unbelievable.  You add this code once and never change it.  Add all the preferences/properties you want and the code works with whatever is available.  It uses the XmlSerializer to do all the work.

First, all the classes need to be marked as <Serializable()>.  Then we create a method to instantiate the UserPreferences class:

    Shared Function GetInstance(ByVal userID As Integer) As UserPreferences
        Dim up As UserPreferences
        Dim ser As XmlSerializer
        Dim dt As DataTable
        Dim dr As DataRow

        dt = SqlHelper.ExecuteDataset(CONN_STRING, CommandType.Text, _
            "select preferences from userpreferences where userid=" & userID).Tables(0)

        If dt.Rows.Count <> 0 Then
            dr = dt.Rows(0)
            ser = New XmlSerializer(GetType(UserPreferences))
            up = TryCast(ser.Deserialize(New IO.StringReader(CStr(dr("Preferences")))), UserPreferences)
            up._UserID = userID
        Else
            up = New UserPreferences(userID)
        End If


        Return up

    End Function

Then we add a public method to save the class:

    Public Sub Save()
        Dim ser As XmlSerializer
        Dim xmlData As IO.StringWriter
        Dim parms As New Generic.List(Of SqlClient.SqlParameter)

        ser = New XmlSerializer(GetType(UserPreferences))
        xmlData = New IO.StringWriter
        ser.Serialize(xmlData, Me)

        With parms
            .Add(New SqlClient.SqlParameter("@UserID", _UserID))
            .Add(New SqlClient.SqlParameter("@Preferences", xmlData.GetStringBuilder.ToString))
        End With

        SqlHelper.ExecuteNonQuery(CONN_STRING, CommandType.Text, _
            "insert userpreferences(userid,preferences) values(@UserID,@Preferences)", parms.ToArray)

    End Sub

Seriously, that’s it.  Three lines to turn an object into XML.  Two lines to create an object from XML.  Missing and extraneous properties get skipped with no errors, which lets you change the classes whenever and however you need.  And the structure of the XML is the same as shown previously, with child classes/properties as nested elements.
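The same forgiving behavior is available in most serializers, not just XmlSerializer.  As a hedged illustration in Python (the names and fields here are made up for the example), the trick is defaults for missing values and filtering out unknown ones:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class GeneralPreferences:
    show_splash_screen: bool = False          # defaults cover older payloads
    use_alternate_color_scheme: bool = False
    number_of_items_in_grids: int = 0

def load_prefs(text):
    """Deserialize stored JSON, ignoring unknown keys and defaulting missing ones."""
    data = json.loads(text)
    known = {f.name for f in fields(GeneralPreferences)}
    return GeneralPreferences(**{k: v for k, v in data.items() if k in known})

def save_prefs(prefs):
    """Serialize the whole object; new fields come along automatically."""
    return json.dumps(prefs.__dict__)
```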

In a test app, you can load, change, and save preferences.

With the preferences in an object, the UI code is extremely simple – one of the great benefits to using objects:

    Dim _prefs As UserPreferences

    Private Sub cmdLoad_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdLoad.Click
        _prefs = UserPreferences.GetInstance(CInt(txtUserID.Text))

        With chkGeneralPreferences
            .SetItemChecked(0, _prefs.General.ShowSplashScreen)
            .SetItemChecked(1, _prefs.General.UseAlternateColorScheme)
        End With

        txtNumberOfItemsInGrid.Value = _prefs.General.NumberOfItemsInGrids

        With chkDoNotShowMessages
            .SetItemChecked(0, _prefs.DoNotShowMessages.HideNoResultsMessage)
            .SetItemChecked(1, _prefs.DoNotShowMessages.HideCloseConfirmation)
        End With

    End Sub

    Private Sub cmdSave_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdSave.Click
        With _prefs.General
            .ShowSplashScreen = chkGeneralPreferences.GetItemCheckState(0) = CheckState.Checked
            .UseAlternateColorScheme = chkGeneralPreferences.GetItemCheckState(1) = CheckState.Checked
        End With

        _prefs.General.NumberOfItemsInGrids = CInt(txtNumberOfItemsInGrid.Value)

        With _prefs.DoNotShowMessages
            .HideNoResultsMessage = chkDoNotShowMessages.GetItemCheckState(0) = CheckState.Checked
            .HideCloseConfirmation = chkDoNotShowMessages.GetItemCheckState(1) = CheckState.Checked
        End With

        _prefs.Save()

    End Sub

Save That Email. As an Email.

Applications typically send a lot of emails.  At least, they should, since it’s a good, archive-able method for communication and confirmation.  Archive-able for the receiver, sure, but what about the sender – the application?  You have a couple of options: you can parse out all the fields of the email and stick them in a database or you can CC or BCC the email to a mailbox for archival.  What if you want to store the actual email that was sent?  You want the actual EML file.

Wouldn’t it be nice if the MailMessage object had a .SaveAs method?  It doesn’t.  Well then, wouldn’t it be nice if the SmtpClient object had a .SaveTo property?  It does, kind of.  And by using those properties, we can capture the actual EML file that is typically sent to a SMTP server for delivery.  Once we have that EML file data, we can save it to a file or to a database or wherever.

Quite simply, you need to set two properties on the SmtpClient object: DeliveryMethod and PickupDirectoryLocation.  This tells the SmtpClient to write the EML to a specified folder and the mail server will monitor that folder and pull it from there.  This is the code:

    Private Function GetEmailBytes(ByVal eml As Mail.MailMessage) As Byte()
        Dim smtp As Mail.SmtpClient
        Dim customFolderName As String
        Dim fileBytes() As Byte

        customFolderName = IO.Path.Combine(My.Computer.FileSystem.SpecialDirectories.Temp, Guid.NewGuid.ToString)
        IO.Directory.CreateDirectory(customFolderName)

        smtp = New Mail.SmtpClient
        With smtp
            .Host = "localhost"
            .DeliveryMethod = Mail.SmtpDeliveryMethod.SpecifiedPickupDirectory
            .PickupDirectoryLocation = customFolderName
        End With

        ' "sending" the message writes the EML file into the pickup folder
        smtp.Send(eml)

        fileBytes = IO.File.ReadAllBytes(New IO.DirectoryInfo(customFolderName).GetFiles.First.FullName)

        IO.Directory.Delete(customFolderName, True)

        Return fileBytes

    End Function

To explain the extra legwork involving directories, the SmtpClient writes the EML file with a GUID as a filename.  This prevents emails from overwriting each other.  In a multi-user environment though, how could we know which file was just written so we read the right file? So to be sure what we’re reading is our email, we create a unique folder to write the EML to and we know there will be only one file in there to read.
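As an aside, newer frameworks expose this capability directly.  In Python, for example, the email module hands you the raw EML bytes without any pickup-directory workaround (the addresses below are just examples):

```python
from email.message import EmailMessage

def get_email_bytes(to_addr, from_addr, subject, body):
    """Build a message and return its raw EML bytes, no folder tricks needed."""
    eml = EmailMessage()
    eml["To"] = to_addr
    eml["From"] = from_addr
    eml["Subject"] = subject
    eml.set_content(body)
    return eml.as_bytes()
```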

The GetEmailBytes method just returns the bytes of an EML file.  That’s the most flexible way to work with the data.  If you want to save that to another place with another name, just use IO.File.WriteAllBytes, like so:

    Private Sub SaveMessage()
        Dim eml As Mail.MailMessage
        Dim bytes() As Byte

        eml = New Mail.MailMessage
        With eml
            .To.Add(New Mail.MailAddress("anyone@home.com"))
            .From = New Mail.MailAddress("nobody@home.com")
            .Subject = "test save"
            .Body = "this is the body"
        End With

        bytes = GetEmailBytes(eml)

        IO.File.WriteAllBytes(IO.Path.Combine(My.Computer.FileSystem.SpecialDirectories.Desktop, "output.eml"), bytes)


    End Sub