IISExpress – Error description: Access is denied. (0x80070005)

Today, I spent my entire work day trying to find the solution to this error.

When I would launch a web project from Visual Studio, IIS Express would fail to load.  The error reported was:

Failed to register URL "http://localhost:10000/" for site "Portal3" application "/". Error description: Access is denied. (0x80070005)

I started down the obvious track of checking permissions.  IISExpress would operate just fine when run as Administrator, but not as my normal user.  I was an administrator on the local machine, so that didn’t make any sense.

The only thing I had done that morning was use Disk Cleanup to delete all my temp files and archived error reports.  I checked %temp%\iisexpress.  I checked my *.config files in Documents\IISExpress over and over.  I restored configs from backups.  I read many potential solutions, some as crazy as not being able to use a user name with “bg” in it.

I tried an older iteration of the web application.  It worked.  I tried other web projects in the solution.  They worked.  At this point, I was the only developer in my team who had a single web project that wouldn’t launch.

Then, late as usual, I finally remembered the Windows Event Viewer.  IISExpress was logging these two events:

Event ID 2269: The worker process for app pool 'Clr4IntegratedAppPool', PID='7536', failed to initialize the http.sys communication when asked to start processing http requests and therefore will be considered ill by W3SVC and terminated.  The data field contains the error number.

Event ID 2276: The worker process failed to initialize correctly and therefore could not be started.  The data is the error.

That led me down a totally different troubleshooting path.  While doing so, I set up a new website in IIS on port 8085.  Oddly, my new websites and application pools didn't refresh after being created, which led me to think there was even more wrong with my workstation.  An IISRESET resolved that problem, and I was able to browse to the web project using IIS.

So IIS would work on port 8085, but IISExpress would not work on port 10000.  And IISExpress would work for any other project on seemingly any other port.

In a comment on one of the web posts that I read, I saw this command:

netsh http add urlacl url=http://localhost:10000/ user=everyone

I'd only ever used NETSH maybe once or twice for some obscure reason, but the inclusion of "ACL" in the command was encouraging.  Amazingly, the command worked!  Although I was happy, I was also disappointed that I couldn't see what the value had been previously, so I couldn't find the root cause.

So I ran:

netsh http show urlacl

And that displayed a bunch of entries that included these as well (User and SDDL changed):

Reserved URL            : http://127.0.0.1:10000/
    User: Me 
        Listen: Yes
        Delegate: No
        SDDL: D:(A;;GX;;;S-1-5-21-)

Reserved URL            : http://127.0.0.1:10001/
    User: Me
        Listen: Yes
        Delegate: No
        SDDL: D:(A;;GX;;;S-1-5-21-)

Reserved URL            : http://127.0.0.1:10002/
    User: Me
        Listen: Yes
        Delegate: No
        SDDL: D:(A;;GX;;;S-1-5-21-)

Where did these come from?  I ran netsh http delete urlacl url=http://localhost:10000/ to remove the entry I had just added, and confirmed that IISExpress stopped working again.  Then I ran netsh http delete urlacl url=http://127.0.0.1:10000/ and IISExpress started working again.

Something had added ACL entries for ports 10000-10002 that were conflicting with my web project, which was trying to run on port 10000 under IISExpress.  How did they get there?  I looked in Add/Remove Programs to get a clue as to what could have added these entries.  Who's to blame?  The Azure SDK.  The Azure Storage Emulator uses ports 10000-10002 and creates reservations for them.  It was installed during my attempts at getting command-line Web Publishing to work.

I never would have known anything about this unless I had read the Azure SDK documentation.  The error message I was given said nothing about port conflicts.  Nothing led me down that path at all.  And it's entirely possible no one else would have this problem unless they were running IISExpress on port 10000.

But the important takeaway from this is that NETSH will allow you to create reservations for port numbers that may conflict with other applications.
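If you ever hit something similar, a quick sanity check is to list the existing URL reservations and filter for the port in question (10000 here is just the port from my case):

netsh http show urlacl | findstr ":10000"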

MSBuild error MSB4057: The target “Package” does not exist in the project.

Every three weeks we release an update to our websites and web services.  To make this release easier, I created a batch file that would build the projects and deploy each one to our four web servers.  The last few times I tried this, my batch file failed running this command:

c:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe C:\Projects\Portal\Portal1.vbproj /nologo /verbosity:minimal /t:Package /p:Configuration=Release

C:\Projects\Portal\Portal1.vbproj : error MSB4057: The target "Package" does not exist in the project.

So then each time, I would have to manually deploy the web sites with One-Click Publish.  Today, I decided to resolve this problem.

Because the Internet has a long memory, and because the way deployment works in Visual Studio has changed frequently and recently, it was very difficult to determine the current best way to automate the task I wanted.

The first promising solution was to install a project called "CommunityTasks" and import it into the project file.  Did that.  Didn't work.  Read further and learned I needed to install the Azure SDK (this would haunt me for a long time).  Still, none of the example command lines worked.

Then I learned that some publishing settings had been moved from the project file to the publishing profiles.  Fine, I could handle that.  I created a new publishing profile that created a package.  However, I couldn’t figure out how to execute that publishing profile from the command line.
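For reference, the newer web-publish targets do support running a saved profile straight from MSBuild with something roughly like the line below (the profile name is a placeholder for whatever you saved in Visual Studio), though I never got it working at the time:

c:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe C:\Projects\Portal\Portal1.vbproj /p:DeployOnBuild=true /p:PublishProfile=MyPackageProfile /p:Configuration=Release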

In the end, I decided I would create the deployment packages manually in VS with One-Click Publish, then execute a batch file that would run each package's deploy.cmd against each server.  This would actually result in a faster deployment because I wouldn't have to wait for each project to build in the deployment batch file.  And by using start with the /k switch, I could launch multiple deployments at once, each in its own window.  For example:

start cmd /k "Portal1.bat"
start cmd /k "Portal2.bat"
start cmd /k "Portal3.bat"
start cmd /k "Portal4.bat"

And each project's batch file would deploy it to each server:

c:\Build\Package\portal1.deploy.cmd /Y /M:http://server01/MSDeployAgentService /U:deploy /P:deploy -enableRule:DoNotDeleteRule
c:\Build\Package\portal1.deploy.cmd /Y /M:http://server02/MSDeployAgentService /U:deploy /P:deploy -enableRule:DoNotDeleteRule
c:\Build\Package\portal1.deploy.cmd /Y /M:http://server03/MSDeployAgentService /U:deploy /P:deploy -enableRule:DoNotDeleteRule
c:\Build\Package\portal1.deploy.cmd /Y /M:http://server04/MSDeployAgentService /U:deploy /P:deploy -enableRule:DoNotDeleteRule

Still Not Giving In

I’m still using MS Money.  And I’ve come across a couple of instances of it beginning to lose compatibility with modern systems.  So now, I’ve actually started creating workarounds for them.

I've used a variety of online accounts over the years: HSBC, Capital One (the non-360 variant), Sallie Mae, and most recently, Ally.  At this point, I've decided Ally is getting all my business, and I've been in the process of moving accounts into new Ally subaccounts, which their site makes very easy.  Just today, I discovered the transaction download feature.  There's no MS Money OFX option, but I don't think Money was still around when Ally came on the scene.  Anyway, there is a Quicken download, so that is what I use.

MS Money is awesome in that it supports QFX files; however, the standard format of the file must have moved on over time, so now Money chokes when it tries to process the file.  After a bunch of trial and error, I discovered that the cause of the error is a node in each transaction entry for the check number: <CHECKNUM>0</CHECKNUM>.  Once you strip that node out, the file imports just fine.

In another case, my 401k provider, Transamerica, recently revamped their transaction download and their QFX files have a different problem.  The file headers look like:

OFXHEADER: 100
DATA: OFXSGML
VERSION: 102
SECURITY: NONE
ENCODING: USASCII
CHARSET: 1252
COMPRESSION: NONE
OLDFILEUID: NONE
NEWFILEUID: NONE

But there is a space after each colon, which causes MS Money to report that the file is corrupt.  The headers should look like:

OFXHEADER:100
DATA:OFXSGML
VERSION:102
SECURITY:NONE
ENCODING:USASCII
CHARSET:1252
COMPRESSION:NONE
OLDFILEUID:NONE
NEWFILEUID:NONE

So I made a script that will alter the QFX file and then launch the Money importer.  All you have to do is drag the QFX file onto the VBS file and you’re good to go.  If you want to get clever, you can put the script in your SendTo folder or map it as a default application.

Without further ado, this is the content of the script:

dim fso, f, s, shell

' Read the entire QFX file that was dropped onto the script.
set fso = CreateObject("Scripting.FileSystemObject")

set f = fso.OpenTextFile(WScript.Arguments(0), 1)   ' 1 = ForReading
s = f.ReadAll
f.Close
set f = nothing

' Strip the <CHECKNUM>0</CHECKNUM> nodes and collapse every ": " to ":" (which fixes the header lines),
' then write the cleaned text back over the original file.
set f = fso.OpenTextFile(WScript.Arguments(0), 2)   ' 2 = ForWriting
f.Write Replace(Replace(s, "<CHECKNUM>0</CHECKNUM>", ""), ": ", ":")
f.Close
set f = nothing

set fso = nothing

' Hand the cleaned file off to the MS Money importer (path quoted in case it contains spaces).
Set shell = CreateObject("Shell.Application")
shell.ShellExecute "C:\Program Files (x86)\Microsoft Money Plus\MNYCoreFiles\mnyimprt.exe", """" & WScript.Arguments(0) & """"
set shell = nothing

And then, you can import QFX files from Ally or Transamerica (and maybe some others that have the same problems) into MS Money without any errors.

Two-Factor Authentication Primer

I recently implemented two-factor authentication in a web app, and since it was a new concept for me, I thought it would be good to explain the process at the highest conceptual level.  As with a lot of new things, there's some terminology to learn and a need to understand how all the pieces fit together.

First, what does it take to integrate this with an existing profile login?  You need a new database field and a bit of extra code for opting in and out of two-factor authentication.  Ideally, you'll also want a library for generating a QR code.

Before I get too much into it, these are some of the elements of the process.  There are three pieces of data involved:

  • Shared Secret: This element is stored in the database with the user profile and is never exposed outside your application.
  • Secret Key: This is an encoded version of the Shared Secret (typically Base32).  It is given to the user by your application, and the user enters it into their authenticator application.
  • Code: The numeric value generated by the authenticator application.  This changes on a fixed interval, typically every 30 seconds.

In brief, your application and the authenticator application both use the current time plus the same secret to generate a Code.  If the two Codes match, the user is authenticated.
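To make that concrete, here is a rough sketch of the code-generation half in VB.NET.  This is not the exact code from my app; the function name is made up, the 30-second window and six digits are just the common RFC 6238 defaults, and the Shared Secret is assumed to already be available as a byte array.

Imports System
Imports System.Security.Cryptography

Module TotpSketch

    ' Generate the 6-digit code for the current 30-second time window (RFC 6238 defaults).
    Function GenerateCode(sharedSecret As Byte(), utcNow As DateTime) As String
        ' Number of 30-second intervals since the Unix epoch.
        Dim epoch As New DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)
        Dim counter As Long = CLng(Math.Floor((utcNow - epoch).TotalSeconds / 30))

        ' The counter goes into the HMAC as an 8-byte big-endian value.
        Dim counterBytes As Byte() = BitConverter.GetBytes(counter)
        If BitConverter.IsLittleEndian Then Array.Reverse(counterBytes)

        ' HMAC-SHA1 of the counter, keyed with the Shared Secret.
        Dim hash As Byte()
        Using hmac As New HMACSHA1(sharedSecret)
            hash = hmac.ComputeHash(counterBytes)
        End Using

        ' Dynamic truncation: read four bytes starting at the offset held in the last nibble.
        Dim offset As Integer = hash(hash.Length - 1) And &HF
        Dim binary As Integer = ((CInt(hash(offset)) And &H7F) << 24) Or
                                (CInt(hash(offset + 1)) << 16) Or
                                (CInt(hash(offset + 2)) << 8) Or
                                CInt(hash(offset + 3))

        ' Reduce to six digits, keeping any leading zeros.
        Return (binary Mod 1000000).ToString("D6")
    End Function

End Module

Verification is then just generating the code on the server and comparing it to what the user typed; most implementations also accept the window immediately before and after the current one to allow for clock drift.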

To implement this, you would modify your user profile page to provide a button to enable two-factor.  When the button is clicked, you create a random Shared Secret and save it to the user's profile.  You use that Shared Secret to generate and return the Secret Key.  The user puts that Secret Key in their authenticator app and the opt-in is complete.

When the user logs in to your application, if they have a Shared Secret set in their profile, they are prompted to enter the Code from their authenticator app.  Your application compares that Code to the Code it generates itself, using the Secret Key (built from the Shared Secret).  If they match, the user is logged in.

It really is simple.  The only thing that isn't clear, but can be found with some moderate Internet searching, is the URL to embed in the QR code.  That URL is otpauth://totp/{0}?secret={1}, where {0} is the name of the profile to use (your application, the user's username, or both) and {1} is the Secret Key.  Authenticator apps allow manual entry of Secret Keys, so if you don't provide a QR code, it's still workable, just a bit tedious.
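If you want to build that QR-code payload in code, it's just string formatting.  This helper is hypothetical (the name is mine, and escaping the label is my own precaution):

' Builds the otpauth URL that gets rendered into the QR code.
Function BuildOtpAuthUrl(label As String, secretKey As String) As String
    Return String.Format("otpauth://totp/{0}?secret={1}", Uri.EscapeDataString(label), secretKey)
End Function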

Some of the other pieces you'd need are functions to reset or clear the Shared Secret if the user wants to opt out.  This is simple user account maintenance.  With a simple implementation, you could blank out the Shared Secret on a "forgot password" action.  With more sensitive data, you may want a second code to allow a password reset.  The big concern is users who have lost their phone or wiped out their authenticator application entries.

Because two-factor authentication is so simple and has such a low impact on existing user profile data structures (relative to OAuth), and because it can be opt-in, it's really a no-brainer to add to your applications.

PHP Hacked Site

While doing a search for something innocuous, I found a search result that was very out of place.  The domain was nothing related to what I was searching for, and the text abstract was, to say the least, spammy.  Although I know you’re not supposed to click things like that, I figure I’m pretty secure, so I clicked it.

I was immediately shown a page that said my download would start in 0 seconds, then I was prompted to download an EXE file.  Uh huh.  I browsed to the root domain and it really was a legitimate website.  So now, I wanted to figure out how this happened.  I navigated to the hacked page and I didn’t get any download prompt.  I went back to the search results and clicked again – I got the download prompt.  Hmmm.  More attempts and sometimes the site would send me to a dead page.


I looked very hard at the source code and couldn't find the script that was being injected, but I could see there was a comment <!--counter--> that was getting replaced with the download redirect.  I did a site search on Bing and found many, many, many pages on their website that were suspect.  I also saw that the site's actual pages were PHP.

So I had to conclude that the website was running a compromised PHP installation, and with that compromised, the server could do anything it wanted, including checking referrers and replacing tags in the source files.  The best I could do was email the owners, let them know they had been hacked, and tell them they needed to have their webmaster fix it.

Upon further research, it looks like it was a Joomla exploit from a couple of years ago.  I passed that info along and hopefully the website owners can make the updates needed (and clean up all the extra pages).

SSRS ReportViewer NullReferenceException on Dispose

I recently assisted on troubleshooting an error in a utility application where an exception was being thrown on the dispose of a Microsoft.Reporting.WebForms.ReportViewer.  The environmental conditions were pretty specific, so it’s possible you’d never see something like this in your environment.  But if you do, here’s how you can work around it.

The specific condition is that we have a shared library of code for both desktop and web applications.  One of the functions in that library takes some parameters for an SSRS report and returns a byte array for a rendered PDF of the report.  Because the library was initially used exclusively by the website, the WebForms version of the ReportViewer was used.  As time went on, the library was used by desktop apps and Windows services.  That's when the trouble began.

So, if you are using a WebForms.ReportViewer in a desktop application, you may get this exception when disposing the instance.  Digging into the decompiled code for the ReportViewer control suggested it was because there was no HttpContext available.  For us, the long-term fix was clear: use the WinForms version of the ReportViewer.  In the short term though, adding this line of code resolved the error:

If HttpContext.Current Is Nothing Then HttpContext.Current = New HttpContext(New HttpRequest(IO.Path.GetRandomFileName, "http://www.google.com", ""), New HttpResponse(IO.TextWriter.Null))

This created an HttpContext where there was none before, and the ReportViewer instance was able to be disposed without an error.

In Defense of Whitespace

There is a trend that I’ve been seeing recently that I find somewhat disturbing.  It primarily manifests itself with C# programmers, who also tend to be really indignant when questioned about it.  The issue is whitespace in source code.
When I see a big block of C# code and all the text is crammed into as little space as possible, it is very difficult to read.  This means that it takes longer to figure out what the code does.  This means that it takes me longer to do my job, making me more expensive to my employer or client.  Where is the efficiency benefit in that?  Further, although unrelated to this post, there is the heavy use of shortcut expressions in C#, which the developers praise as efficient and elegant, but which make the code barely intelligible.
Whenever I ask one of the coders about this, their answer is that whitespace is for people and compilers don’t need whitespace.  That response baffles me because it sounds like they are arguing my point.  The whole point of source code is to be human-readable.  But, somehow in their mind, it sounds like whitespace slows the application down.
Years ago, I read an excellent book, Developing User Interfaces for Microsoft Windows.  Although it's rather outdated now, it had a lot of good advice in it, and one of the tips was to make use of whitespace for code clarity.  Up until that time, I didn't pay much attention to blank lines, and I had a different indenting scheme than what was the standard.  But then I changed both of these and my code became immediately more readable.
Although it maybe sounds a bit obvious, I demand whitespace in my code because I am a writer and an avid reader.  I need paragraph breaks to indicate to me when a topic is changing or a new thought is starting.  If you treat writing a program like writing a story, your code will be much easier to understand; and to echo the C# developers, the compiler won't care.
Aside from the line breaks between methods and between logical code blocks within methods, I like to put all my variable declarations at the beginning of the method.  It introduces you to all the characters in the chapter and gives you an idea of how complex the plotline of the chapter is.  This is also out of fashion with the current declare-just-before-use style.
One of my other structural designs that goes against the current fashion is to put my properties at the beginning of the class instead of at the end.  This is the same structure as a UML diagram, so I’m not sure why that design practice changed.  With methods, I try to put all my event handlers first, then order the methods by their access level (public, friend, protected, private).  Finally, I put methods that are called by other methods later in the class, so if you need to reference a called method, you almost always scroll down instead of up to find the called method.  This is made easier since private methods are last in the class.

Reading this post without whitespace is what it is like to read source code without whitespace.  It sucks.

TFS 2010 to TFS 2012 Express

Last weekend I did a full upgrade of my computers to Windows 8.  Along with that came upgrades to Visual Studio 2012, SQL Server 2012, and Team Foundation Server 2012.  My plan was to have a completely fresh development environment, with no legacy 2010, 2008, or 2005 versions.

This was a good plan, but I had one reservation.  Earlier this year, I converted from Visual SourceSafe to TFS 2010, and now I was going to have to upgrade again to TFS 2012.  Everyone understands how to manage a big file system structure like VSS, but TFS might seem like a mystery.  Actually, it's much easier: it's just two SQL databases.  So I backed up those databases and did my full, fresh install.

Now I was ready to install TFS 2012, but what about my data?  I'd seen many blogs and articles describing the upgrade process.  Every one of them said to uninstall TFS 2010, then install TFS 2012.  But I didn't want any TFS 2010 bits on the new install.  So, in order to get a database instance to restore my TFS 2010 data into, I installed TFS 2012 and then uninstalled it.  Then I restored my TFS 2010 data into the SQLEXPRESS instance TFS had created.  Finally, I reinstalled TFS 2012, selecting the "Upgrade" option, which converted my TFS 2010 data to the new schema.

Sounds like a roundabout solution but it worked without any problems and without resorting to older product installs.

Adventures In Installing

A pretty common feature in applications is the ability to automatically update when a new version is available.  Of course many designs have been created to address this, from having pre-launchers, to ClickOnce, to web services, to notifications, and on and on.  There’s plenty of ways to solve the problem, each with its own set of benefits and drawbacks.

The design I recently faced was one where the app would launch, check the database for a new version and, if one existed, display a notification, launch the installer, and exit the application.  A couple of things were less than ideal with this.  First, the installer was the standard interactive install, so you got used to hitting Enter three times to jump through the screens.  Second, after the install, you'd need to relaunch the application.  So I decided to try to resolve these issues.

The first step, I thought, would be easy.  I would launch the install using MSIEXEC and use the /QR switch to make the install non-interactive.  This seemed really nice until a bug surfaced where the /QR switch would not perform an uninstall of the previous version, resulting in multiple entries in the Add/Remove Programs list.  This bug was reported back in 2010, and with VS 2012 coming out, I guess it will never be resolved.  After a lot of trial and error (there seems to be no workaround for the problem on the Internet), I found that the /PASSIVE switch will uninstall the old version, so that became my new plan.  However, with /PASSIVE, there is no notification that the install is complete.  That's fine for now, because I was going to make the app relaunch after completion.
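In other words, the launcher just needed to kick off something along these lines (the MSI name is only a placeholder):

msiexec /i PortalClient.msi /passive

The /passive switch shows only a progress bar, and, unlike /qr, it still removed the previous version first.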

After a lot of study on custom actions, I got the application to launch after the install.  But then I had a problem where the application was being launched as SYSTEM instead of the current user, which made the database access fail.  More trial and error (there doesn't seem to be any info on this issue either), and I found that when you choose the InstallAllUsers=true option, MSIEXEC runs as SYSTEM, which makes sense since it needs access to write to the all-users locations.  But if you set InstallAllUsers=false, then MSIEXEC runs as the current user, which in turn launches the application as the current user, and all is well.  That is, if you're willing to give up the all-users install.

Spam Gallery – Diliver Your Package

A colleague and I were talking at lunch recently about spam and how clever spam and phishing attempts are getting.  But still, there is so far to go.  One of the biggest failures of spammers is their sheer stupidity.  If they're going to use a template from a well-known company, why do they insist on changing the wording of the email?  These people don't have a grasp of American English, much less of what professional business correspondence looks like.

[screenshot of the spam email]

Starting with the misspelling in the subject, the horrible grammar continues throughout the message.  The point of the email, though, is to inform me that one of their trucks "is burned tonight."  This is not a typical business email.

And this spam email suffers from the same problem as every other one.  How did you get my email address? How do you know the package is mine?  I have to assume that people believe that everyone just knows your email address somehow.  Anyone sending you a package seems to implicitly know your email, since UPS and FedEx are sending me package delivery failure email notices all the time.