Mar 06

Configuring SQL Server Kerberos for Double-Hop Authentication

The Requirement

We have one database stored on SQL Server (A), which has some synonyms to tables in SQL Server (B).  We want our .NET 4.5 application (running under IIS) to invoke some queries to move data from tables in SQL Server (A) to SQL Server (B), using the synonyms (so the web application doesn’t need to know about SQL Server (B)).


  • Windows Server 2012 R2

  • SQL Server 2012, services running as a domain service account

  • IIS Application Pool Identity running as a domain service account

  • SQL Server (A) has a linked server to SQL Server (B)

  • Both SQL Servers running named instances

The Problem

When IIS talks to SQL Server (A), it does so using its domain service account (as that is the account running the AppPool).  That account has been granted sufficient privileges over the database on SQL Server (A), so it can happily perform operations on it.

When the application works on tables residing in a database on SQL Server (B), through the synonyms in SQL Server (A), SQL Server (A) has to run commands against a different server – and it has no user identity to pass with the request.  This results in the following exception being thrown from SQL Server (B):

Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

SQL Server (A) has not been granted delegation rights to submit commands to SQL Server (B) using the IIS AppPool’s Identity.

The Solution (Overview)

The solution is to use Kerberos authentication throughout the flow.  When using Windows Authentication in the connection between the IIS application and SQL Server, as indicated by a connection string entry similar to the following:

Data Source=.;initial catalog=MyDb;integrated security=SSPI;

IIS will first attempt Kerberos authentication (if it can), otherwise it will fall back to NTLM authentication (this is seamless to the client, but it can be seen if you run a network trace).

If Kerberos authentication succeeds between the IIS application and SQL Server (A), then provided SQL Server (A) has been given delegation rights over the IIS AppPool Identity account, it can make a subsequent request to SQL Server (B) (when it needs to) using the IIS AppPool Identity account, rather than NT Authority\ANONYMOUS LOGON.

There are six steps in getting this to work:

  1. Configure Service Principal Names (SPNs) for the appropriate services / accounts for SQL and IIS

  2. Check that IIS is authenticating to SQL Server (A) using Kerberos

  3. Grant SQL Server (A) delegation rights for the IIS AppPool Identity account

  4. Grant that account permissions on the SQL Server (B) database as appropriate

  5. Enable DTC Options

  6. Tweak remote queries

The Solution (Steps)

Step 1 – Configure SPNs


  • The domain account that the SQL Server services are running under needs an SPN for the MSSQL service (and several variations of each one)

  • The domain account that the IIS AppPool is running under needs an SPN for each IIS website that will be connecting to the SQL Server


Default instance:

setspn -a domain\sqlsvc-account MSSQLSvc/<host>
setspn -a domain\sqlsvc-account MSSQLSvc/<FQDN>
setspn -a domain\sqlsvc-account MSSQLSvc/<host>:1433
setspn -a domain\sqlsvc-account MSSQLSvc/<FQDN>:1433


Named instance:

setspn -a domain\sqlsvc-account MSSQLSvc/<host>:<instanceName>
setspn -a domain\sqlsvc-account MSSQLSvc/<host>:<TCPPORT>
setspn -a domain\sqlsvc-account MSSQLSvc/<FQDN>:<instanceName>
setspn -a domain\sqlsvc-account MSSQLSvc/<FQDN>:<TCPPORT>


IIS AppPool account:

setspn -a domain\apppool-account http/mywebsitehost
setspn -a domain\apppool-account http/<FQDN of website>

The SQL SPNs will be created automatically if (and only if) the account the service runs under has permission to create them (which it attempts to do on startup).  In most scenarios this will not be the case, so you can add them manually as above.
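To avoid typing each variation by hand, the commands for a named instance can be generated from a few variables.  A minimal sketch – the account, host, instance, and port values below are illustrative placeholders, so substitute your own:

```shell
#!/bin/sh
# Emit the setspn commands for a SQL Server named instance.
# All values here are placeholders, not real accounts or hosts.
gen_spns() {
    account='domain\sqlsvc-account'
    instance=MYINSTANCE
    port=1433
    # One pair of SPNs for the short host name, one for the FQDN.
    for target in sqlhost sqlhost.corp.example.com; do
        printf 'setspn -a %s MSSQLSvc/%s:%s\n' "$account" "$target" "$instance"
        printf 'setspn -a %s MSSQLSvc/%s:%s\n' "$account" "$target" "$port"
    done
}

gen_spns
```

Pipe the output into a file, sanity-check it, and run it on a domain-joined machine with sufficient rights.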

Make sure you check for duplicate SPNs (any duplicates will stop Kerberos Authentication from working):

setspn -x

See the MSDN documentation on registering SPNs for Kerberos connections for more details.


By default SQL Server NAMED INSTANCES allocate a TCP port dynamically, so creating the SPN by hand is tricky.  There are two options:

  1. Set a static port (recommended when using clusters)

  2. Grant the SQL service account permissions to create the SPNs itself when the service starts up

The latter option requires an edit via AdsiEdit.msc as follows:

  • Expand the domain you are interested in

  • Locate the OU where the service account resides

  • Right click on the CN=<service account name> and click Properties

  • Click on the Security tab and click Advanced

  • In the Advanced Security Settings dialog box, select SELF under Permission Entries

  • Click Edit, select the Properties tab

  • Scroll down and tick Allow against:

    • Read servicePrincipalName

    • Write servicePrincipalName

See Microsoft's KB article on this permission for more information.

Step 2 – Check Kerberos between IIS and SQL Server (A)

Restart IIS and access a page which causes some database traffic to hit SQL Server (A).  You can run the following query on SQL Server (A) to check the authentication method being used by the current active connections:

select session_id,net_transport,client_net_address,auth_scheme from sys.dm_exec_connections

Check the auth_scheme column to see what is being used for each connection – it will show KERBEROS, NTLM, or SQL.  This will tell you whether you need to recheck the SPNs.  If you’re still having trouble, fire up a network monitor (e.g. Wireshark) on the IIS server and filter for Kerberos traffic.
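If you capture the query output to a text file (for example via sqlcmd), a quick filter can flag any connections still falling back to NTLM.  A minimal sketch, assuming one connection per row with the auth_scheme in the last column:

```shell
#!/bin/sh
# Print any connection whose auth_scheme (last column) is not KERBEROS.
# Expects rows like: "<session_id> <net_transport> <auth_scheme>".
find_non_kerberos() {
    awk '$NF != "KERBEROS" { print "session " $1 " is using " $NF }'
}

# Example with captured output from sys.dm_exec_connections:
find_non_kerberos <<'EOF'
51 TCP KERBEROS
52 TCP NTLM
EOF
```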

Step 3 – Grant Delegation Rights

Once SQL Server (A) has been presented with the Kerberos ticket from IIS, it still won’t be able to use those credentials to contact SQL Server (B) until it is explicitly allowed.  There are two approaches to this: one is to allow the SQL service account to delegate credentials to any service; the more secure way is to use constrained delegation whereby we specify exactly which services this account can delegate credentials to.

Open Active Directory Users & Computers, right click on the SQL service account and choose Properties.  Once the SPNs have been added (step 1), a new tab called Delegation will appear.

  • Select Trust this user for delegation to specified services only

  • Use Kerberos only

  • Click Add, enter the SQL service account name and select both sets of SPNs added

  • Click OK

Step 4 – Grant SQL Permissions

Don’t forget to do this – the account used by the IIS Application Pool needs to be given suitable permissions on SQL Server (B).

Step 5 – Enable DTC Options

On both SQL Servers the Distributed Transaction Coordinator needs configuring to allow remote connections.  Open the DTC properties:

  • Control Panel

  • Administrative Tools

  • Component Services

  • Computers > My Computer > Distributed Transaction Coordinator

  • Right click on Local DTC and select Properties

  • Select the Security tab

  • Enable Network DTC Access, Allow Remote Clients, Allow Remote Administration, Allow Inbound, Allow Outbound, No Authentication Required

  • Click OK

  • This causes the DTC Service to be restarted

See Microsoft's documentation on enabling Network DTC Access for more details.

Step 6 – Tweak the Stored Procedures / Remote Queries

After getting Kerberos authentication fully working I hit another issue to do with SQL spawning nested transactions on the linked tables.  The exception thrown was:

Unable to start a nested transaction for OLE DB provider "SQLNCLI11" for linked server "SERVERXXX". A nested transaction was required because the XACT_ABORT option was set to OFF.

It turns out there’s some more SQL voodoo needed, namely the following statement at the start of each stored procedure we were running:

SET XACT_ABORT ON;
That did the trick.  See the Stack Overflow discussion of this error for more details.

Firewall Requirements

Both SQL Servers need an inbound allow rule so that the Distributed Transaction Coordinator can operate.  This can be done by enabling the predefined Windows Firewall rule Distributed Transaction Coordinator (TCP-In), which is usually disabled by default.


For further troubleshooting, you can also enable Kerberos event logging on the servers involved.


May 25

Solving PPTP VPN Error 619 when behind a TMG 2010 firewall

I was recently configuring a test environment which had a Microsoft Threat Management Gateway (TMG) 2010 firewall between the private network and the Internet.  From a test Windows 7 client I was trying to establish an outbound PPTP VPN – but I kept getting Error 619 “A connection to the remote computer could not be established”.


I knew the VPN connection was OK, as when I ran it from the other side of the TMG firewall it connected straight away.

After digging around a bit I discovered that although I had set up a rule in TMG to allow PPTP requests through (from the Internal network to the External network in my case), there was another setting necessary to enable this to work (which was not obvious).


I found that disabling the PPTP Filter on the PPTP protocol in TMG 2010 resolved this problem.  To change this setting, do the following:

1. Open Forefront TMG Management console


2. On the right hand side, click on the Toolbox tab, click on Protocols, expand VPN and IPSec, right click on PPTP and click Properties



3. Click on the Parameters tab and uncheck the PPTP Filter option in the Application Filters section:



4. Click OK and apply the change in TMG, then re-test the VPN – it should now connect.

I haven’t got to the bottom of the ‘why’ behind this change but I hope it saves someone else the hours it took me to solve this problem.



Sep 06

Solving ‘An exception occurred in publishing: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE))’ with Visual Studio 2012 RC

So I’m using Visual Studio 2012 RC and loving web deploy as a simple way to publish my projects to different environments.  However a problem cropped up today after installing some web tooling updates:

‘An exception occurred in publishing: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE))’

I couldn’t even open the publish settings dialog to see if anything was wrong in there.  I tried restarting Visual Studio, then running it as Admin, no change.  I eventually found the following steps on a forum which resolved the problem for me:

  1. Run Command Prompt as Administrator
  2. regsvr32 actxprxy.dll
  3. Restart Visual Studio

And hey presto, project publishing was working again!

Hopefully this will be of use to someone else out there banging their head against a brick wall trying to figure this one out.


Aug 22

Configuring multiple public DHCP IP addresses on a Linksys WRT54G with OpenWrt

I hit a problem the other day whilst trying to map a bunch of public IP addresses (provided by Virgin Business) to various services within the network.  Essentially I’m running a VMWare ESXi server with several web servers on, and I want to use the public IP addresses to expose these servers to the Internet through the business broadband connection.

Rather than splash out on some expensive networking kit, I decided to have a go at hacking with OpenWrt (an open-source mini-Linux router firmware which can run on a number of low-end network devices).  I decided to plump for a new Linksys WRT54G, as these are renowned for their support of firmware replacements like OpenWrt (also DD-WRT and Tomato, to name a few).

The Challenge

When you buy public IP addresses from your ISP, most will hand you a static block assigned specifically to your account.  However, some ISPs (Virgin Business in my case) assign your public IPs by DHCP (I initially bought 5).  So every time you connect a different device to the cable modem, their DHCP server hands out a different public IP address.  Whilst this all seems very nice, it makes life more difficult when configuring a router to listen on all of those addresses.  DHCP hands out a public IP and stores the hardware MAC address of the requesting device in its lease table.  The problem with trying to grab more than one address from a single router is that it has only one physical network port (for WAN), and therefore only one hardware MAC address.  The trick here is to create multiple virtual interfaces in the router, each with its own (made-up) MAC address, so they can each make a DHCP request to the ISP.

The Hardware

It turns out that there are a plethora of models of Linksys WRT54G, some with different hardware and supporting different firmware features.  The way to check is to turn the router upside down:


(N.B. The model and version number are printed on the label on the underside of the unit.)

In my case I had a WRT54GL v1.1, which I bought from eBuyer for £45.

The Firmware

I originally looked at tomato, but the project seems to be languishing and gathering Internet dust.  I settled on OpenWRT because I found the kmod-macvlan package which allows you to create virtual MAC interfaces on the router, which is exactly what I needed.  I followed the installation instructions here (scroll down to the Installing OpenWRT section).

Now, this is important: you must use the brcm47xx targeted build of OpenWRT to get access to the kmod-macvlan package (this took me a while to figure out).  The one I used came from the OpenWRT downloads site.

After flashing your device (either via SSH if enabled, or via the web GUI), I suggest you enable the SSH daemon, life gets much easier that way.


Here’s an overview of my network setup:


OpenWRT comes with a package manager called opkg which is incredibly useful for installing/managing additional packages.

opkg update
opkg install ip
opkg install kmod-macvlan

This will get the latest list of packages from OpenWRT and install ip and kmod-macvlan packages which we need to configure the virtual MAC interfaces.

Next I modified /etc/rc.local to create the virtual MAC interfaces:

# set up virtual mac addresses as aliases on the main WAN i/f eth0.1

ip link add link eth0.1 eth2 type macvlan
ifconfig eth2 hw ether 58:55:ca:23:32:e9

ip link add link eth0.1 eth3 type macvlan
ifconfig eth3 hw ether 5e:a4:02:04:24:0d

ip link add link eth0.1 eth4 type macvlan
ifconfig eth4 hw ether 8c:89:a5:57:80:e7

ip link add link eth0.1 eth5 type macvlan
ifconfig eth5 hw ether 58:4f:4a:df:40:03

ifup -a

# default route (substitute your ISP gateway address)
route add default gw <isp-gateway-ip> dev eth0.1

exit 0

This script configures 4 additional virtual interfaces on top of the main WAN interface (eth0.1), each with its own unique MAC address (you can generate a random MAC address using the instructions here).  I’ve added a default route to Virgin’s router to make life easier when it comes to configuring the firewall (you can find out what yours is by running ifconfig before making any of these changes).
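As a sketch of that step, a random locally administered unicast MAC address (any address beginning 02) can be generated straight from /dev/urandom:

```shell
#!/bin/sh
# Generate a random MAC address for a macvlan interface.
# The 02 prefix marks the address as locally administered and unicast,
# so it will never clash with a manufacturer-assigned address.
random_mac() {
    # Take 5 random bytes as lowercase hex pairs (word splitting intended).
    set -- $(od -An -N5 -tx1 /dev/urandom)
    printf '02:%s:%s:%s:%s:%s\n' "$1" "$2" "$3" "$4" "$5"
}

random_mac
```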

For each new WAN interface I added a section to OpenWRT’s network config (in /etc/config/network):

config 'interface' 'wan2'
    option 'ifname' 'eth2'
    option 'proto' 'dhcp'
    option 'defaultroute' '0'
    option 'peerdns' '0'
    option 'gateway' '<wan-gateway-ip>'

This maps the wan2 interface onto the eth2 virtual device, and specifies that it should obtain an address using DHCP.  The defaultroute and gateway entries force all outgoing requests through the main WAN interface (eth0.1).

After saving these changes you’ll need to reboot your router (use reboot -f).  You can then check the status with ifconfig – look at each interface and check they all have public IP addresses.


The next step was to configure some Network Address Translation (NAT) rules in the firewall to forward traffic coming in on certain public IPs to the relevant hosts on my internal network.  This was achieved relatively easily by adding the following sections to OpenWRT’s firewall config (in /etc/config/firewall):

config zone
    option name             wan1
    option network          'wan1'
    option input            REJECT
    option output           ACCEPT
    option forward          REJECT
    option masq             1
    option mtu_fix          1

# forwards from 1st WAN i/f to SP2010 Web01
config redirect
    option src              wan1
    option src_dport        3389
    option proto            tcp
    option dest_ip          '<internal-server-ip>'

The first block configures a firewall zone named wan1 which maps to the wan1 network, with some default rules (e.g. reject all input by default, accept all output by default).  The second block forwards TCP traffic on port 3389 (Remote Desktop Protocol) from wan1 to an internal IP address – in my case a SharePoint 2010 web server sitting on the ESX host.

Tidying Up

I had one other problem which was that my Linksys box was sitting on the 192.168.0.x network but needed an additional interface to talk to the Virtual Machines on the ESX server.  This was simply achieved by adding an alias section to the OpenWRT network config (in /etc/config/network):

config 'alias'
    option 'interface' 'lan'
    option 'proto' 'static'
    option 'ipaddr' '<address-on-esx-network>'
    option 'netmask' '<netmask>'

After another reboot of the router everything was looking good.

Next Steps

The next thing I want to get working is OpenVPN server running on the Linksys, so I can support remote VPNs into the local network.  Naturally there’s a package for this, but it looks like it needs a bit of configuration, as always.

Happy hacking!


Feb 04

Claims Proxy – A C# Library for Calling Claims Protected Web Services

The ClaimsProxy library enables you to get a WIF cookie collection for a SharePoint site which is protected by Claims Based Authentication. It assumes that ADFS is configured as the Trusted Identity Token Issuer and that the down-stream identity provider is based on the StarterSTS / IdentityServer project.

In some recent work I needed to call some SharePoint 2010 web services from a client outside of the SharePoint farm.  Using traditional network credentials and Windows Authentication this was a straightforward matter, for example:

ICredentials credentials = new NetworkCredential("username", "password", "domain");
SharepointUserGroupsWCF.UserGroupSoapClient client = new SharepointUserGroupsWCF.UserGroupSoapClient();
client.ClientCredentials.Windows.ClientCredential = (NetworkCredential)credentials;
client.ClientCredentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Impersonation;
client.Endpoint.Address = new EndpointAddress(webPath + "/_vti_bin/usergroup.asmx");

XElement groupXml = client.GetGroupCollectionFromSite();

However, when the target service is protected by Claims Based Authentication, it’s not so straightforward to call such services.  In my scenario I had a SharePoint 2010 site protected in a web application configured with Claims Based Authentication.  I had configured SharePoint to direct authentication to ADFS (Microsoft Active Directory Federation Services), and then I had a custom claims provider configured based on Dominick Baier’s IdentityServer open source STS product (formerly StarterSTS); however, my approach could easily be adapted to work with any number of final claims providers.

Even if you just want the code, read this bit first.

I’ve implemented the code as a .NET 4.0 assembly, although it should be relatively easy to get it to compile under .NET 3.5.  The overall approach works as follows:

  • Use username + password to request a symmetric token from our custom STS (you could get a Windows token here or whatever you are using) – set the realm to that configured for ADFS
  • Use that token to request a bearer token from ADFS – set the realm to the SharePoint site realm as configured in ADFS
  • Use the token from ADFS to authenticate with SharePoint
  • From the resulting authentication response, extract the WIF (Windows Identity Foundation) cookie for authentication (commonly named FedAuth)
  • Return this cookie to the client

I have included a test application in the solution which demonstrates using the returned cookie to make a call to a web service in the SharePoint site.  I’ve left the responsibility of caching the cookie to the client (but included this in the test app).

IMPORTANT NOTE: There is one modification required to the library which I haven’t done yet.  If you read the finer details of WIF you will know that if the SAML token data is too large for a single cookie, WIF spreads the data over multiple cookies named FedAuth1, FedAuth2 … FedAuthN.  The library assumes that there is only one FedAuth cookie at present – which will be sufficient for most applications – but it’s worth keeping an eye on if you’re running into trouble.  I will get around to sorting this at some point, but feel free to make a pull request on my GitHub repository if you get there first.
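Should you need it, reassembling a chunked token is just a matter of concatenating the cookie values in name order (FedAuth, FedAuth1, FedAuth2, …).  A minimal shell sketch, assuming the cookies arrive as name=value lines:

```shell
#!/bin/sh
# Reassemble a chunked WIF token from name=value cookie lines on stdin.
# WIF splits large tokens across FedAuth, FedAuth1, FedAuth2, ... cookies;
# the full token is the concatenation of their values in name order.
join_fedauth() {
    sort -t= -k1,1V | awk -F= '/^FedAuth[0-9]*=/ { printf "%s", $2 } END { print "" }'
}

# Example: two chunks arriving out of order.
printf 'FedAuth1=WORLD\nFedAuth=HELLO-\n' | join_fedauth
```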

Download the ClaimsProxy library and sample application from Github

ClaimsProxy is straightforward to use:

// configure our SPServiceRequestor
var requestor = new SPServiceRequestor
{
    DobstsEndpoint = "",
    DobstsUsername = "",
    DobstsPassword = "password",
    DobstsAdfsRealm = "",
    AdfsEndpoint = "",
    SharepointRealm = "",
    SharepointSiteUrl = "",
    IgnoreSslValidation = true,
    DebugMode = true,
    DebugEventCallback = (data) =>
    {
        // your debug function here
    }
};

string spCookieRaw = requestor.GetCookie();

N.B.: DobSts is our own implementation of the StarterSTS project.

Check out the sample application for a full example of how to use the library.

I’d welcome any comments / feedback on this – hopefully it will be of some use to others out there, it certainly has been to me.

