Sending Azure Service Health Alerts to O365 Teams

As part of the Azure implementation that I’m leading for my employer, I decided to find a way to post service health alerts to a channel in Teams in a “friendly” format.  The result is posted on GitHub with a quick walkthrough of how to implement this in your own environment.

It works by setting up a service health alert which uses a webhook to trigger an Azure function.  The function, which is written in Powershell, parses the content of the incoming webhook (which must be using the Common Alert Schema) to build an outgoing payload using the MessageCard format.  The MessageCard generated is then sent to Teams via an incoming webhook connector for the target channel.
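The full function is posted in the GitHub repo mentioned above, but as a rough sketch of the shape of the thing, a minimal PowerShell Functions handler might look like the following.  This is an illustration rather than the actual code from the repo:  the field names come from the Common Alert Schema “essentials” block, and TEAMS_WEBHOOK_URI is a hypothetical app setting holding the Teams connector URL.

using namespace System.Net
param($Request, $TriggerMetadata)

# The Common Alert Schema puts the core alert fields under data.essentials
$essentials = $Request.Body.data.essentials

# Build a legacy MessageCard payload from the alert fields
$card = @{
    '@type'    = 'MessageCard'
    '@context' = 'https://schema.org/extensions'
    summary    = $essentials.alertRule
    themeColor = '0078D7'
    sections   = @(@{
        activityTitle = "Azure Service Health: $($essentials.alertRule)"
        facts         = @(
            @{ name = 'Severity';  value = $essentials.severity }
            @{ name = 'Condition'; value = $essentials.monitorCondition }
            @{ name = 'Fired at';  value = $essentials.firedDateTime }
        )
        text = $essentials.description
    })
}

# Send the card to the channel's incoming webhook (hypothetical app setting)
Invoke-RestMethod -Method Post -Uri $env:TEAMS_WEBHOOK_URI `
    -ContentType 'application/json' -Body ($card | ConvertTo-Json -Depth 6)

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
})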

This is an example of what the resulting message looks like in Teams.

[Image: SampleHealthAlertCard.PNG, a sample service health alert card rendered in Teams]


Azure Hub-and-Spoke Networking: The Missing Manual Pages

I’ve spent the last couple of years designing and building out the Azure environment for my employer.  It’s been an amazing journey and I’ve learned a huge amount along the way, especially with respect to networking.  I’ve also uncovered a couple of aspects of Azure networking with peered VNETs which I feel are very poorly documented and also have significant design implications if you’re building a peered hub-and-spoke network environment within your Azure space.

This post outlines what I’ve found, details the impact that it has on the design and provides some guidance on what a correct implementation looks like with those behaviors in mind.

Background: Hub-and-Spoke networking in Azure

Hub-and-spoke networking is a topology design where a single “hub” VNet is connected to one or more “spoke” VNets via VNet peering.  In this design the spokes typically are not peered to each other.  This is becoming a fairly common design and it has a number of advantages.  For example, it allows all of the networks to share an external connection that is established from the hub, such as an ExpressRoute link or a VPN S2S tunnel.  The connections from hub to spokes are also extremely fast because traffic flows entirely within the Azure network fabric and doesn’t need to pass through any compute instances for processing as it does with a VNet-to-VNet VPN link.

A Virtual Network Peering is a durable connection which needs minimal attention after it’s created and doesn’t require deploying any infrastructure to manage, unlike a VPN gateway for example.  Peering must be initiated from both VNets for data flow to be established.  For a walkthrough on how to create a VNet peering, see <this link>.
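For reference, creating the two halves of a peering with the Az Powershell module looks something like this (VNet and resource group names here are hypothetical):

# Peering must be created from BOTH sides for traffic to flow
$hub   = Get-AzVirtualNetwork -Name 'vnet-hub'     -ResourceGroupName 'rg-network'
$spoke = Get-AzVirtualNetwork -Name 'vnet-spoke-a' -ResourceGroupName 'rg-network'

Add-AzVirtualNetworkPeering -Name 'hub-to-spoke-a' -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke.Id
Add-AzVirtualNetworkPeering -Name 'spoke-a-to-hub' -VirtualNetwork $spoke -RemoteVirtualNetworkId $hub.Id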

In the diagram below, “A” and “C” are spoke VNets and “B” is the hub VNet.  The hub VNet has a gateway in it with a connection to an external network.

[Diagram: hub-and-spoke topology with spoke VNets A and C peered to hub VNet B]

Gotcha #1:  Peering relationships are NOT TRANSITIVE

When creating environments such as the one above, it seems natural to assume that connections from one spoke to another are made by relaying the traffic through the hub’s VNet.  However, this is not the case because VNet peering relationships are not transitive.

In other words, traffic from a host on spoke VNet A has no route to a host in spoke VNet C unless the two spokes are directly peered with each other – sometimes but not always a desirable configuration – or routing “intelligence” is somehow added inside the hub VNet B to forward traffic arriving from one spoke to the other spoke.

Gotcha #1 Workaround, part 1:  Add the part that Microsoft (literally!) leaves out

Microsoft’s documentation on implementing a hub/spoke network topology seems to always include a diagram like the one below which shows a network virtual appliance (NVA) in the hub VNet, but it never clearly explains the role of this device:  to provide routing between the spoke VNets.

[Diagram: Microsoft’s hub-and-spoke reference topology with an NVA in the hub VNet]

There are two pieces to making traffic between spokes flow cleanly:  First, you need to deploy something to provide the routing.

In our case, we have a requirement that traffic between subnets within Azure must pass through a firewall for inspection, so we are using a Palo Alto virtual appliance in our hub VNet for this purpose.  Fortunately, these have built-in intelligence to provide the missing routing functionality, and PA’s documentation on how to configure their virtual firewalls in Azure has the correct setup instructions to make routing work as you expect.

Gotcha #1 Workaround, Part 2 (and introducing gotcha #2)

Simply adding a device with routing capability to the hub VNet does not completely close the gap with respect to making traffic flow smoothly from one spoke to another.

What is required is for the spoke subnets which need to talk to each other to have a route table entry which specifies that traffic destined for other spoke subnets uses the IP address of the virtual appliance as the “next hop”.

In the diagram above, subnet A needs to have a route table entry which states that traffic for subnet C’s IP block has a “next hop” address of the appliance in VNet B and subnet C needs a route table entry which sends traffic to subnet A’s IP range to the same “next hop” IP.

In a large environment with lots of subnets, the route tables can accumulate numerous entries but the same route table is used on all subnets so the table’s definition can be kept in an ARM template to make deploying and updating the tables relatively safe and easy.
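If you prefer Powershell to an ARM template, a sketch of building and assigning such a route table with the Az module looks like this.  The names and address ranges are hypothetical (spoke A = 10.1.0.0/16, spoke C = 10.3.0.0/16, NVA at 10.0.1.4):

# Route spoke-to-spoke traffic through the NVA in the hub
$toSpokeC = New-AzRouteConfig -Name 'to-spoke-c' -AddressPrefix '10.3.0.0/16' `
    -NextHopType VirtualAppliance -NextHopIpAddress '10.0.1.4'

$rt = New-AzRouteTable -Name 'rt-spokes' -ResourceGroupName 'rg-network' `
    -Location 'eastus2' -Route $toSpokeC

# Associate the route table with a subnet in spoke A
$vnet = Get-AzVirtualNetwork -Name 'vnet-spoke-a' -ResourceGroupName 'rg-network'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'subnet-a' `
    -AddressPrefix '10.1.0.0/24' -RouteTable $rt | Set-AzVirtualNetwork

The spoke C side needs the mirror-image entry pointing at spoke A’s prefix, as described above.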

Wow, that’s nasty…

You’re probably thinking right now that simply applying a default route entry to the route tables on every subnet would be a lot easier than specifying each internal subnet separately.

It’s completely logical to assume that you can simply throw a 0.0.0.0/0 → <virtual appliance IP> as the single entry on each subnet and let things roll, but in reality that doesn’t work because of gotcha #2.

Gotcha #2:  Peering relationships take priority over default routes between peered VNets

This is another aspect of VNET peering which needs to be documented much more clearly by Microsoft.

Buried inside the third “note” block on Microsoft’s tutorial on creating a hybrid network environment with the Azure firewall (under Prerequisites) is the following quote:

“Traffic between directly peered VNets is routed directly even if a UDR points to Azure Firewall as the default gateway. To send subnet to subnet traffic to the firewall in this scenario, a UDR must contain the target subnet network prefix explicitly on both subnets.”

This has implications far beyond the context in which this statement appears.  It means that using default routes in UDRs on peered VNets is pointless:  traffic between peered VNets whose destination is not explicitly listed in a UDR will always flow through the peering relationship to the other VNet, even if a default-route UDR entry is present.

Make sure to plan your own design with this behavior in mind.  This is why, in the routing examples above, simply putting a 0.0.0.0/0 UDR on each of your spoke subnets will not achieve the desired result!

Strangely, this note is the only location that I have found which describes this behavior.  It’s not mentioned on the page describing VNET routing or the one on Azure VNet Peering as far as I can tell.

Experimentation is highly recommended!

I built an isolated replica of our VNet environment which matched the diagram at the top of this page:  a hub VNet, two spoke VNets, an “external” VNet, a PA firewall and the necessary peering and VPN connections to get the topology working correctly.

I then put VMs on at least one subnet in each VNet and did a number of tests to see which VMs could ping which others, initially with no route tables at all and then with various route table configurations to confirm that the behavior I was seeing aligned with the documentation.

Use the Network Watcher’s “next hop” tool to verify that traffic from a particular source is taking the path that you expect to its destination.
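For example, a next-hop check from a VM in spoke A toward a VM in spoke C looks something like this (names and addresses are hypothetical); if the route tables are right, the result should show a NextHopType of VirtualAppliance with the NVA’s IP:

$nw = Get-AzNetworkWatcher -ResourceGroupName 'NetworkWatcherRG' -Name 'NetworkWatcher_eastus2'
$vm = Get-AzVM -ResourceGroupName 'rg-spoke-a' -Name 'vm-a1'

Get-AzNetworkWatcherNextHop -NetworkWatcher $nw -TargetVirtualMachineId $vm.Id `
    -SourceIPAddress 10.1.0.4 -DestinationIPAddress 10.3.0.4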


Interacting with REST APIs using Powershell (and a trick for keeping stored credentials safe)

I recently gave a presentation for the Hartford Powershell Meetup group where I went through some examples of interacting with REST APIs using Powershell.  It was a good intro to the basics of what a REST API looks like, a couple of different ways that you deal with authentication, and some simple examples of using a REST API to do something useful.

The overall talk was broken into four sections:

The first section was a quick overview of what a REST API is at a very high level and what they look like.  The demo that went with this used a very simple API that implements operations involving decks of cards called, appropriately, the Deck of Cards API created by Chase Roberts.  This is a spiffy little API that makes a perfect demo vehicle and I’m glad I found it.
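For a taste of how simple the demo was, drawing two cards from a fresh deck is just a couple of Invoke-RestMethod calls:

# Shuffle a new deck, then draw two cards from it
$deck = Invoke-RestMethod -Uri 'https://deckofcardsapi.com/api/deck/new/shuffle/?deck_count=1'
$draw = Invoke-RestMethod -Uri "https://deckofcardsapi.com/api/deck/$($deck.deck_id)/draw/?count=2"
$draw.cards | Select-Object value, suit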

My second section was a bit of a detour and discussed how you can stash credentials in a PSCredential object and export it to disk using Export-Clixml.  While the resulting file is straight XML and can be viewed with a text editor, the only way to get the original PSCredential object back is to run Import-Clixml as the same user on the same computer where the file was created.  I’ve used this as a quick way to stash credentials because the file is an unusable crypto-blob if it’s removed from the computer or if a different user tries to re-import it.  This is useful in many situations as a quick and dirty way to keep credentials safe without having to resort to more sophisticated solutions like key vaults.
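The whole trick is two lines (the path here is just an example):

# Save a credential; the password is encrypted for this user on this machine
Get-Credential | Export-Clixml -Path "$HOME\stored.cred"

# Later (same user, same computer only), rehydrate the PSCredential object
$cred = Import-Clixml -Path "$HOME\stored.cred"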

The third section discussed how to interact with a REST API that uses what I think of as “simple” authentication.  This category includes APIs that require you to provide a single identifier or an identifier/secret combination with every call so that the API provider knows that you are an authorized user — or maybe they just want to track you.  This is also relatively simple to implement and I used a free API from Currencylayer.com to demonstrate getting somewhat-close-to-kind-of-near-real-time exchange rates between the US dollar, the Euro and the UK Pound.
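As a sketch of the pattern (the endpoint shape and parameter names here are illustrative of Currencylayer’s free tier, so check their docs before relying on them):

# The access key rides along as a query parameter on every call
$key = 'YOUR_ACCESS_KEY'
$r = Invoke-RestMethod -Uri "http://api.currencylayer.com/live?access_key=$key&currencies=EUR,GBP"
$r.quotes   # USD-based rates, e.g. USDEUR and USDGBP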

My last section jumped a couple of levels up in complexity.  In this section I showed how you can deploy an ARM template into your Azure subscription with only REST API calls.  This demo covered several things at once in that it used my stored-credentials trick to retrieve the application ID and secret, performed an OAuth login using them and then put the necessary REST calls together to create a resource group and then deploy the Simple Windows VM template from Microsoft’s Azure Quickstart templates library on GitHub.  Along the way the code walks through the process for contacting the login API to get a bearer token and then shows how to use that bearer token on subsequent calls to the management API where you do all the work.  This get-a-bearer-token flow is a very common authentication model for REST APIs that do anything important, as it provides a pretty high level of security for the authentication.
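The core of that flow, stripped down and with hypothetical tenant and subscription values, looks like this.  The token comes back from the login endpoint and then rides in the Authorization header on every management call:

# Get a bearer token using the app ID/secret stashed with the Export-Clixml trick
$cred = Import-Clixml -Path "$HOME\azureapp.cred"
$tenantId = 'contoso.onmicrosoft.com'
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
    -Body @{
        grant_type    = 'client_credentials'
        client_id     = $cred.UserName
        client_secret = $cred.GetNetworkCredential().Password
        resource      = 'https://management.azure.com/'
    }

# Use the bearer token on a management call, e.g. creating a resource group
$headers = @{ Authorization = "Bearer $($token.access_token)" }
$subId = '00000000-0000-0000-0000-000000000000'
Invoke-RestMethod -Method Put -Headers $headers -ContentType 'application/json' `
    -Uri "https://management.azure.com/subscriptions/$subId/resourcegroups/demo-rg?api-version=2018-05-01" `
    -Body (@{ location = 'eastus' } | ConvertTo-Json)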

I am grateful to the Meetup group for inviting me.  I think the actual presentation went very smoothly despite this being the first time I was delivering it.  I’m going to hang on to this one because it will probably be useful in other contexts as well!

Here is a link to the slide deck that I used and a ZIP file containing the scripts for the four demos.  The demo scripts were run using the StartDemo module to help step through them a line at a time.


Using Teams from a browser with 3rd-party cookies blocked

Here’s a quick one that should help some folks.

We recently enabled Office365 Teams for a group of testers in our environment.  A couple of our users, who use Linux as their primary desktop OS, noted that running with 3rd-party cookies blocked in their web browser prevents Teams from loading successfully.

Working with Microsoft, we identified a group of exceptions that can be added to Firefox to allow Teams to load up with a 3rd-party cookie block in place.  Here they are!

https://.asm.skype.com
https://login.microsoftonline.com
https://.teams.microsoft.com
https://.infra.lync.com
https://.teams.skype.com
https://.sfbassets.com
https://.skypeforbusiness.com

Make sure to enter them exactly as listed.

Linux desktop users are restricted to the web interface only since there is no desktop client for Linux like there is for other platforms.  If you agree that there should be one, please go vote for this uservoice item to encourage Microsoft to put one out.

Enabling or Disabling Specific Services Within Your Office365 License using Powershell

As I’ve discussed in previous posts, an Office365 “License”, which Microsoft refers to as an AccountSkuID, can be conceptualized as a bundle of services which make up that license offering.  For example, in the educational tenant that I am working with at the moment, the license “STANDARDWOFFPACK_STUDENT” can grant a person access to a number of services, such as Exchange Online, Sharepoint Online, OneDrive, the Office web apps, Skype for Business, and a few other things.

A frequent request I’ve had is for a way to easily turn on or off specific services for a user or a set of users.  This is easy enough to do via the Office365 admin portal by flipping the toggles on the screen, but that obviously won’t scale.  Also, the trick is to make sure that you toggle the setting for the one service plan you are looking for while leaving all of the other service plans in their current states.

The approach that I’ve settled on to manipulate a single service plan’s status within an Office365 license is as follows:

  • Read the target user’s object from AAD using get-msoluser.
    $userObject = get-msoluser -UserPrincipalName $userPrincipalName
  • Create a hashtable of ServicePlan:ProvisioningStatus values for all of the service plans in the user’s current license.
    $plans = @{}
    $userObject.licenses.servicestatus | % { $plans.add($_.serviceplan.servicename, $_.provisioningstatus) }
  • Look for a key in the hashtable you just created which matches the name of the service plan that we are looking for and set it to the desired value.  In this example, we are disabling the service plan by changing it from “Success” to “Disabled” in the hashtable.
    if ($plans.get_item($targetServicePlan) -eq "Success") {
        $plans.set_item($targetServicePlan,"Disabled")
    }
  • Use the values in the updated hashtable to build a new licenseOptions object with the updated set of disabled service plans.
    The New-MsolLicenseOptions cmdlet seems to be very picky about the value passed to the DisabledPlans parameter.  The only way that I have found which works consistently is to create a list (array) with each item in the list being a service plan name.  Some sources seem to suggest that a comma-separated string will work but I haven’t had much luck with that.

    $disabledPlans = @()
    $plans.Keys | % { if ($plans.get_item($_) -eq "Disabled") { $disabledPlans += $_ } }
    if ($disabledPlans) {
        $licenseOptions = new-msolLicenseOptions -AccountSkuId $baseLicense -DisabledPlans $disabledPlans -verbose
    } else { # If there is nothing on the list of disabled services
        $licenseOptions = new-msolLicenseOptions -AccountSkuId $baseLicense
    }
  • Use Set-msoluserlicense to apply the updated license information to the target user.
    get-msoluser -UserPrincipalName $userObject.userprincipalname | Set-MsolUserLicense -LicenseOptions $licenseOptions
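
Assembled into a single function, the whole flow might look like the sketch below.  Like the scripts it is drawn from, it assumes the user has exactly one license assigned:

function Set-O365ServicePlanState {
    param(
        [string]$UserPrincipalName,
        [string]$TargetServicePlan,
        [ValidateSet('Success','Disabled')][string]$State = 'Disabled'
    )

    $userObject  = Get-MsolUser -UserPrincipalName $UserPrincipalName
    $baseLicense = $userObject.Licenses[0].AccountSkuId

    # Build the ServicePlan -> ProvisioningStatus hashtable
    $plans = @{}
    $userObject.Licenses[0].ServiceStatus |
        ForEach-Object { $plans.Add($_.ServicePlan.ServiceName, $_.ProvisioningStatus) }

    # Flip the one plan we care about, leaving everything else alone
    $plans[$TargetServicePlan] = $State

    # Rebuild the disabled-plans list and apply it
    $disabledPlans = @($plans.Keys | Where-Object { $plans[$_] -eq 'Disabled' })
    if ($disabledPlans) {
        $licenseOptions = New-MsolLicenseOptions -AccountSkuId $baseLicense -DisabledPlans $disabledPlans
    } else {
        $licenseOptions = New-MsolLicenseOptions -AccountSkuId $baseLicense
    }
    Set-MsolUserLicense -UserPrincipalName $UserPrincipalName -LicenseOptions $licenseOptions
}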

You can find a pair of Powershell scripts named enable-O365ServicePlan.ps1 and disable-O365ServicePlan.ps1, which demonstrate this approach, at my GitHub repo.  The scripts perform a series of checks to make sure that the target user is in the correct state before any changes are made.

A limitation of these scripts is that they only handle cases where the target user has a single license assigned to them.  While a single license is the most common configuration, users may also have more than one license assigned and the scripts do not handle that use case.

Using Powershell custom tables to get more useful license usage information from O365

When you want to check the number of licenses in your O365 environment, you use the command get-msolaccountsku, which returns a set of information on the licenses that are part of your environment.  This shows the total number of licenses, the number that have been assigned to users (“consumed”) and the number of licenses that are in a warning state.

While working with a customer that has a number of active SKU IDs, I needed to improve on this by adding a column with a count of unassigned licenses for easy reference.  Here’s a Powershell script that does that, which is also a simple example of how you can use custom table formats to generate custom output on the fly.

Most of the format entries are a straight replica of what comes back from the get-msolaccountsku command, but the “Unassigned” entry does a calculation to show the number of unassigned licenses for that SkuID.  The expression element of a table format is a script block, so you can do some really interesting stuff by inserting whatever code you want.  Inside the block, the $_ variable refers to the object that the table is processing at that particular moment.

# get-o365licensecounts.ps1
#
# Outputs license information for O365 including a column showing # of unassigned licenses by SKU
#
# Format string to define the table
$f = @{expr={$_.AccountSkuID};label="AccountSkuId"},
 @{expr={$_.ActiveUnits};label="Total"},
 @{expr={$_.ConsumedUnits};label="Consumed"},
 @{expr={$_.activeunits-$_.consumedunits};label="Unassigned"},
 @{expr={$_.WarningUnits};label="Warning"}

# output the info using the format we just defined.
# sorted by active units -descending.
Get-MsolAccountSku | sort activeunits -desc | ft $f -auto

Default output from get-msolaccountsku:  [screenshot]

… and with the script using the custom format:  [screenshot]

Much better.

For more information on how custom tables work, see this TechNet article.

Want to log on to Office365 with your email address instead of your UPN? AlternateLoginId function with ADFS on WS2012R2 is just what you’ve been waiting for.

[ UPDATE 27-FEB-2015:  Added “Known Issues” Section and link to KB article. ]
[ UPDATE 13-JUN-2014 with some additional information about infrastructure requirements. ]

A few weeks ago, Microsoft announced that an interesting new capability has been added to ADFS on WS2012R2.  The new function is called “Alternate Login ID” and allows you to configure your ADFS server to treat the value entered in the username field not only as a UPN or domain\username but also to perform an LDAP query for that value against a specified attribute across one or more AD forests to identify which AD has a matching user object.  If you have a multiforest environment with Office365 and/or don’t like the idea of having to change your UPNs to use federated AuthN with Office365, this is exactly what you’ve been waiting for.

The primary goal here is to remove a common complaint about using ADFS with Office365:  the assumption on the part of Office365 that the userPrincipalName value in your AD is the same as a person’s UPN in Office365.  For most customers that I have worked with on implementing Office365 with federated AuthN, this has required changing the UPNs of users who will use Office365 services, which is a relatively low-risk action but still presents execution challenges.

The new function is enabled by running a command like this one:

Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID mail -LookupForests contoso.com,fabrikam.com

This command specifies the attribute name that should be used — mail is recommended — and the list of AD forests that the lookup should be performed against.  I’ve found that you need to specify the forest root domain if the target AD is a multidomain environment, even if the users that you’re looking for are in a subdomain.  The ADFS server(s) must be able to reach Global Catalog servers in the target forest, so make sure that your A records for the global catalogs (gc._msdcs.contoso.com) are correct.

This document has lots of details about how this works and a nice flow chart of how authentication is performed with AlternateLoginId enabled, but what essentially happens is:

  1. User provides username and password strings to ADFS
  2. ADFS performs an LDAP query against the AD forests provided to see if any of them has a user where the specified user attribute (like “mail”) matches the username value provided by the user:
    1. IF one and only one AD responds with a matching user object, ADFS proceeds with authentication against that user object.
    2. IF no match is found in any AD, ADFS tries again, treating the username string provided as a UPN or domain\username combination.
    3. IF more than one AD responds with a match, the authentication fails and an error message is logged.

A new claim is returned by ADFS called http://schemas.microsoft.com/ws/2013/11/alternateloginid which contains the alternate login ID.

HOWEVER, turning on AlternateLoginId is not enough by itself to make things work with Office365 and ADFS!  You still need to make sure that Office365 UPNs are configured correctly and also make a configuration change to the claim rules created in ADFS for Office365 to make everything line up.

Office365 UPN must match the value that ADFS is sending for the alternateLoginId

In order to log in to Office365, the federation service needs to send a claim containing the userPrincipalName (UPN) of the user.  The default configuration for ADFS is to simply send the UPN of the on-premise user to Office365, which is why you need to make sure that the UPN in AD matches their Office365 UPN.

The default behavior of the Dirsync tool is to set the UPN of a user in the cloud to match their Active Directory UPN, so everything works fine if your AD UPNs use routable domain names and you use an unmodified ADFS environment.

HOWEVER, with AlternateLoginId enabled, ADFS will be sending the value of the specified attribute — usually “mail” — as the UPN, so we need to make sure that users in the cloud have their UPN matching that attribute and not their Active Directory UPN.  While it is possible — but not supported — to tweak the configuration of Dirsync to map these attributes, Dirsync may not be able to make this change because there are limitations on the ability to change a user’s UPN in Azure AD.

ADFS Claim Rule Change required

You must also update ADFS to send the value of the mail attribute as the UPN value instead of sending the userPrincipalName.  To do this, open the first claim rule for Office365 on ADFS and change the default rule from this:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/claims/UPN", "http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID"), query = "samAccountName={0};userPrincipalName,objectGUID;{1}", param = regexreplace(c.Value, "(?<domain>[^\\]+)\\(?<user>.+)", "${user}"), param = c.Value);

… to this (the query now pulls mail instead of userPrincipalName):

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/claims/UPN", "http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID"), query = "samAccountName={0};mail,objectGUID;{1}", param = regexreplace(c.Value, "(?<domain>[^\\]+)\\(?<user>.+)", "${user}"), param = c.Value);

ImmutableID note:  If you are using a custom identity sync solution to Office365, such as FIM with the Azure Active Directory Connector, and your implementation requires you to provide your own ImmutableID value, you will need to update the value for the immutableId claim sent by ADFS as well as the UPN claim.

Infrastructure and connectivity requirements.  (Added 13-JUN-2014)

  • The lookup domain name(s) specified must point to the forest root domain, even if the domain that the target users are in is a subdomain.
  • You must be able to reach a server that is a global catalog in the target forest root domain on port 389 in order for the LDAP lookup to succeed.
  • Make sure that all DNS SRV records are in good shape for all AD’s in play.

Prerequisites:  KB2919355, which is a major update for WS2012R2, adds the new capability.  Also, you must install KB2919422 first.

References for additional detail:  Configuring Alternate Login ID and another detailed description of how AlternateLoginId works.

Known Issues:  (Added 27-FEB-2015)
There are a number of known issues that occur when the UPN of the user in AzureAD/O365 doesn’t match the actual UPN of the on-premise user that is associated with it.  Applications which make their own direct calls to AD after authenticating to AAD, such as the desktop Lync client, are most likely to be affected.  This can result in multiple authentication popups being presented to the user, where the user must enter their on-premise identity — either domain\username or actual on-premise UPN — to proceed.  For more information on this, see this KB article.

Configuring Cisco WebEx Meeting Server to work with ADFS 2.0+

Like so many other things I’ve written about, this is another example of where I was unable to find a solid set of instructions online about how to do something and had to assemble a working solution from a number of fragments spread across vendor-provided information, blog posts and cries for help posted in online forum threads.  Hopefully this can spare at least a few others from having to go through the same thing.

This procedure has been used to create a system that worked on the “first try” so I know that it works.  It’s possible that this process could be further refined with some additional testing.

This post is targeted to the on-premise version of the Cisco WebEx meeting server, not the hosted (SaaS) version.  I believe that most of what is here should be applicable to the hosted version but there are apparently some differences in the configuration screens that are used for the hosted version.

In this case, it’s assumed that you have an existing ADFS setup (version 2 or 3) which is working properly.  If you’re not confident about this, make sure that all is well before proceeding.

Before you begin, you need to capture some information about your ADFS setup:

  • Export the public key for the Token Signing certificate that your ADFS setup is using and save it to a file.  This can be done via the Certificates MMC snapin.  IMPORTANT:  The certificate must be exported in base64 format, not the default DER format.

  • Capture the Federation Metadata for your ADFS environment to a file as well.  The easiest way to do this is to go to the metadata URL for your ADFS server (usually https://adfs.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml) via a web browser.  What you’ll get back is a blob of XML that your browser probably won’t display properly.  Even if the page appears to be blank, choose “view source” for the page and you should see all of the XML.  Save that to a file.

Once you’ve got those two files (the public key for the token signing certificate and the metadata XML for your ADFS setup), the process starts on the WebEx side…

  1. Import the public key for the signing certificate into WebEx using the “Import Certificate” button under “SSO IdP certificate” on the SSO configuration screen.
  2. On the “Federated Web SSO” configuration page, import the metadata file from ADFS using the button labeled “Import SAML Metadata”.  This will populate some of the fields on the configuration screen for you.
  3. Review and update the fields on the WebEx SSO settings page so they match the list below.  Some of these are already filled in for you based on the ADFS metadata file.
  • SP Initiated should be selected (at the top) and not “IdP Initiated”
  • Target Page URL Parameter name should be TARGET
  • SAML Issuer (SP ID) should be your WebEx URL service name (https://mywebex.contoso.com)
  • Issuer for SAML (IdP ID) should be your ADFS service name (https://adfs.contoso.com)
  • Customer Service SSO Login URL should be populated with an endpoint for your ADFS service like https://adfs.contoso.com/adfs/services/trust
  • NameID Format should be “Unspecified” (drop-down menu)
  • AuthnContextClassRef : Paste in the string below, replacing whatever is already in the box.  Make sure that no line breaks sneak in during the copy/paste process.
  • urn:federation:authentication:windows;urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
  • Single Logout should be DISABLED (unchecked)
  • Auto Account Creation and Auto Account Update should be enabled or disabled according to your local policies.

Once the fields on the SSO Configuration screen for WebEx have been set up as described above, use the button on the page to export the SAML metadata.  This will create a file named webex_SP_saml2_metadata.xml.  Save this file and copy it to your ADFS server.

Now, on the ADFS server…

Create the relying party trust for WebEx in ADFS by performing the following steps:

  1. In the ADFS management tool, right-click the Relying Party Trusts folder and select “Add Relying Party Trust…”
  2. On the “Select Data Source” page, click “Import data about the relying party from a file” and use the Browse button to import the webex_SP_saml2_metadata.xml file that you exported from WebEx.  Then click Next.
  3. On the “Specify Display Name” page, type a display name for the relying party (like “WebEx”) and click Next.
  4. [ If you are using WS2012R2, the next screen will ask about multi-factor authentication.  Select “I do not want to configure…” and choose next. ]
  5. On the “Choose Issuance Authorization Rules” page, leave the default value “Permit all users to access…” selected and click Next.
  6. Review the summary screen, click Next and then Close to complete the wizard.  This will launch the claim rules editor.

Next, create four claim rules in ADFS as described below:

Rule #1:  “WebEx Name ID Claim”

This rule sends the user’s e-mail address as the “Name ID” claim.  The Name ID claim is a very common requirement for applications using federated SSO and is nearly sufficient all by itself for a successful login to WebEx.

  1. Choose “Send LDAP Attributes as Claims” and hit Next
  2. Enter the display name “WebEx send Name ID”
  3. Select “Active Directory” for the attribute store
  4. On the LEFT SIDE, choose “E-mail Addresses” from the drop down.  You may have to click on the down-arrow a couple of times before the list populates.
  5. On the RIGHT SIDE, choose “Name ID” from the drop down.
  6. Click “Finish” to save the rule.

Rule #2:  “WebEx AutoCreate”

This rule sends the user’s email address as custom claims named “uid” and “email” and also sends their first and last names.  These values are used by WebEx to create an account for the user if they are not currently present in the system and “Auto Account Creation” is enabled.

  1. Choose “Add Claim Rule…”
  2. Select “Send LDAP Attributes as Claims”
  3. Set the display name to “WebEx Auto Create User”
  4. Add the FOUR claims below, one per row.  For the left side, use the drop-down to select the item specified.  On the right side, type in (not select) the value listed without the quotes.
    * E-mail Addresses –> “uid”
    * E-mail Addresses –> “email”
    * Given-Name –> “firstname”
    * Surname –> “lastname”
  5. Click Finish to save the rule.

Rule #3:  “WebEx AutoUpdate”

This rule sends the value of the whenChanged attribute on the user’s AD object as a custom claim named updateTimeStamp.  If Auto Update User is enabled, WebEx apparently uses this value to tell when a person’s basic information (i.e. their last name) has changed so it can update its record for the user to match.

  1. Click “Add Rule…”
  2. Choose “Send Claims using a custom rule” from the drop down (it’s at the bottom of the list).
  3. Enter “WebEx Auto Update” for the display name
  4. Paste the text in the box below into the claim rule window:
  5. c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"] => issue(store = "Active Directory", types = ("updateTimeStamp"), query = ";whenChanged;{0}", param = c.Value);
  6. Click “finish” to save the rule.  If you get an error, make sure that the rule was pasted correctly.

Rule #4 : “WebEx send authenticationMethod”

This is one of the “gotchas” that apparently is not well documented.

This rule sends the value “urn:federation:authentication:windows” as a claim named http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod.  This value must match one of the values listed in the AuthnContextClassRef field on the WebEx side.  In our case, we found that the default value provided by ADFS for a successful logon did not match what was in the AuthnContextClassRef and adding this claim brought them into alignment.  It may be that your own ADFS setup is sending a value which matches the value that is the default in the WebEx SSO but specifying it explicitly on both sides makes sure that things line up.

  1. Click on “Add Rule…”
  2. Choose “Send Claims using a custom rule”
  3. Enter “WebEx send authenticationMethod” for the display name
  4. Paste the text in the box below for the claim rule:
  5. exists([Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"])
      => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", Value = "urn:federation:authentication:windows");
  6. Click “finish” to save the rule.  If you get an error, make sure that the rule was pasted correctly.

Now, you should be able to go to your WebEx URL (https://mywebex.contoso.com), have your authentication handled by ADFS and then land back in WebEx as an authenticated user.

[ Revised 26-FEB-2015: Minor cleanup and wording. ]

Moving a mailbox from one user to another in Office365

[ IMPORTANT UPDATE March 2015:  Microsoft deprecated the procedure for moving mailboxes from one user to another which was described in this post and also removed the get-removedmailbox Powershell cmdlet on which it depended.  

The current approach to this seems to be built around the New-MailboxRestoreRequest Powershell cmdlet, where the idea is that you take the mailbox that was deleted from user A and restore its content to a mailbox for user B.  

Microsoft’s current blog post discussing this topic can be found here.
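I haven’t tested this end to end yet, but the general shape appears to be something like the sketch below (identities are hypothetical):

# Find the soft-deleted mailbox that belonged to user A
$src = Get-Mailbox -SoftDeletedMailbox | Where-Object { $_.PrimarySmtpAddress -eq 'userA@contoso.com' }

# Restore its content into user B's mailbox
New-MailboxRestoreRequest -SourceMailbox $src.ExchangeGuid `
    -TargetMailbox userB@contoso.com -AllowLegacyDNMismatch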

The text that was originally on this page has been deleted because following those directions with today’s Exchange Online environment would result in data loss.  I have it on my to-do list to do some experimentation and document how to do this using today’s tools. ]

Provisioning two Office365 tenants from one AD: Making it work with Dirsync

[ UPDATE:  A new version of Dirsync was released in June 2013 which uses the username MSOL_AD_SYNC_ followed by a random hex value in order to make the username unique.  This helps with the two-accounts-with-the-same-username issue but the basic principle of filtering/scoping the two Dirsyncs still applies so I’m leaving this post “up”. ]

I recently assisted an educational customer who had two independent live@EDU tenants with upgrades to Office365.  They had previously been provisioning users into the two tenants using two separate management agents on an old MIIS 2003 server, keeping all users in a single AD.  Therefore, we needed to find a way to make things work using only the Dirsync tool provided by Microsoft.

Dirsync is basically a preconfigured version of FIM 2010 which has two management agents in it:  one to connect to an AD and another to connect to Office 365/Azure AD.  What we needed to figure out was how to get two Dirsync installations to cooperate while working from a single source AD.

Here’s how I approached the problem.

1)  How to allow two Dirsync installations to connect to the same source AD:

Getting two copies of Dirsync to both talk to the same AD is easy.  When you run the Dirsync configuration wizard, it creates a domain account called MSOL_AD_Sync which is assigned a randomized password.  Obviously, running the installer for Dirsync #2 overwrites the password used by Dirsync #1 so the solution was to set up Dirsync #1, get it working properly (see below) and then repeat the process with Dirsync #2.  Then, set the password on the MSOL_AD_Sync account to a known value and update the configuration on both Dirsync installations to use that password.  Piece of cake!
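A sketch of that password-reset step, assuming the RSAT ActiveDirectory module is available on a management box:

# Reset the shared MSOL_AD_Sync password to a known value (run as a domain admin)
Import-Module ActiveDirectory
Set-ADAccountPassword -Identity 'MSOL_AD_Sync' -Reset `
    -NewPassword (Read-Host -AsSecureString 'New password')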

2)  How to get two Dirsync installations to provision only the “right” users from the source AD.

I handled this one by taking advantage of one of the few supported customizations to Dirsync:  Using an attribute to limit the objects that Dirsync will handle.  

The first part of this task is to find a way to label all users with a value that indicates which tenant they should be synchronized to.  In my case, since users are imported into MIIS from two separate SQL databases — one for each set of users — I added a flow to the SQL server MAs to include a value which reflected the user’s origin.  For example, users arriving from student database A were labeled as belonging to A, and similarly for users arriving from database B.

At this point I had MIIS set up so that every user had a label to show where they came from… but this important information was still only within MIIS.

Next, I added a flow to the MIIS management agent for AD which copies the point-of-origin value into the extensionAttribute15 attribute on the matching AD user object.  Now I had every user in AD labeled with where they belonged.  Progress!

The last step was to configure the two Dirsync installations to each EXCLUDE the “wrong” users.  The process for doing this involves creating a “connector filter” entry within the SourceAD management agent in each Dirsync that matches the value that is applied to users that you don’t want to sync.  This is a fully supported customization for Dirsync and is documented at http://technet.microsoft.com/en-us/library/jj710171.aspx.  An example of how this looks is in the screen shot below.  In this case, any user in AD with the value “A” in extensionAttribute15 will not be processed by Dirsync.

[Screenshot: connector filter in the SourceAD management agent excluding users with “A” in extensionAttribute15]

While doing so isn’t an absolute requirement, I strongly recommend that you make these filter modifications after Dirsync has been configured but BEFORE the first synchronization run so you don’t create a bunch of stuff in Office365 that later needs to be cleaned up.

How this looks when you turn the crank — and ongoing impact to Dirsync:

When you run Dirsync the first time, it will load all users in your AD (listed as “Adds”).  This is normal.  Then the filter will be applied and you will see a number of objects listed as filtered disconnectors.  These are the objects that Dirsync skipped when processing because they matched the filter.

Then verify in your O365 tenant that the right users have been synchronized, and check your filters if you are still seeing things that shouldn’t be there.  In my case, I had to add an extra filter condition to also exclude users with no value set in extensionAttribute15 to keep out other non-student accounts that were in AD.

NOTE:  Because Dirsync will re-check all of these disconnectors every time it runs a processing cycle, this approach will increase the load on the Dirsync server and make processing take longer.  For the customer that I was working with, with a total of about 30,000 users between the two tenants, this has not been a problem.