eDiscovery in Office 365

Now that we’ve spent a few minutes going over data retention and destruction in the service to make sure you’re retaining the data you’re obligated to keep, let’s take a bit of time to make sure you can quickly and reliably identify information. Why? Because there are times when you’ll need to export copies of data for legal reasons, delete phishing attempts, or just find out whether a message was read by someone.

I’ll spend time outlining PowerShell syntax in the examples here to help cut down on human error. Most people who are averse to using the command line are uncomfortable because the syntax is difficult to remember or the process seems arcane. It’s often beneficial to leverage the shell for discovery because it lets you be more granular with your searches, reducing the cost of first-pass litigation review by outside counsel (the numbers are scary). It also allows you to document the search syntax to ensure reliable internal knowledge transfer, and it simplifies repeated searches by allowing the individual performing the search to copy/paste the actual search commands.

Before you can even connect to the service via PowerShell, you’ll need to make sure you have the right level of permission. With RBAC (Role-Based Access Control) it’s possible to scope permissions within the compliance center so that individuals who don’t need access to things like email security settings or the audit log get only eDiscovery. In most cases the eDiscovery Administrator role will be enough for discovery activities, but if those individuals also need to be able to purge data they’ll need the “Search and Purge” role assigned as well, which by default is only part of the Organization Management role. A little more information on those permissions can be found here, and a quick reference on how to assign role group members to a role is included there as well. And since it’s easy for an attacker to use eDiscovery to exfiltrate data, don’t forget to require multifactor authentication for individuals with eDiscovery permission!

Now that you’ve got the permissions taken care of and we’ve touched on ensuring MFA is enabled, you’ll need to connect to the security and compliance center using the Exchange Online PowerShell module. Once the module is installed, it’s just a simple one line command to connect.

Connect-IPPSSession -UserPrincipalName matt@collab-crossroads.com

Once you’re connected, the syntax is simple. We’ll be working with the New-ComplianceSearch command to identify the potentially malicious message below:

The search syntax is simple. We know the message we want to identify was sent on 4/4 and has ‘Test’ in the subject and body. We’re not sure who else might have gotten this particular message, so we’ll check every mailbox in the organization using ‘All’ for the Exchange location. The search is Boolean, so there are a couple of gotchas now and then. In this case, note the parentheses around the sent-date conditions. We need to make sure those are evaluated first, so we put them in a block before moving on to the simpler keyword search in the subject. We won’t get into the complexities of operator precedence; just know that it works like simple math: what’s in parentheses is evaluated first, before the rest.

New-ComplianceSearch -Name "Test Search" -ExchangeLocation all -ContentMatchQuery '(sent>=04/01/2019 AND sent<=04/05/2019) AND subject:"Test"'

Once you’ve created the search, start it using:

Start-ComplianceSearch "Test Search"

Depending on how complex the search is, it might take a while to complete. In our case it’s a single message in a single mailbox, so it completes very quickly and you can see the results using:

Get-ComplianceSearch "Test Search"

There are a number of really handy metrics Exchange will report that you might be interested in from time to time. The obvious ones are who started the search, whether it’s completed, how long it took, and how many items were found.

If you pipe the Get-ComplianceSearch command to Format-List (FL), it will format the results as a list and show you every property you can select from. Since we only want a subset of those results, you can use Select-Object to pick specific properties from the previous command’s output.
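As a quick sketch (the property names here, like Items and JobStartTime, match the cmdlet’s typical output, but treat them as illustrative):

Get-ComplianceSearch "Test Search" | Format-List

Get-ComplianceSearch "Test Search" | Select-Object Name, Status, CreatedBy, JobStartTime, Items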

Now that we’ve confirmed the results are what we were looking for, we can take action on them using the New-ComplianceSearchAction command. Before choosing to delete anything, always make sure to preview and export a copy:
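The export command described next might look something like this (a sketch; the parameter values shown are illustrative, not the only valid choices):

New-ComplianceSearchAction -SearchName "Test Search" -Export -EnableDedupe $true -ArchiveFormat PerUserPst -Format FxStream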

A couple of things to note here: first, we’re deduplicating results in the export using the -EnableDedupe parameter. Next, we’re choosing to export one PST per mailbox the results are in, using the -ArchiveFormat parameter. Since the compliance center temporarily stores the export in Azure blob storage, it’s not possible to download the export through PowerShell, so we’ll go into the compliance center to download it. Under Search & Investigation and then Content Search you’ll see the search we just created. Selecting the Exports tab will show the export as well:

Since Chrome dropped support for ClickOnce, you’ll need to use Internet Explorer to install the small agent required to download the export. After clicking the “Download results” button within the export request you’ll be prompted to install the agent. Then you’ll be prompted for the export key shown below, as well as an export location to save your export.

Now that we’ve made sure that we’re deleting the right content and have a copy of that content, let’s go ahead and delete it. The New-ComplianceSearchAction command will be used here as well.

New-ComplianceSearchAction -SearchName "Test Search" -Purge -PurgeType SoftDelete -Confirm:$False

After verifying that the action was completed using the Get-ComplianceSearchAction command, you can see that the message in question was indeed removed:
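That status check might look like this (the “_Purge” suffix follows the service’s usual naming convention for actions, so adjust if yours differs):

Get-ComplianceSearchAction "Test Search_Purge" | Format-List Status, Results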

In this scenario we focused on deleting a potentially malicious email (phishing, malware, or malicious insider) from a mailbox. The process of exporting data for a litigation search is identical, with the exception that you’re not actually deleting the data. Next time we’ll take a look at using the threat explorer to accomplish the same activity!

Data Retention and Destruction in Office 365

As organizations continue to invest heavily in cloud collaboration, the amount of information maintained in the form of documents, emails, or instant messages can balloon rapidly. Moving that data into a single space like Office 365 is a huge win in terms of discovery for admin and litigation teams, and usability for end users, but the incredible amount of space available to users means there’s a larger burden on administrators to ensure that data is both discoverable and destroyed once it’s older than the organization’s legal compliance period.

The first part of this is simple. You need to make sure that data retention is configured for your organization. Not only is this handy for meeting your compliance requirements, it helps end users overcome the ‘oops’ moment when they accidentally delete data, it enables you to take advantage of inactive mailboxes to save licenses, and it helps you recover from malware in your tenancy by leveraging versioning in OneDrive and Teams.

To quickly enable retention, head to the security and compliance center and select ‘Retention’ from the Data Governance panel.

Retention_Nav1

When you elect to create a new policy you’ll be prompted to name your policy and identify exactly what you’d like your policy to do. In this case, we’re electing to retain data for seven years and also delete it after that duration. Depending on your requirements you may opt to maintain the data indefinitely without deleting it, or have a pure data destruction policy which deletes all data after a certain period of time. Note that the advanced options allow you to target specific data types such as personally identifiable information and financial information based on global standards. Additionally you can create your own data tags or policies to enforce there as well if you have a special requirement.

Retention_Nav3

Now that you’ve clarified how long you’d like to keep your data and whether you’d like it purged after that period, you need to identify which data locations the policy should apply to. Note that you’ll need to create two retention policies: one for default locations like OneDrive, Exchange, and SharePoint, and a dedicated one for Teams chats and channel messages, since Teams stores that data in an Azure chat service.
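If you’d rather script this part too, the same pair of policies can be sketched with the Security & Compliance cmdlets. The names and durations below are illustrative (seven years expressed as 2,555 days), so adjust them to your requirements:

New-RetentionCompliancePolicy -Name "Default 7 Year Retention" -ExchangeLocation All -SharePointLocation All -OneDriveLocation All

New-RetentionComplianceRule -Name "7 Year Retain and Delete" -Policy "Default 7 Year Retention" -RetentionDuration 2555 -RetentionComplianceAction KeepAndDelete

New-RetentionCompliancePolicy -Name "Teams 7 Year Retention" -TeamsChatLocation All -TeamsChannelLocation All

You’d then attach a matching rule to the Teams policy the same way.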

Retention_Nav4

Now that we’ve ensured that we’ve got our bases covered in terms of data retention we’ll circle back soon to discuss eDiscovery practices, search and destroy of malicious content, as well as leveraging the Threat Explorer in the compliance center to make that process easier.

 

So it’s 2019, and we’ve got phishing well in hand. Right?

Let’s face it, phishing is not new, and forms of social engineering have been around for as long as we’ve been trying to protect information on the internet. Anyone who’s responsible for protecting their organization’s users from unsolicited spam or social engineering can tell you that phishing is definitely still occurring on a regular basis. What many organizations today don’t realize is that not only is phishing still occurring, it’s becoming more complex, occurrences are increasing at an alarming rate, and users’ behavior is not changing at the same pace.

People Click Links

I had a great opportunity to hear a couple of my peers speak recently; they referenced Verizon’s 2018 Data Breach Investigations Report (DBIR) and pointed out a few alarming trends.

Phishing_Industries

First, let’s start with the scary one; sanctioned phishing campaigns uncovered that a small subset of individuals click on literally every link they get in their inbox:

“Unfortunately, on average 4% of people in any given phishing campaign will click it”

Phishing_ClickRate

Ok, so it’s only 4%. That’s not that bad, right? Well, consider a couple things; first, that individuals who have clicked on links in the past are far more likely to continue that trend, and that 4% of some of the largest organizations turns out to not be a trivial number at all. Walmart? 2.3M employees. JPMorgan? 256k employees. I wrote recently that a single compromised user gives an attacker a foothold in your organization and is often the start of most major data breaches.

So now that we know some individuals are susceptible, let’s take a look at the brighter side of that number; almost 80% of all users never click on a single link at all.

Reporting Incidents

It’s not a huge secret that vendors today rely heavily on reported samples to improve detection rates. What is pretty interesting is that the vast majority of phishing campaigns go unreported, with only 17% being reported at all. This means you have no idea how effective you are at blocking those messages inbound, and that there are plenty of instances where potentially malicious content has been viewed inside your organization without anyone knowing.

Phishing_Reporting

Bringing it All Together

Now that we’ve got a little transparency into some raw numbers, let’s spend a minute on a more positive note and outline some great features available to help combat the knowledge gap in end users and the drastic increase in inbound phish attempts.

Microsoft’s Security Intelligence Report outlines the increase in phishing messages their service identifies. They handle over 470 billion messages per month and saw a 250% increase over the span of 2018.

Phishing_Rates

As phishing campaigns become more and more complex, so have the ways service providers protect their end users from zero-day threats. Microsoft leverages the sender-side signals of those 470 billion messages to build a first-contact graph and apply machine learning for impersonation protection. On top of that, ATP adds Safe Links and Safe Attachments protection for Office. The technology proxies every single end-user click through a Microsoft server to validate the target URL before directing the user there. The cool thing about that? Safe Links works in Office, including Office Mobile for your remote users, and URLs embedded in attachments are equally protected.

Microsoft certainly isn’t the only provider making great strides on the email front; vendors like Proofpoint, FireEye, Palo Alto Networks, and Menlo have all innovated in their own right as well. What sets Microsoft apart is the sheer volume of mail it handles compared to other vendors, and the machine learning and artificial intelligence it applies to that data to keep improving protection for its users.

Keep up the Good Fight

Unfortunately the world isn’t going to become a peaceful place overnight and people aren’t going to suddenly become benevolent to their neighbors. While I’ll keep waiting for that day to come and doing my part to see it to fruition, I’ll also work just as hard to stay on top of emerging trends to make the internet a safer place for everyone to learn, collaborate, and enjoy a bottomless sea of cat memes.

Big thanks to Cam and Daniel for sharing sources for data.

 

 

Unique Local Admin Passwords – How and Why.

So here we are, early 2019 and just a few months removed from a data breach at Marriott that saw over 500 million guests’ personal information hit the public internet. If that sounds like an insane number, you’re absolutely right! That’s nearly 6% of the world population and about 150% of the population of the US.

Security_Oopsie

So the big question is how. How do things like this happen, how do attackers gain so much information, how do they exfiltrate the data, and how is it so common that Forbes is over here making cyber predictions for 2019!?

A while back I wrote about securing privileged access with a brief, high-level overview of how an attacker will work to gain access in an organization and then work to keep it. Part of that process is making use of local admin credentials harvested from the first compromised machine to pivot around the network and see what else those credentials work on. Sounds harmless enough, but what if the machine they gain access to is a domain controller? Or the workstation of an admin who happens to have administrative privileges on their daily account?

Security_killchain

This stage is difficult to detect because the attacker is using valid credentials to pivot around the network, digging for more information or waiting for an admin to slip up. The first step to stopping this is attacking the first link in the kill chain and limiting privilege escalation to other workstations. How? By making sure other workstations don’t have the same administrative password as the one the attacker already has.

Enter LAPS! The first part of the solution involves deploying an agent to each workstation and server on the network. That agent can be downloaded here. Since Microsoft was kind enough to provide the package as an MSI, it’s easy enough to deploy with a script (msiexec /i \\fileserver\share\LAPS.x64.msi /quiet), Intune if you’re into co-management, or good ole’ fashioned SCCM. Definitely be sure to update your images to include the agent as well! Once that’s deployed and imaging has been updated, admins need to run the installer on their PAWs (you are using a privileged access workstation, right?) to get the management client, PowerShell module, and GPO templates for management.

LAPS_AdminInstall

The next step is to make sure the Active Directory schema is taken care of. We need to extend the schema to include the attributes we’ll be working with, and then update permissions to account for users who shouldn’t be able to see those attributes. To extend the schema, hop onto one of the admin workstations where we just installed the management tools, import the module in a PowerShell session, and update the schema:

Import-module AdmPwd.PS
Update-AdmPwdADSchema

LAPS_SchemaExtension

After waiting a short while for that schema change to replicate we’ll move on to making sure only specific users and computers have access to the AD attribute that now stores these passwords. The first step is to remove permissions for groups at the OU level. If you’ve blocked inheritance you’ll need to make sure to take care of any nested OUs as well.

  1. Open ADSIEdit
  2. Right Click on the OU that contains the computer accounts that you are installing this solution on and select Properties.
  3. Click the Security tab
  4. Click Advanced
  5. Select the Group(s) or User(s) that you don’t want to be able to read the password and then click Edit.
  6. Uncheck All extended rights

*Be sure to preview all extended rights permissions first to make sure you aren’t removing permissions required by that group of users

LAPS_Security1

Now that we’ve taken care of scoping permissions away from unintended users and computers, we need to make sure the machines themselves can write to this attribute so they’re able to update their own passwords as they expire.

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Servers,DC=domain,DC=com"

LAPS_Security2

We’ll grant individual admin groups access to read those randomized admin passwords:

Set-AdmPwdReadPasswordPermission `
-OrgUnit "OU=Admins,DC=domain,DC=com" `
-AllowedPrincipals Domain\PasswordAdmins

And of course we need to account for password resets as well so let’s add reset permissions:

Set-AdmPwdResetPasswordPermission `
-OrgUnit "OU=Admins,DC=domain,DC=com" `
-AllowedPrincipals Domain\PasswordAdmins
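It’s also worth auditing exactly who ends up with read access on a given OU; the LAPS module ships a cmdlet for that (the OU path below is illustrative):

Find-AdmPwdExtendedRights -Identity "OU=Servers,DC=domain,DC=com" | Select-Object -ExpandProperty ExtendedRightHolders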

Finally, you’ll want to hop into Group Policy on that admin workstation you installed the tools on to establish the password policy settings you’ll be enforcing on those workstations and servers.

LAPS_GroupPolicy

There’s quite a bit here to chew on but it’s not technically challenging so give it a try in a lab, then roll it out to your user workstations and eventually the servers as well! Again, the idea here is to limit lateral movement and make elevation of privilege more and more difficult for bad actors.

We’ll have another shorter session to go over some of the day to day admin tasks like looking up those passwords and resetting them, but that day is not today!

Securing Privileged Access – Part 1

After a little bit of a holiday hiatus we’re back to start a segment on securing privileged access. When we refer to privileged access we’re talking about everything from the different levels of administrative access, to privileged users who might handle extremely sensitive data for your organization.

So many organizations still believe that traditional firewalls are enough to keep the big, bad internet at bay. Well, in the modern enterprise it’s foolish to believe that you’re able to keep your data within any boundary while your users begin to work remotely, leverage third party SaaS storage (Dropbox, Google Drive, OneDrive, etc.), or you begin to host your data in an enterprise cloud like Office 365. I’m not here to tell you that those firewalls aren’t absolutely necessary, but it’s important to realize that the days of recognizing your firewall as the security boundary are over and you need to work hard to secure identity, regardless of where your data is hosted.

Recognizing that, let’s take a look at how a typical credential theft takes place in an organization:

Privsec_CredentialTheft

An attacker is going to establish a foothold in your organization by targeting end users with social engineering or phishing attacks. Once they’ve got access to that user’s computer they’ll start working on lateral movement, meaning they’ll begin reaching out to other computers or servers on the network to see what else can be compromised. Maybe it’s by exploiting non-unique local admin passwords, maybe the originally breached user has access elsewhere, or maybe an admin isn’t using a separate account for admin activities. They pivot further and further until they gain control of the directory database, whether through actual domain admin permissions, by exploiting misconfigurations, or via server agent configuration.

Privsec_Stage1

The first step is a simple one, and in many organizations it’s already well ingrained in IT culture. Administrators need dedicated administrative accounts that are not shared with any other admins. I work with too many customers where this isn’t the case. Some have a generic account with domain admin that’s used for automation or just ‘general use’, or they flat out grant admin permission to their day-to-day account. Admin roles in your org should be reported on regularly to identify these, and there should be alerts for elevation of privilege with an actual process for following up on them.
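A minimal report along those lines, assuming on-premises Active Directory (the group names are the standard built-ins; adjust to your environment):

"Domain Admins", "Enterprise Admins", "Schema Admins" | ForEach-Object {
    $group = $_
    # Resolve nested membership, then pull each account's last logon for review
    Get-ADGroupMember -Identity $group -Recursive | Where-Object objectClass -eq "user" |
        Get-ADUser -Properties LastLogonDate |
        Select-Object @{n="Group";e={$group}}, Name, SamAccountName, LastLogonDate
}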

The next steps are, again, not technically challenging but require a culture change that many admins are averse to. Privileged Access Workstations need to be deployed for users holding high-value admin roles, and unique local admin passwords need to be deployed to workstations first and eventually servers to help stop lateral movement. I’ll come back to these two topics later with dedicated articles, but for now, know that it needs to be accomplished and needs to be prioritized.

Now that we’ve touched on high level goals of attackers and the first steps required to secure privileged access, I’ll follow up soon with part two. I’ll be working through Microsoft guidelines published here so definitely feel free to read ahead a little and ask questions!

Implementing Group Based Licensing in Office 365

So here we are on election day, and if you’re like me, you’re probably more than a little bit ready to think about something other than someone else’s political opinion. Well, here I am to help you out with a little ditty on licensing your users in Office 365.

Since managing licenses for thousands of individuals can become a struggle, most organizations will use some kind of automation. Something like the sample below can be scheduled to run and apply licenses with specific features based on a specific scenario. This works great if you don’t have any other options, but group based licensing doesn’t require any kind of on premises (or Azure) automation so if you’ve got licensing for it, definitely use it!

if ($_.UserPrincipalName -like "*@domain2.com") {

    # Disabled plans - customize to meet the needs of AA
    $DisabledPlans = @()
    $DisabledPlans += "Stream_O365_E3"
    $DisabledPlans += "TEAMS1"
    $DisabledPlans += "DESKLESS"
    $DisabledPlans += "FLOW_O365_P2"
    $DisabledPlans += "POWERAPPS_O365_P2"
    $DisabledPlans += "OFFICE_FORMS_PLAN_2"
    $DisabledPlans += "PROJECTWORKMANAGEMENT"
    $DisabledPlans += "YAMMER_EDU"
    $DisabledPlans += "EXCHANGE_S_STANDARD"
    $DisabledPlans += "MCOSTANDARD"

    Set-MsolUser -UserPrincipalName $_.UserPrincipalName -UsageLocation US

    $AccountSkuId = "org:LicenseName"
    $Option = New-MsolLicenseOptions -AccountSkuId $AccountSkuId -DisabledPlans $DisabledPlans
    Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName -LicenseOptions $Option -AddLicenses $AccountSkuId
}
elseif ($_.UserPrincipalName -like "*@domain.com") {

    # Disabling only EXO for another business unit
    $DisabledPlans = @()
    $DisabledPlans += "EXCHANGE_S_STANDARD"

    Set-MsolUser -UserPrincipalName $_.UserPrincipalName -UsageLocation US

    $AccountSkuId = "org:LicenseName"
    $Option = New-MsolLicenseOptions -AccountSkuId $AccountSkuId -DisabledPlans $DisabledPlans
    Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName -LicenseOptions $Option -AddLicenses $AccountSkuId
}

 

The struggle is that you’ll want a dynamic group to do this, and dynamic groups require a membership filter. If the filter is wrong you might unlicense users or overcommit, causing service disruption for those users. The first step is to determine which users will need which licenses. The easy ones to consider are any users with mail hosted in Exchange Online that require licensing (everything but resource, shared, or discovery mailboxes). Those mailboxes will need to be included in the dynamic group we’ll create next, so let’s gather everything else that needs to be excluded.

$Resources = Get-RemoteMailbox -ResultSize Unlimited | Where-Object {($_.RecipientTypeDetails -ne 'UserMailbox') -and ($_.RecipientTypeDetails -ne 'DiscoveryMailbox')}

 

Now that we’ve gathered what needs to be excluded from the group, let’s update an on-premises attribute that’s replicated to Azure and can be filtered on. I prefer to use extensionAttribute1-15 if they’re available, but I also leverage the ‘info’ attribute on premises so you can be granular with scripting logic later if you have to. In my case I chose to filter out anything with the word ‘Resource’ in extensionAttribute1:

$Resources | ForEach-Object {

    [string]$upn = $_.UserPrincipalName

    $user = Get-ADUser -Properties info,extensionAttribute1 -Filter {userPrincipalName -eq $upn}

Since the info attribute may already hold data, we’ll want to make sure we don’t bulldoze what’s already there before setting it. In this case I’m checking to see if there’s anything in the attribute, and if there is, we’ll add ‘Resource’ on a new line within it.

    if ($null -eq $user.info) {
        Set-ADUser $user -Replace @{info = 'Resource'; extensionAttribute1 = 'Resource'}
    }
    else {
        # Append on a new line to preserve whatever is already in 'info'
        Set-ADUser $user -Replace @{info = "$($user.info)`r`nResource"; extensionAttribute1 = 'Resource'}
    }
}

 

Great! Now that we’ve set an attribute the filter can exclude, let’s make the dynamic group in Azure to assign those licenses to. Since I’m a shell kind of guy, here’s a sample to create the group.

New-AzureADMSGroup -DisplayName "Licensing - E3" `
-Description "Dynamic group created to automatically assign licenses to mail enabled users" `
-MailEnabled $False -MailNickName "group" -SecurityEnabled $True -GroupTypes "DynamicMembership" `
-MembershipRule '(user.mail -ne null) -and (user.accountEnabled -eq true) -and (user.extensionAttribute1 -ne "Resource")' `
-MembershipRuleProcessingState "On"

 

Now that the more complicated portion, creating a dynamic group that fits your users, is out of the way, the last thing left to do is follow the simple documentation to assign licenses and features to that particular group.

Here’s to my favorite kind of people out there, those who know how to stuff the ballot box as well as their faces! #VotePizza #ChicagoStyle #LouMalnatti’s

VotePizza

Exchange 2019 – Why?

With the formal release of Exchange 2019 the Exchange world was shaken up (yet again), and the main question most of us have is why? Since upgrading Exchange in your environment isn’t exactly a small task, why should you jump to the new, fancy flavor? Well, let’s hop right into that!

Security

security

Exchange 2019 is the first flavor of Exchange that fully supports deployment on Windows Server Core. How’s that a security improvement? Well, since Core is lightweight, containing only the essentials, there’s a drastically reduced attack surface.

Not sold on Core? How about this; Exchange 2019 out of the box will only use TLS 1.2.
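If you want to confirm how a given server is configured, Windows stores explicit protocol overrides under the SCHANNEL registry keys (a sketch; the key is absent by default, which means the OS defaults apply):

$path = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server"
if (Test-Path $path) {
    Get-ItemProperty -Path $path | Select-Object Enabled, DisabledByDefault
}
else {
    "No explicit TLS 1.2 override - the OS default applies"
}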

Uptime

uptime

We talked about Core, right? Since it doesn’t install features and components that aren’t absolutely necessary (Internet Explorer? Media Player?), there are fewer patches to deploy, and fewer still that require a reboot. Assuming you follow the preferred architecture when you deploy, there should be no problem with rebooting, but why chance it when you don’t have to?

Not only that, but with major enhancements to search indexing the catalog fails over much, much faster and who isn’t a fan of that?!

 

Performance

With Exchange 2016 there were scalability struggles. Manufacturers started producing larger physical servers and Exchange supportability flat out didn’t cover them, which led to complicated virtual deployments and customers working outside supportability guidelines. Exchange 2019 now supports up to 48 (physical) cores and 256GB of RAM.

Search was also drastically changed to leverage Bing technology, so failovers happen more quickly and reliably. How, you ask? By storing the search indexes within the databases themselves and shipping index data along with database log shipping.

SSDs! For the longest time they were supported, but not technically recommended due to cost and capacity. Well, the read latencies of spinning disks haven’t improved at the same pace as physical capacity has. The struggle is that it’s tough to read TBs of data fast enough on disks spinning at only 7,200RPM. How was that addressed? MCDB (MetaCache DataBases)! Basically, a portion of the most actively accessed data is stored on the SSDs, which improves performance drastically. Since MCDB is entirely new and a little complicated, I’ll come back and write about it in detail soon.

EX2019_MCDBGains

Up next is a two-parter about preparation and installation!

Managing Major Changes to Office 365

Admins are expected to keep tabs on major changes to the service and understand how those could impact uptime, or just good ole’ fashioned end user experience. Unfortunately, there are hundreds of items coming down the pipe that may or may not be relevant to every organization and it’s difficult to follow it all.

The first part of making that manageable is understanding how Microsoft’s release options work. Updates are released to different update ‘Rings’ as the updates become more mature. The first three rings are Microsoft release teams where Microsoft consumes the updates first, prior to being formally released to the rest of the world.

M_O365_UpdateRings

After that, you can select to add friendlies to the targeted release group. These would typically be IT groups or power users who will be a little more adaptive to change. Here are a few of the benefits of making sure you have decision makers in the targeted release group:

  • Test and validate new updates before they are released to all the users in the organization.
  • Prepare user notifications and documentation before updates are released worldwide.
  • Prepare the internal help desk for upcoming changes.
  • Go through compliance and security reviews.
  • Use feature controls, where applicable, to control the release of updates to end users.

Here’s how you can add individuals to targeted release.

Handy, right? Here’s a super common use case: Teams was released about a year ago and most customers had no idea exactly how it would impact them or how their users would cope. The feature was added to enterprise licensing and enabled by default upon standard release. The trouble? End users could go to http://teams.microsoft.com and create a new team with any name and any picture, which showed up in the address book with (at the time) no central management and could potentially be shared with the wide world of the internet. Mayyyybe you might want to review that feature to make sure you have controls in place that match your organization’s goals.

Facepalm

In addition to release options, you’ll want to make sure your team is monitoring the message center for major updates.

M_O365_MessageCenter

Updates in the service come at you pretty fast, but Microsoft does a pretty decent job of providing information and allowing you to plan for them. Make sure you keep an eye on the roadmap, the message center, and of course make use of targeted release to find out how those changes will impact your users!

Now don’t mind me while I go drown myself in meaningless college football games!

Saturday

Securing Exchange Online

For those of you who weren’t lucky enough to attend, or simply didn’t know it was happening, Microsoft wrapped up Ignite last week and left us with plenty of savory info!

For most of 2018 I've been working with customers of all sizes, helping them to understand security in Office 365, namely in OneDrive for Business and Exchange Online. Most customers are at least a little leery about placing critical workloads like Exchange and file storage in the cloud without understanding at least a little bit about how that information is secured. Well, Microsoft took a moment to outline a few simple configuration changes that will immediately improve your security stance in Exchange Online.

So to get this shindig kicked off, let's take a look at the different stages of a breach. When defending against threats, it's important to understand them as a killchain and defend as far to the 'left' in the killchain as possible, and of course, throughout.

[Image: Exchange Online attacker killchain]

Attackers will perform discovery, or recon, of your tenancy to understand more about your users and what they may be able to use to exploit them. This may be as simple as checking your company's webpage or Wikipedia to find out who senior management is, in order to either target them directly or use their information to convince others.

After they've gathered enough information, they'll try to use it to actually breach your organization. Things like password spray, brute force, and just good ol' fashioned phishing will be their tools of choice.

Once they've gained access, the exciting stuff starts! They'll first enumerate your users, including your admins, and specifically target them. Not only that, they'll do their best to retain access to any accounts they compromise with things like inbox rules, mobile device enrollment, delegation, external sharing requests, etc. Finally, they'll use common capabilities like eDiscovery, mailbox protocols, and external forwarding to grab your information and sell it to the highest bidder!

So how do you protect yourself? Again, focusing to the left in the killchain, we'll want to start with end user education. Educate your users about passwords and the kind of information they divulge on social media. Then we'll do our best to combat the initial breach by enforcing things like extranet lockout in ADFS, ensuring mail authentication (SPF, DKIM, and DMARC) is set up correctly, and limiting the app framework in Azure.
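
As a sketch, here's what setting up mail authentication might look like from Exchange Online PowerShell. contoso.com and the dmarc-reports mailbox are placeholders for your own domain and reporting address; SPF and DMARC themselves live in DNS at your registrar, while DKIM signing is configured in the service:

```powershell
# SPF is a DNS TXT record on your domain, e.g.
#   "v=spf1 include:spf.protection.outlook.com -all"

# DKIM: create the signing config (disabled) to get the two CNAME values...
New-DkimSigningConfig -DomainName contoso.com -Enabled $false
Get-DkimSigningConfig -Identity contoso.com |
    Format-List Selector1CNAME, Selector2CNAME

# ...publish both CNAMEs in DNS, then turn signing on
Set-DkimSigningConfig -Identity contoso.com -Enabled $true

# DMARC is another DNS TXT record, at _dmarc.contoso.com, e.g.
#   "v=DMARC1; p=none; rua=mailto:dmarc-reports@contoso.com"
```

Starting DMARC at p=none lets you monitor the aggregate reports before tightening to quarantine or reject.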

To address elevation of privilege, we really want to focus on admin roles and impersonation. First, multifactor auth is an absolute requirement for administrators. That's non-negotiable. Impersonation should only be used when absolutely necessary, and that's pretty rare. Some third-party apps require impersonation to work, but other than that, almost no one should be granted impersonation permissions. Since it doesn't change too often, a pretty easy way to review it would be to simply set up an alert policy in Office 365 to notify an admin when impersonation has been granted. If it wasn't you, then remove the access and start an account breach remediation process for the person who granted it.
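
One way that review might look, as a sketch: the first command runs in Exchange Online PowerShell, the second against the Security & Compliance Center (activity alert availability depends on your subscription, and secops@contoso.com is a placeholder address):

```powershell
# Exchange Online: see who currently holds ApplicationImpersonation
Get-ManagementRoleAssignment -Role ApplicationImpersonation |
    Format-Table RoleAssigneeName, Role

# Security & Compliance Center: get notified whenever a new
# role assignment (impersonation included) is created
New-ActivityAlert -Name "Role assignment created" `
    -Description "Fires when a management role assignment is granted" `
    -Operation New-ManagementRoleAssignment `
    -NotifyUser secops@contoso.com
```

If the first command returns anyone other than the service accounts you expect, that's worth a conversation.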

Next, we'll want to address entrenchment. Attackers will do their damnedest to retain the access and information they have. First, look into Azure Information Protection: if your information leaves the organization but is encrypted, it's much better than the alternative. Attackers will create inbox rules for end users to automatically forward messages, or to hide or remove responses based on keywords like 'helpdesk', 'phish', 'hack', etc. Then they'll forward mail that looks interesting to them outside of the organization. Automatic forwarding outside of your organization should be disabled unless absolutely necessary.
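
Here's a sketch of what that cleanup might look like in Exchange Online PowerShell, both blocking auto-forwarding going forward and hunting for what's already in place (the inbox rule sweep can take a while in a large tenant):

```powershell
# Block automatic forwarding to the internet at the org level
Set-RemoteDomain Default -AutoForwardEnabled $false

# Find mailboxes with forwarding configured on the mailbox itself
Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.ForwardingSmtpAddress -or $_.ForwardingAddress } |
    Format-Table Name, ForwardingSmtpAddress, ForwardingAddress

# Find inbox rules that forward or redirect mail
Get-Mailbox -ResultSize Unlimited | ForEach-Object {
    Get-InboxRule -Mailbox $_.Identity |
        Where-Object { $_.ForwardTo -or $_.RedirectTo -or $_.ForwardAsAttachmentTo }
}
```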

Finally, they'll use admin roles to export all the information they can. Make sure you don't grant discovery permissions to people who don't need them. Then alert on who exports that information and audit what else that person did.
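
A quick way to start that audit, as a sketch from Exchange Online PowerShell (assuming unified audit logging is enabled for the tenant):

```powershell
# Pull the last week of eDiscovery activity from the unified audit log
Search-UnifiedAuditLog -RecordType Discovery `
    -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -ResultSize 1000 |
    Format-Table CreationDate, UserIds, Operations
```

Anything unexpected in UserIds is your cue to pull that user's full audit trail.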

There's a lot here, but start with some simple things. Enforce multifactor authentication for both admins and all users; that will drastically reduce the risk to your end users. Next, disable legacy protocols like POP, IMAP, and SMTP authenticated send to ensure that attackers can't bypass MFA and can't use accounts they have access to in order to phish your internal users. Finally, set up alerting policies for forwarding, elevation of privilege, and eDiscovery.
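
Disabling those legacy protocols might look like this in Exchange Online PowerShell, as a sketch (pilot on a test group first, and note that the SMTP AUTH parameter may not be available in every tenant):

```powershell
# Turn off POP and IMAP for every mailbox
Get-CASMailbox -ResultSize Unlimited |
    Set-CASMailbox -PopEnabled $false -ImapEnabled $false

# Disable SMTP authenticated send tenant-wide
Set-TransportConfig -SmtpClientAuthenticationDisabled $true
```

Expect a few printers and line-of-business apps to surface once SMTP AUTH goes away, so plan exceptions deliberately rather than rolling the change back.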

Don’t believe me? Watch the presentation from Ignite:

https://myignite.techcommunity.microsoft.com/sessions/65645#ignite-html-anchor

 

The Journey Begins

And here we are at the crossroads of collaboration and productivity. With this little ditty I'll make my foray into the wild, wild world of technical blogging.

I've been lucky enough to become an established engineer and, over the last ten years, to have claimed some of the most iconic brands and organizations in the world as my customers. My goal is to share some of the knowledge I've gained along the way, hopefully with a bit of a smile as well.

Here you'll find tips, tricks, and news about the technical collaboration solutions used by organizations today: everything from specific script samples, deployment guidance, and migration scenarios to major news from industry leaders.

I'll start with a message that struck me quite a while back and hope it does the same for you. Steve Jobs paid it forward to the graduates of Stanford University in 2005 when he sent them off into the world with four words: stay hungry, stay foolish.

For those of you who don't recognize it, Steve was referencing the closing message of The Whole Earth Catalog. His message was one of ambition, living life, and embracing risk.

Education begins the gentleman, but reading, good company, and reflection must finish him. – John Locke
