Friday, September 30, 2022

Optus and how to DevOps badly in a few easy steps

Full disclosure: I put Optus in the title because those LinkedIn articles advocate the need for clickbait to attract viewers. Problem is, I am actually going to talk about this company. But this article is really about code development gone bad; Optus just happens to be the perfect example of, in the words of Jeremy Clarkson, what could possibly go wrong.

We are Agile!

In earlier, simpler times, the recommended software development lifecycle model (SDLC for you acronym addicts) was the Waterfall Model. There are many places which describe it better than I ever could, so suffice it to say that it is linear: it starts with the idea, then goes to the design, and then through a few more steps including coding and testing until the product is deployed and enters maintenance mode. In other words, you start with an idea and end up with a product.

Making the code secure, or implementing (Buzzword time!) privacy by design, was fairly easy if the security and privacy team was involved from the get-go, as that was just another well-defined step.

But what if the product needs to be changed? Not just a patch, but a feature request or something that requires a new library or a user interface redesign. You need to go back to the start.

You could say it is a bit rigid, and many people would agree with you. The next step was to modify the model so you could hop back a step or two, and that started to get messy. The bottom line is that it does not handle change well. In many fields that is completely fine. However, for code that is always changing and pushed into production as soon as the changes are done, like on a website, it can slow down delivering a working product. In some industries, whoever ships first, even if the product is not perfect, wins. So, we need something better.

We evolved into the Agile model, which, as a friend taught me, is also called the "Never Finished Model." What the joke implies is that this model is designed to handle changes quickly and deliver a working product even if it is not perfect. The reason is that you can improve on it later once you have some feedback from customers.

The following picture shows a typical Continuous Integration/Continuous Deployment (CI/CD) pipeline, which is a hallmark of using the Agile model in code development. How do we account for security and privacy here? DevSecOps places security controls in the CI/CD process of DevOps. Note the two red boxes: they are the points where we add security testing to the cycle, one for Static Application Security Testing (SAST) and the other for Dynamic Application Security Testing (DAST). The red arrows indicate that any funny business they find is then sent to something which logs and reports it by creating tickets, sending emails, or the like. This is, of course, ideally supposed to be done in conjunction with training developers in secure coding, (Buzzword Alert!) privacy by design, and whatnot.
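To make the SAST part a bit more concrete, here is a minimal, purely hypothetical Python snippet of the sort of thing static scanners (Bandit, Semgrep, and friends) typically complain about; the endpoint and token below are made up and have nothing to do with any real codebase:

```python
# Hypothetical example of code a SAST scanner would typically flag.
import requests

API_TOKEN = "super-secret-token-123"           # hardcoded credential: classic SAST finding
BASE_URL = "http://api.example.internal/v1"    # plain HTTP, no TLS: flagged as cleartext traffic


def get_customer(customer_id: int) -> dict:
    """Fetch a customer record from an imaginary internal API."""
    resp = requests.get(
        f"{BASE_URL}/customers/{customer_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

Both findings would land in the pipeline as tickets, which is exactly the friction the next paragraph is about.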

In reality, some companies/developers which should know better decide that all this slows them down and hampers their style. In other words, they need to be putting out new code with new features, and privacy and security are not features but cost centers.

Enter the Optus

Singtel Optus Pty Limited, a.k.a. Optus, is the second-largest wireless carrier in Australia. In the last week of September 2022, Optus reported that on 22 September 2022 it was the victim of a very sophisticated cyberattack by members of a criminal or state-sponsored organization. This attack resulted in a major personal data breach, where the names, dates of birth, phone numbers, email addresses, street addresses, driver's licences, and passport numbers of both current and former customers were leaked. Optus chief executive Kelly Bayer Rosmarin said that they "are not aware of customers having suffered any harm."

Insert here the videos of a guy in a hoodie in a dark room and computer screens showing random Linux output.

What does this very sophisticated cyberattack have to do with coding?

Glad you asked.

You see, later on it was found that Optus had an unauthenticated API, http://api.www.optus.com.au, that released all of the personal data it stored, not only of current but also of previous customers (there is the case of someone who had not been an Optus customer for the last 14 years and not only received an email from them about the breach but also started to be flooded with spam). We are talking about data from 10 million people. Unencrypted.

Optus detected the event when the attacker started hitting the API hard.

So, the questions are:

  1. Why did it have an exposed API without some kind of authentication? Perhaps that was originally done to make testing the API more convenient for developers; I have seen that in the wild myself. When the developers/DevOps from the environment in question were asked to at least limit access to a network only reachable from behind their firewall, they shrugged it off, saying the VPN (which is not a solution but sure is an improvement) was too cumbersome to use from their personal laptops. A minimal, purely illustrative sketch of what requiring authentication could look like appears after this list.
  2. Why was the connection to said exposed API unencrypted? Do you remember when we said that DevSecOps places security controls in the CI/CD process? That probably would have caught it: the SAST would have noticed the unencrypted connections in the code; the ones I have used before would bark at unencrypted traffic (and hardcoded passwords, which was not the case here since no passwords were used). In the real world that does not happen as much as people believe. In fact, it is all too common to hear that DevSecOps slows down DevOps' work.
  3. Why was the personal data stored unencrypted? Once again, convenience. Maybe encryption was recommended at some point and then turned down because developers argued it would slow the response time of the system. Once again, SAST or a configuration scan would likely have flagged that.
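Since none of us has seen Optus's actual code, the sketch below is purely illustrative: a minimal Flask service (hypothetical names and endpoints throughout) that refuses to serve customer records unless the caller presents a valid API key, which is roughly the bare minimum question 1 is complaining about:

```python
# Minimal sketch: require an API key on every request (illustrative only).
import hmac
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
EXPECTED_KEY = os.environ["CUSTOMER_API_KEY"]  # injected at deploy time, never hardcoded


@app.before_request
def require_api_key():
    # Every request must carry a key; a constant-time compare avoids timing leaks.
    supplied = request.headers.get("X-Api-Key", "")
    if not hmac.compare_digest(supplied, EXPECTED_KEY):
        abort(401)


@app.route("/customers/<int:customer_id>")
def get_customer(customer_id: int):
    # Placeholder lookup; a real service would also enforce per-caller authorization
    # and rate limiting so one key cannot slurp the whole customer table.
    return jsonify({"id": customer_id, "status": "example record"})


if __name__ == "__main__":
    # TLS would normally be terminated in front of this (load balancer or reverse proxy);
    # the ad-hoc self-signed certificate here is for local testing only.
    app.run(ssl_context="adhoc")
```

A real deployment would use something stronger than a shared key (OAuth2 client credentials, mTLS), but even this much would have stopped anonymous scraping.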

Clearly there were poor security practices at play here. Perhaps DevOps security and privacy training never happened, or SAST/DAST was never implemented in the SDLC chain. Usually that happens because they are considered cost centers in a business and, as mentioned earlier, slow down progress. Remember we mentioned that automated security testing will create tickets developers will have to deal with, in addition to the other tickets they already have on their plates.

Post Mortem

Don't be that guy!

  • Privacy by design would not have allowed this kind of code to even make it into the repo.
  • Encrypt the traffic to the API, period. Ideally that should be done at the API level. I know some people will put an Nginx proxy in front of the unencrypted API (using Kubernetes or Docker), and I cringe at that: it is an improvement over the Optus setup, but not by much.
  • Encrypt your data at rest. Yes, that is especially important for personal data, but it is a good habit regardless (see the sketch after this list).
  • All connections to an API should be authenticated by default. If there is a query, say a status listing, that you want to make available to unauthenticated users, spend some serious time thinking about the consequences.
  • Ensure your CI/CD process has proper security controls. If DevOps is being swamped with the tickets generated by these controls, it may mean they need more security and privacy training, or the controls need better tuning, or the external code/libraries you rely on are not as well written as they should be. That is how BadUSB and many of the IoT issues came into being.
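As a sketch of the "encrypt your data at rest" point, here is one simple way to do field-level encryption in Python using the cryptography library's Fernet recipe. The field names and values are made up, and the hard part, key management (KMS, HSM, vault of your choice), is deliberately hand-waved:

```python
# Minimal sketch: encrypt sensitive fields before they ever hit the database.
from cryptography.fernet import Fernet


def protect(fernet: Fernet, value: str) -> bytes:
    """Encrypt a sensitive field (say, a passport number) before storing it."""
    return fernet.encrypt(value.encode("utf-8"))


def reveal(fernet: Fernet, token: bytes) -> str:
    """Decrypt a stored field only where an authorized code path needs the cleartext."""
    return fernet.decrypt(token).decode("utf-8")


if __name__ == "__main__":
    # In real life the key lives in a KMS or secrets manager, not next to the data.
    key = Fernet.generate_key()
    f = Fernet(key)
    stored = protect(f, "PA1234567")   # made-up passport number
    print(stored)                      # ciphertext: a leaked table dump stays opaque
    print(reveal(f, stored))           # cleartext, only where it is genuinely needed
```

With something like this in place, an attacker who walks off with the database gets ciphertext instead of ten million passport numbers.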

Monday, September 26, 2022

Phishing Is Too Easy - 3

Last week I received another traditional phishing email; apologies for the lack of images, as my email account is set up not to load externally attached pictures. Here it is, with my address removed:

Phishing email disguised as an invoice with an attached PDF, pretending to come from Norton

Yes, this is pretty much a variation of the last one I commented on months ago, namely:

  • It is an invoice for some product, in this case implied to be some kind of Norton product.
  • It creates a veil of credibility by alluding (blue box), in a rather half-assed way, to being related to a real company. Note it claims to be from "Norton Support LLC," whoever that may be. Since the average person has probably heard of Norton, which sells an antivirus and other security products, it is easy for said person to associate the two.
  • Still on the credibility front, the sender address is supposedly from QuickBooks (I did not bother to check the header). Yes, a large company like Norton would not be using QuickBooks to send its bills. However, if you have to deal with purchasing, you have probably seen invoices from smaller businesses that use the online QuickBooks site; when they send their invoices, the sender will show up as "<quickbooks@notification.intuit.com>". But we would hope they would look more like "Something Of Doom LLC <quickbooks@notification.intuit.com>" instead of "Intuit E-Commerce Service <quickbooks@notification.intuit.com>"; I think the latter is not the default value, but it sounds credible enough.
  • To create the urgency, the invoice is for $800. That will make someone's heart beat a bit faster and make them immediately want to open the attached PDF file (red box) to find out what this invoice is all about. This is a bit lazier than the last phishing email we posted about, as some mail services will disable attachments with macros in the hope of blocking malicious payloads. However, most mail services do not do that; mine could not be bothered and told me that if I want to see it, and be properly infected, I need Adobe Acrobat Reader (green box). Since my mail service does not automagically open anything, I have some extra time to read the email and decide what I want to do next.
  • It provides a phone number which may be tied to the phisher (VoIP?), so if the frantic recipient of the email calls, the phisher (we called him Peggy in the last phishing post) can then social engineer his way into the victim's computer.
  • The return address is a typical quasi-randomly created Gmail one; they could not be bothered to make it sound like it came from the billing department it claims to be.

How effective is it? I think it depends on where people focus. The phishers hope their marks will see the value of the invoice -- $800 -- and immediately open the PDF to find out what is going on. The best thing to do here is stop -- but not stop/drop/roll, as you are not on fire -- whenever you see something suspicious, especially when it claims to be urgent. Then ask yourself if you were expecting an invoice from Norton. Then look at the email addresses and see whether they are suspicious.

Remember: phishers are lazy, and they hope you are equally lazy!

Saturday, September 17, 2022

There and back again: DEFCON 30

The second slide in the workshop reminds the audience that we had put instructions on GitHub for what to do before attending the event.

No, I did not postpone posting about my trip to DEFCON30 until now because I did not have anything to post this month. The truth is I was slacking. There, I said it.

This will be a bit of a post mortem of our workshop. Will this post have any useful info? Don't hold your breath; what I can promise is that there will be many opportunities to laugh at our expense.

The Plan

For those who read the announcement for our workshop at the Crypto and Privacy Village, you know that there are two authors -- Matt and yours truly -- who put together the mess without killing each other; the fact we had half a continent between us probably helped.

Originally, the plan was to start with an explanation of why this phone privacy thing was so important and then show how to do it. Ideally people would have read the announcement, followed our instructions, and shown up with a phone ready to be configured. While one of us was at the podium, the other would be helping the audience.

After we had the entire workshop done and did a few dry runs, we started thinking: how many people will bring a phone that meets the requirements? Probably not many -- not many people have spare phones that can take CalyxOS or LineageOS sitting in their kitchen drawer -- and we would not be able to bring enough loaners, as all the resources in the workshop came out of our own pockets. We could just shrug it off and tell people, "Hey, you did not bring a phone, so we will bore you with screenshots."

Thing is, we had taken a lot of screenshots of everything we would be showing on the phone, in case we were not able to share the phone screen or point a camera at it. So this was an option, but we felt it would detract from the workshop; instead of being something interactive, it would be no better than watching a video.

We needed a plan B.

What if we provided an emulator? It would not do everything a real phone can, but it would allow the audience to follow along on their laptops. Since we were going to focus on CalyxOS (we had only an hour to run the entire workshop; compromises had to be made), we decided to create that image, make it available somewhere, and update the wiki with instructions on how to use it. We also asked the Crypto and Privacy Village (CPV) people to add a single line to the workshop announcement, indicated with a green line in the picture below, to tell people they should install Android Studio on their laptops.

Workshop announcement, with the line 'Alternatively, a laptop with Android Studio installed' added to it, indicating you may want to install it if you do not have a phone to use in the hands-on bit

The plan was to have everything finished two weeks before the event and then take the last week to practice, and ensure we had a reliable way to hand out the emulator images.

Things did not happen according to the plan.

Matt was able to go to DEFCON from the beginning of the event; I do not know if he also managed to stop by BSidesLV. I, on the other hand, was a bit more time constrained: I flew out on the first flight on Friday and was going to return on Saturday after the workshop. In any case, we were going to try to attend as many events and talks as possible and meet up with people we had not seen in ages. I also planned on volunteering at the CPV.

What really happened?

  1. Building the CalyxOS phone image was not as smooth as we hoped for. In plain English, I could not make it work. I had no issues building LineageOS images in my Docker build environment -- if someone reminds me, I can post instructions on how to do that later -- but CalyxOS was fighting me all the way. Fortunately we were working in parallel, and Matt was able to make it work.

    I will let Matt post on his blog how to create the CalyxOS image with all the apps already installed, as he is the one who made it work. In fact, it worked so well that he used it instead of a real phone during the hands-on part of the workshop.

  2. We spent too much time trying to come up with a clever way to deploy the phone image. After days of frustration we settled on a simpler way to do it, wrote docs that worked whether you had a Linux, Mac, or Windows laptop, and put them with the image.
  3. The emulator stopped working. I do not know why but it went on strike. More frustration ensued. Was it the emulator itself or the image? Once again Matt rose to the occasion and made it work.
  4. We also found out it would take too long to download the image we built over the DEFCON public network. Fortunately we had a bunch of USB drives; we formatted them in a Windows file system so all three OSes could mount them and put the image and instructions on each.

There are probably more things that went wrong, but I cannot think of them right now. Bottom line is we spent most of the time that week working on these bugs. And, we made it work.

Showtime

The CPV people did a great job. Everything was working smoothly on their side. I did most of the overview and then Matt took over for the technical part:

Matt Nash presenting the hands-on part of the workshop. Audience is spaced out following the social distancing requirements

You will note in the above picture that the audience (the picture was taken from the back out of respect) has set some chairs apart for social distancing's sake. I then came back to the podium, sporting one of my favourite shirts (bonus points if you recognize it), for the final comments, and we then took questions. After it ended, Matt was surrounded at the podium by members of the audience for a long while, until the DEFCON Goons kicked us out.

Mauricio Tavares on the podium spreading lies and misinformation while sporting the classic Oregon Trail shirt.

Thank you for all the fish

  • Avi Zajac and the rest of the Crypto and Privacy Village crew for not only having us there but making the event possible. And the badge. And the shirt (I am afraid of wearing it out because it is nice). And keeping the Goons at bay. And the sticker!
  • The NCC Group for mentioning us in its August announcement.
  • DEFCON for, well, being DEFCON. I do wish I had more time to see it all this year instead of being in a hotel room trying to get everything working. But it was all worth it in the end.
  • CalyxOS for trying to make a more secure and private Android distro easier to install. There is more around this line item, but I am getting ahead of myself.