Thursday, December 16, 2021

Apache Log4j and IoT

Yes, this is yet another blog article on the vulnerability in the Apache Log4j Java-based logging framework, first disclosed (CVE-2021-44228) on Dec 9, 2021. Many brave souls spent their weekend patching their servers and other computers to version 2.15.0. And then a new vulnerability was found in its replacement (CVE-2021-45046), which requires an upgrade to 2.16.0.

And then 2.16.0 did not address CVE-2021-45105, so we are now (Dec 19) on 2.17.0.

There are right now many great articles by brilliant people on how the attack takes place, what you can do to detect whether your (Apple, Linux, Windows) systems are affected, and how to prevent it. Therefore, this article will do the unthinkable and instead focus on the security and privacy impact on Internet of Things (IoT) devices.

The Problem

According to IoT Analytics, the global number of IoT devices is around 12.3 billion. That is a lot of coffee machines, fridges, Nest thermostats and cameras, Amazon Ring doorbells, smart televisions, insulin pumps, and talking toasters. And some of them are voice operated thanks to Alexa, Cortana, and Siri. How many of them have their firmware/OS updated once deployed? Is that update triggered by the user, run on a schedule in the device, or pushed from the mothership to the appliance? How many of them can have their firmware/OS updated to begin with? And, to keep this on topic, how many of them use log4j?
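Answering that last question for a single device is at least scriptable: if you can mount its firmware image, you can look for the library. Here is a minimal sketch in Python; the jar-name pattern and the version cutoff are assumptions for illustration, and vendors that shade or rename the jar will slip right past it.

```python
# Hypothetical sketch: walk a mounted firmware image looking for log4j-core jars.
# The filename pattern is an assumption; shaded/renamed jars will not be found.
import os
import re

LOG4J_JAR = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_log4j(root):
    """Return (path, version tuple) for every log4j-core jar under root."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            m = LOG4J_JAR.match(name)
            if m:
                version = tuple(int(g) for g in m.groups())
                hits.append((os.path.join(dirpath, name), version))
    return hits

def is_vulnerable(version):
    """Anything below 2.17.0 is hit by at least one of the CVEs above."""
    return version < (2, 17, 0)
```

Of course, this assumes you can get a shell on (or an image of) the appliance in the first place, which for most consumer IoT devices is the real problem.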

Even without this new vulnerability, IoT devices are not known for their initial security settings or for their capability to be upgraded to remain secure. Nest is actually one of the better ones, but attacks against it have been documented, including simply hacking into users' accounts to identify their patterns and find the best time to rob their houses (ideally when the owners are away for a few hours).

Some claim the strong point of most IoT devices is that they are connected over WiFi, as if that made them more secure than being connected over ethernet or fibre. In the words of one such claim, the device "is completely a wireless device that has a low tendency for vulnerabilities." That assumes the wireless network is impervious to attacks, which is not the case. First, the attacker does not need to be physically inside the location; being in the parking lot suffices. Second, it gives patient criminals the upper hand. Finally, someone can break into the website used to manage these devices and push a malicious payload that scans the target network and its traffic for vulnerable devices. The log4j one is but another vulnerability in their arsenal.
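To make the log4j case concrete, the attack begins with a lookup string landing in anything the device logs. A deliberately naive detection sketch follows; real payloads use nested `${lower:...}`-style obfuscation that a simple regex like this will not catch, so treat it as a starting point, not a filter.

```python
# Rough sketch: flag log lines containing the unobfuscated CVE-2021-44228
# lookup string. Obfuscated variants (nested lookups, mixed case tricks
# beyond simple casing) will evade this check.
import re

JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def suspicious(line):
    """True if the line contains a naive JNDI lookup payload."""
    return bool(JNDI_PATTERN.search(line))
```

On a headless IoT device you would never get to run this yourself, which is rather the point of this article.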

Privacy Impact

The kill chain here is business as usual:

  • Get a foothold through an IoT device
  • Upload shellcode and/or packages to this device
  • Use the device to scan the network, learn the lay of the land, and locate more potential targets.
  • Rinse and repeat until finding useful data, be that files and passwords or just enabling microphones and cameras.

The attacker who is there just for the joy of breaking in will then post captured pictures and videos, and send messages back to the IoT device owners, as shown in the previous video. The more malicious attacker will harvest as much personal data as possible -- account info for other services, medical info, video and sound recordings -- compromising not only the current victim but also future ones known to the current target.

"So, where's the privacy impact?" you may rightfully ask. When our criminal friend successfully exploited the vulnerability, he committed a security breach. Now, when he then stole medical records, credit card info, account information, and even monitored the house in the last paragraph, he commited a privacy breach; and that is where the money is.

Let's revisit the video I linked earlier. The Merriam-Webster dictionary defines privacy as the quality or state of being apart from company or observation. That attacker can view and listen to everything that family does in their home, so by that definition this family's privacy is compromised.

Let's now look at it from a business standpoint: the fine imposed by the GDPR for exposing the personal data of a person (the GDPR calls that a Natural Person) is up to 20 million Euros or 4% of the annual global revenue, whichever is higher (table stolen from a previous article). I am not saying it will always be that much, but the data protection authority will not be pleased if the breach was due to devices designed so they cannot be updated.
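To put a number on that ceiling, the arithmetic is simple enough to sketch; the revenue figures below are made up for illustration.

```python
# GDPR Article 83(5) ceiling: the *higher* of EUR 20 million and
# 4% of total worldwide annual turnover.
def max_gdpr_fine(annual_global_revenue_eur):
    return max(20_000_000, 0.04 * annual_global_revenue_eur)
```

For a company with a billion euros in revenue, the ceiling is 40 million; below half a billion, the flat 20 million figure applies. Either way it dwarfs the cost of shipping updatable firmware.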

But don't take my word for it. We already mentioned how serious the GDPR is about data breaches. It is not alone; NIST is also concerned about IoT security, and created a program to help governments, industry, academia, and consumers become aware of the issue and minimize its impact.


Wednesday, November 17, 2021

German Federal Office for Information Security leaks private encryption key

BSI office in Bonn. Photo by Oliver Berg, DPA

The Bundesamt für Sicherheit in der Informationstechnik (German Federal Office for Information Security, BSI) is the German federal agency in charge of managing computer and communication security -- critical infrastructure protection, internet security, certification of security products -- for the German government. It also advises manufacturers, distributors, and users on identifying and minimizing data security risks. Like many organizations, when the BSI needs to send or receive an encrypted email, it relies on the OpenPGP standard: it creates a public/private keypair and then shares the public key with anyone who wants to send it a message. That not only assures senders that only the BSI can decrypt their messages but also lets them verify it was indeed the BSI who replied.

Earlier this month, Golem.de reported that the BSI, when asked for its public key by someone who wanted to submit confidential information, accidentally sent the corresponding private key. Eventually this key was revoked and a new keypair was issued, but that happened only after Golem.de contacted the BSI.

Questions

  • How long has the BSI known of that data breach? Per GDPR's Article 33, a data breach must be reported to the competent Data Protection Authority within 72 hours of the Data Controller, in this case the BSI, becoming aware of it.
  • How long did it take for its Data Privacy Officer (DPO) to inform the affected data subjects?
  • How many data subjects were affected by this data breach?
  • Has it immediately created a new keypair and let all those who rely on it know of the incident and receive the new public key? According to the Golem.de article, the BSI kept using the compromised keypair for months after the incident.
  • The BSI stated the key was password protected. One must assume the (accidental) recipient of this email had the password. However, we have no idea of the strength of that password. Because of how the SMTP protocol works, there are many opportunities to obtain a copy of the email containing the encrypted key as an attachment. After that, it becomes the classic password cracking routine, done in the comfort of the attacker's home.
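That last point deserves a back-of-the-envelope estimate. The sketch below assumes a purely random password and a made-up guess rate (key-derivation stretching in OpenPGP slows each guess; dedicated hardware speeds them up, and rigs scale linearly with GPUs), so treat the numbers as orders of magnitude only.

```python
# Toy offline-cracking estimate: once the attacker holds the encrypted key,
# only the password protects it. guesses_per_second is an assumption.
def seconds_to_crack(charset_size, length, guesses_per_second=1e6):
    """Average time to brute-force a random password (half the keyspace)."""
    keyspace = charset_size ** length
    return keyspace / 2 / guesses_per_second
```

At a million guesses per second, a random 8-character lowercase password falls in roughly a day; adding length and a richer character set pushes the same attack into centuries. Whether the BSI's password was on the right side of that curve is exactly what we do not know.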

Notes

While many are looking at this event from a security standpoint, this really should be seen from the privacy standpoint:

  • The impact on the privacy of those who have used this compromised key is yet to be understood. How much personal data has been exchanged using this very key under the assumption it was protected from prying eyes? This encryption key is probably only used to transmit sensitive data; once a message is received, it is downloaded, extracted, decrypted, and then acted upon. So, unless it is deleted, it is stored in encrypted form in both the customer's and the BSI's mail accounts. And it is possible that the mail transfer agents (MTAs) acting as hops between them, storing each message and relaying it to the next hop, also keep a copy for a period of time.
  • While they probably did a Data Protection Impact Assessment (DPIA), one must assume this kind of data breach was not a possibility they considered. However, hindsight is always 20/20: it is very easy, after this breach is exposed, to point fingers at them, but I would like to see any company that accounted for this in its "risks from accidental or unauthorized actions" list before this incident.
  • Privacy by design is a lofty goal, but in reality it can never be fully achieved. The best we can do is take this as a teaching moment. For instance, we expect that the BSI will now implement a system in which private keys can be used but not directly accessed.


Saturday, November 13, 2021

International research and new Privacy laws

A lot of research subscribes to the following format: find an answer first and worry about the consequences later. There are books, papers, and movies dealing with this. In fact, this is a perennial Science Fiction topic. Henry K. Beecher once said that "the problem was not that researchers were malicious or evil; rather the problem was they manifested thoughtlessness or carelessness."

The way research is performed has changed throughout the years. Depending on the chosen topic, American scientists have to comply -- grudgingly at times, because some think it hampers their style -- with policies set forth by their institutions and funding agencies, such as (a small sample, otherwise we will be here all day) the Health Insurance Portability and Accountability Act (HIPAA), the Family Educational Rights and Privacy Act (FERPA), the Gramm-Leach-Bliley Act (GLBA), the Federal Information Security Modernization Act (FISMA), and NIST SP 800-171. Doing research with international partners makes life even more interesting: now we need to know which rules those partners play under. That goes doubly so when dealing not only with security regulations but especially privacy ones, which cover the test subjects, their data, and the researchers themselves. Of these, the most famous is the European General Data Protection Regulation (GDPR), but it is not the only one. In an NSF-funded international experimental testbed project I worked on, I had to deal with the GDPR, the Brazilian General Personal Data Protection Law (LGPD), and the Japanese Act on the Protection of Personal Information (APPI).

One of the most important points in these laws is the scope: they are applicable if you are intentionally trying to provide a business or a service to someone residing (not necessarily a citizen) in Brazil, the European Union, or Japan. In our case, we were attracting researchers -- from principal investigators to grad students -- in those countries; therefore, we checked that box.

Here are the most interesting differences between the three; in blue are the places where one regulation is more restrictive than another. The idea is that if you need to deal with all of them, plan to satisfy all the blues.

Shameless plug

From October 18 to 21 I had the opportunity to participate in the NSF 2021 Cybersecurity Summit, which is run by TrustedCI, both as a presenter and as a workshop co-chair (fancy term for cat herder). The talk I gave was called "GDPR, APPI, and LGPD: don't go sciencing internationally in your experimental testbed without knowing them," which covers some of the topics raised in this article. But don't take my word for it! The videos were made available on Nov 2, so you too can enjoy watching me realize that the 1h talk I prepared had to be presented in less than 30 minutes.

I know you can't see it, but I am sporting an IOActive t-shirt; no, it was not because it was laundry day.


Security and Privacy Certifications and CPEs

This may not sound like a security/privacy-related topic, but there is more to these professions than wearing hoodies with 'l337 H4ck3rz' written on the back.

Earlier this year I earned the ISACA Certified Data Privacy Solutions Engineer (CDPSE) certification. They do issue pretty badges to put on your website to impress your friends and be the life of the party:

The thing is, if you want to keep your hard-earned (and usually not cheap) professional credentials, you need to do some professional development, which is measured in Continuing Professional Education (CPE) credits. Before you put your surprised face on, understand this is not specific to the IT and InfoSec industries. The first time I learned about this was in the medical industry: over there it is called Continuing Medical Education (CME), but the principle is the same.

ISACA is not the only organization requiring CPEs; if you hold an (ISC)2 (I am looking at you, CISSP holders) or CompTIA certification, chances are you too need some. Given the cost of the CISSP, the last thing you want is to lose it because you did not spend the time earning the required amount of CPEs. For the sake of this discussion I will focus on how ISACA handles CPEs. According to the certification requirements, I need

  • 20 CPEs annually
  • 120 CPEs every 3 years

A few things I would like to point out:

  1. The 3-year cycle in which you need to earn the 120 CPEs starts the year after you are certified. So, for me that would be 2022 to 2024.
  2. You need to earn the CPEs for a given year X during year X - 1. In my case, I was certified in 2021, so I need to earn and submit in 2021 the CPEs for the year 2022.
  3. The math is a bit scary: you need a total of 120 CPEs in a 3-year interval, which means an average of 40 CPEs/year. If you have done the bare minimum -- 20 CPEs -- in each of years 1 and 2, in the last year you will need to come up with 80 CPEs. At the time I wrote this, my CPE count looks like this:
    I covered the bare minimum for 2022, but it would be better if I came up with another 9 CPEs.
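The arithmetic from item 3, as a sketch (the 120/20 figures are the CDPSE requirements quoted above):

```python
# Worst-case CPE arithmetic: doing only the annual minimum in years 1 and 2
# leaves the rest of the 3-year total for the final year.
CYCLE_TOTAL = 120
ANNUAL_MIN = 20

def remaining_for_last_year(year1_cpes, year2_cpes):
    return CYCLE_TOTAL - year1_cpes - year2_cpes
```

Two bare-minimum years leave 80 CPEs to scramble for in year 3, which is why front-loading beyond the minimum is the sane strategy.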

So, how do we earn some nice free-range CPEs? ISACA publishes a document on how to earn them. Some you earn by doing things associated with ISACA itself, like going to their conferences or taking their training classes. But you can also earn them through other activities, such as

  • Teaching / Lecturing / Presenting: This is how I got most of my CPEs this year, thanks to the talks and the workshops I gave. You can earn a lot of them.
  • Publication of Articles, Monographs and Books: The last article I wrote was published last year, so it does not count. But maybe you did something, as it earns you a lot of CPEs.
  • Self-study Courses: I took a class -- Certified Cyber Security Architect -- in March of this year, so I could add some CPEs. I am also taking another class right now; I will contact the instructor to see if I can get CPEs through it too.
  • Non-ISACA Professional Education Activities and Meetings: In other words, attending monthly meetings, say the ISSA one, counts as a way to earn a few more CPEs. Not much (I think one per meeting), but every little bit counts.
  • Passing Related Professional Examinations: I did not realize I could also earn them this way, so I have a few more to add. Two CPEs per examination add up.
  • Vendor Sales/Marketing Presentations: Suck it up and watch that infomercial webinar!
There are more categories, but these are the ones I have used.

Bottom Line

There is no excuse for you to lose a certification due to lack of CPEs! If I can do it, so can you!

Tuesday, November 2, 2021

No BSides Zurich this year

This post is a bit of a vent. On Sept 30th (the deadline; talk about waiting for the last moment!) I submitted a talk to this year's BSides Zurich. Yesterday I received an email saying, well, they were cancelling it. It seems that instead of having a virtual or live event, they were planning on publishing a book; since they did not have enough articles, they chose to can it. It seems that I misunderstood their CFP. Bummer.

My gripe: I understand they would not want to have a live event because of COVID, but why not have a virtual one in addition to the book? Oh well. Better luck next year.

Sunday, August 8, 2021

Is your phone plotting against you?

There is a saying that when cats nap, they dream of plots to kill their "owners."

What about phones?

Ok, maybe phones are not planning on killing you, and they should have grown out of their arsonistic phase, but that does not stop them from being up to no good. You see, they are always sending information about your location, what you are saying (Alexa and Siri, I am looking at both of you), what you have searched for or bought, and even pictures and videos (think taking a proctored exam at home, but creepier). The bottom line is smartphones are little snitches: they compromise your privacy by design.

There are things that can be done to minimize the impact of this intentional data exfiltration, both at the behavioral and the technological level; that was the topic of the workshop "Practically Protecting Phone Privacy" we presented earlier today at DEFCON 29's Crypto and Privacy Village. I was going to write about it after we finished, but after 4h of almost nonstop presenting -- we only took 10-minute breaks every hour -- I was braindead. Understand I had been up since 5pm checking and rechecking the slide deck, the timing, and the demo. The latter did not happen because of network connectivity issues (we were presenting remotely). Good thing we planned for that and had a set of screen captures -- phone, Android tools, etc. -- to show the different steps. That does not make me happy, since I bought two phones specifically for that part of the workshop, but, as one of the slides in our presentation says, we need to get out of our comfort zone.

We created a GitHub repo for the docs and links associated with this talk. It started as instructions to set up the phones before the workshop, but we plan on adding more stuff. As a result, it is a work in progress (the nicer term is "living document").

Right now I still feel braindead (as my typos demonstrate), so it will take a while until I add something clever to this thread. Still, I would like to thank the folks at the Crypto and Privacy Village for having us and putting up with our troubles and shenanigans.