Thursday, December 1, 2022

Phishing Is Too Easy - 5: Season to be Scammed Edition

Good news everyone: There are phishers who take pride in their work

We continue our series on phishing emails. I am glad to say a phisher heard my plea and stepped up to the challenge before Black Friday ended!

We have here an email that claims to come from American Express, stating there is a problem with my card and that I need to click on the link to find out more. Let's ignore the question of whether or not I actually have an American Express card, or this article would have ended right here. The timing was good: lots of people are going crazy purchasing millions of trinkets online, and then they receive an email saying their card has a problem. Did they go over the limit? Was its information stolen?

Good show old boy!

If I had such a card, what should I do next? The answer depends on how much effort we want to put into this:

For the impatient

You can't see in the picture but the From: field looks like this:

From: American Express MyCredit Guide <transunion@em-tuci.transunion.com>
Why would TransUnion, a US consumer credit reporting company, be sending emails for American Express? This should be enough for us to immediately drop this email and move on.
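That brand-versus-domain check is easy to script if you handle a lot of suspect mail. A minimal sketch in standard-library Python (the `sender_mismatch` helper and its subdomain heuristic are my own, not anything your mail provider ships):

```python
from email.utils import parseaddr

def sender_mismatch(from_header: str, expected_domain: str) -> bool:
    """Flag a From: header whose address domain is not the claimed brand's domain."""
    _display_name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    # Accept the exact domain or any subdomain of it; anything else is suspect.
    return not (domain == expected_domain or domain.endswith("." + expected_domain))

hdr = "American Express MyCredit Guide <transunion@em-tuci.transunion.com>"
print(sender_mismatch(hdr, "americanexpress.com"))  # True: the domain is TransUnion's
```

It is a heuristic, not proof, but it catches exactly the mismatch we just spotted by eye.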

For the willing to spend a bit more time

First of all, when in doubt about whether a suspicious email is legit, find the official contact number/email of the company in question and reach out to them. In this case, I did call them. American Express said that if they send an email, it will contain

  • Your name.
  • The last 4 digits of your card.
This email only contains the first name, so per American Express, it is at best suspicious. They did ask me to forward it to spoof@americanexpress.com, which I did.

For those with time to deep dive and ponder on the implications

Some of you may remember that TransUnion suffered a data breach recently. What if this data is being used to create targeted phishing emails? And what if the criminals are able to either impersonate TransUnion email addresses or still have access to their servers, so they can send emails through them? To answer that we need to look at the email header:

ARC-Authentication-Results: i=1; mx.google.com;
       dkim=pass header.i=@em-tuci.transunion.com header.s=scph0919 header.b="ou/BSRUG";
       spf=pass (google.com: domain of msprvs1=19329inrhx0ms=bounces-266758@bounce.em-tuci.transunion.com 
designates 147.253.210.36 as permitted sender) smtp.mailfrom="msprvs1=19329inrhX0MS=bounces-266758@bounce.em-tuci.transunion.com";
       dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=em-tuci.transunion.com
Return-Path: <msprvs1=19329inrhX0MS=bounces-266758@bounce.em-tuci.transunion.com>
Received: from mta-210-36.sparkpostmail.com (mta-210-36.sparkpostmail.com. [147.253.210.36])
        by mx.google.com with ESMTPS id 62-20020a630141000000b004778207ac4dsi7561754pgb.396.2022.11.26.12.06.50
        for Clueless Sheep
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Sat, 26 Nov 2022 12:06:50 -0800 (PST)
Received-SPF: pass (google.com: domain of msprvs1=19329inrhx0ms=bounces-266758@bounce.em-tuci.transunion.com designates 147.253.210.36 as permitted sender) client-ip=147.253.210.36;
Authentication-Results: mx.google.com;
       dkim=pass header.i=@em-tuci.transunion.com header.s=scph0919 header.b="ou/BSRUG";
       spf=pass (google.com: domain of msprvs1=19329inrhx0ms=bounces-266758@bounce.em-tuci.transunion.com designates 147.253.210.36 as permitted sender) smtp.mailfrom="msprvs1=19329inrhX0MS=bounces-266758@bounce.em-tuci.transunion.com";
       dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=em-tuci.transunion.com
X-MSFBL: fXbaPXh+ne/E8ZM3Y6OyFt9TLlavvIujqeENrG6IrbY=|eyJyIjoicmF1YnZvZ2V sQGdtYWlsLmNvbSIsIm1lc3NhZ2VfaWQiOiI2MzgxZGE3MTgyNjM0YmI3ZmY3ZiI sInN1YmFjY291bnRfaWQiOiIwIiwiY3VzdG9tZXJfaWQiOiIyNjY3NTgiLCJ0ZW5 hbnRfaWQiOiJzcGMifQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=em-tuci.transunion.com; s=scph0919; t=1669493210; i=@em-tuci.transunion.com; bh=g54YI3MysS1MVd8EV8xjgfkc97E2Z2epcQAJzoXhCkw=; h=To:Message-ID:Date:Content-Type:Subject:From:List-Unsubscribe:
	 From:To:Cc:Subject; b=ou/BSRUG3cUbJKbYUZ1LVr3J0Z3xP7nFJPUjPutaxPAlyQU2bd2vFDbfNHxdU0LbB
	 HxEwc9YzSTrKnrbFfjcLwSxfZk48k6br1t4DI9fsDgWAimdohpxIGKK6ukD2NE1q/L
	 SESZw9WVeXNvoEVjsYIPh67accGucYF32laIH8ICsqeopmxSoaxsrjHBa/MBjqYZAz
	 8r+jHG+Ilr/QzlJ0Lq5rGA/hJGnHR3lPbkuVRFBsrnV9841IbsIpQDVOUdW172sQbQ
	 zZ+JErYKYYvpwmjqd6A4XMPu3TG9QcymMjHHYqcXRmtL4OdKzB8GKtksDI4uLakZkw
	 8HR0NVWvPUjzQ==

At first glance it seems the email came straight from TransUnion, specifically from the host called em-tuci.transunion.com. But then we find the most interesting entry in the above header excerpt (which I highlighted):

Received: from mta-210-36.sparkpostmail.com (mta-210-36.sparkpostmail.com. [147.253.210.36])

It seems this email came from mta-210-36.sparkpostmail.com, whose IP (147.253.210.36) is designated by bounce.em-tuci.transunion.com's SPF record as a permitted sender. From there it ends up in the Clueless's Gmail account, riding on TransUnion's servers' relationship with Google's.
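Walking the Received chain by hand gets old fast; Python's standard email module can do it for you. A sketch, using a shortened stand-in for the headers above (the `relay_hosts` helper is mine, not a stdlib function):

```python
from email import message_from_string

def relay_hosts(raw_message: str) -> list[str]:
    """Return the 'from <host>' part of each Received header, newest hop first."""
    msg = message_from_string(raw_message)
    hops = []
    for header in msg.get_all("Received", []):
        # Each relay prepends its own Received header; the part before
        # " by " names the host that handed the message over.
        first_part = header.split(" by ")[0]
        if first_part.startswith("from "):
            hops.append(first_part[len("from "):].split()[0])
    return hops

raw = (
    "Received: from mta-210-36.sparkpostmail.com "
    "(mta-210-36.sparkpostmail.com. [147.253.210.36]) by mx.google.com "
    "with ESMTPS id 62-...\n"
    "From: American Express MyCredit Guide <transunion@em-tuci.transunion.com>\n"
    "Subject: example\n\nbody\n"
)
print(relay_hosts(raw))  # ['mta-210-36.sparkpostmail.com']
```

The hand-off host pops right out, and it is not a TransUnion box.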

But, who is SparkPost?

Short version: it is a mass emailing service. They seem to be well-known enough for Microsoft to have instructions on how to access them using a connector from within Azure. Does that mean they were compromised, or did the attackers obtain TransUnion's credentials to use this service?
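Incidentally, that X-MSFBL header earlier deserves a side note: blobs like it are typically just base64-encoded JSON that mass mailers use to tie bounces back to a recipient and a customer account. Decoding one takes two lines; the blob below is a made-up example shaped like the real thing (note the customer_id matching the "bounces-266758" in the bounce address):

```python
import base64
import json

# Hypothetical X-MSFBL-style blob: base64-encoded JSON carrying recipient and
# customer identifiers, similar in shape to what mass mailers embed.
blob = base64.b64encode(json.dumps(
    {"r": "victim@example.com", "customer_id": "266758"}
).encode()).decode()

decoded = json.loads(base64.b64decode(blob))
print(decoded)  # {'r': 'victim@example.com', 'customer_id': '266758'}
```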

Some kind of Conclusion

Even though this phishing email was much better thought out than the insult mentioned in the last entry of the series, if you stop and examine it -- without first clicking on its links -- you can still identify it as such rather quickly, without needing to tear through its raw contents. Don't get me wrong: doing that is fun, but if you are trying to go through your daily routine and see this email, in less than 5 minutes you can make a call on whether it is legit or suspicious.

OK, a bit more if you have to wait on the phone listening to elevator music to talk to a company and verify whether they sent said email.

Friday, November 25, 2022

Phishing Is Too Easy - 4: Season to be Scammed Edition

It is Black Friday! And We are in the Season to be Scammed! A few moments ago (I am typing this as fast as I can) I received the following phishing email:

Phishing email pretending to be dicks sporting goods. Description of what to look out for is written below

Its call to action is the claim that Dick's (insert jokes here) Sporting Goods decided out of the blue to give me a Yeti cooler if I just click on the "Confirm Now!" link. I usually would spend the time (see the last phishing article I wrote) and look at the email's source to see if it has any interesting telltale signs of phishing. But this phisher is so lazy he does not deserve a deep dive on the email. So, let me count the ways this is a scam:

  1. Why would Dick's want to send me a cooler? They do have a store here but I make a point of not going there. So they do not know I exist... unless they bought my name off a list. If that is the case, I feel I should ignore them even more.
  2. Why is the name in the return address "Dicks SportinGoods" (blue line) instead of "Dicks Sporting Goods"?
  3. Why is the domain of the return address celimopafeseda (red line)? I could say that I could not find that domain registered anywhere I bothered to look (spent some extra time I really did not need to for this article), but let's be honest: this has nothing to do with dicks.
  4. If I had spent the time and looked at the email's header, I would have seen it was sent through outlook.com. But I will not. I am not saying that being mailed through Outlook is a telltale sign of a phishing email, but I do not like how the path it took while inside their network is obscured. Still, a short post this is.

As a result, I think we can safely label this as phishing and move on.

I am disappointed by the lack of pride this phisher has. Do you think some other phisher will redeem my faith in them, or is this the best I can expect this Friday?

Saturday, November 5, 2022

On the rise of work-at-home employee tracking

When COVID became a global pandemic, many companies that had previously frowned upon teleworking asked their employees to work from home whenever possible. That raised a concern: how would managers verify their underlings were spending their work hours doing the tasks assigned to them? There are many ways to track employees' time, but the one that has increasingly become the most popular is employee monitoring software. A survey of 1,250 employers by Digital.com found that 6 out of 10 require monitoring software for their remote workers.

Why Are Employees Being Tracked?

Employers want to manage their workforce and understand how employees are spending their time. They see employees taking a break from their work tasks to use social media or deal with their family as a potential drain on their productivity, or time theft. According to Digital.com, more than half of the monitored employees spend more than 3 hours every day on non-work activities on company time.

If a business offers consulting services, it has a vested interest in logging its workers' time with a customer so it can properly bill said customer. Also, the FLSA requires employers to keep accurate records for each hourly employee, and retain them for 3 years.

What is Being Tracked?

Even though this kind of software has been called an extension of traditional time-tracking systems, what it records is more expansive than simple time-tracking:

  • Random screenshots
  • Location (using GPS)
  • Websites visited
  • Emails sent and received
  • Any sounds in the immediate area, using the device's microphone
  • The camera
  • Anything that has been typed (keylogging) and any mouse movements (mouse logging).

Privacy Concerns

"Most employees are OK with (installing employee tracking software). As long as you tell the employee you're implementing it, it's entirely legal" according to Enzo Logozzo, director of sales and marketing for 365 IT Solutions, Toronto. That is not necessarily the case.

  • Per GDPR, consent here is not freely given, as there is the risk that refusing to consent to having the software installed may result in the employee being fired. Canadian news media reported recently about a school janitor in Alberta, Canada, who refused last fall to download a mobile app that would help her employer confirm workers were on the job where and when scheduled. She was fired weeks later.
  • While the Canadian privacy law, PIPEDA, states that collection and disclosure of personal data by a company from its employees without their consent is allowed in certain situations, the onus is on the company to justify that the data collection was done for a specific business purpose.
  • Traditionally, American privacy laws such as the CCPA are much more lenient towards the business. However, employee tracking software can place companies at odds with other federal regulations. We must expect some of those working from home will on occasion contact their children's teacher or doctor during working hours. Recording these conversations conflicts with HIPAA and FERPA.
  • Using a computer built-in microphone may be subject to state wiretap and eavesdropping laws.

Other Issues

In addition to legal issues, aggressive employee monitoring negatively affects business:

  • Employees lose trust in the company. 14% of companies have not informed employees they deployed this software.
  • Once workers find out employee tracking is in use while they work at home, their stress levels increase. According to a study run by the insurance company Colonial Life, 26% of employees said stress was making them less productive and 15% reported feeling less engaged with their job. That is no surprise, as 88% of employers terminated workers after implementing monitoring software.
  • Devices running employee surveillance software are a juicy target for malicious individuals. As these individuals want to collect passwords and other personal information, attacking a computer with employee tracking software saves them time and effort.

Living with Employee Surveillance Software

Protecting your privacy as an employee

  • Ensure the company issues you their computer, to minimize the chances of having personal and work data in the same system.
  • Minimize using the work computer for personal applications. Ideally you should avoid it entirely, but if that is not possible, this is the next best thing. It may help to think of the work computer as something that may be taken away at any time for any reason; it is theirs, after all.
  • Ask if they will issue you a work phone. If they do not, and also demand you install their app on your personal phone, here are apps to help with that. In fact, that is one of the topics we covered in our DEFCON workshop and something we recommend when dealing with IoT devices. Otherwise, get yourself a dumb phone and show that is the phone you have.
  • Put the work computer/device on a separate network from your home one. This may require technical help; VLANs are a great start but the sky is the limit.
  • Create a private location for your workspace. Ideally one with the door in front of you (behind the computer). Getting a green screen is also recommended.
  • Assume the work computer's microphone and camera are always on, so once your work hours are done, place it in a box with sound-absorbing foam.
  • Some companies may offer you an exercise tracker device such as a Fitbit. Politely refuse it, as it records your biometric data, which may run afoul of the GDPR if you are covered by it.

Protecting your company's privacy

  • Have a clear policy outlining the justification for surveillance.
  • Ensure employees understand why they are being tracked.
  • Obtain consent from your employees if you are installing employee surveillance programs on their computers and phones. Note that if it is a requirement to work, consent is not freely given.
  • Ensure tracking stops after working hours.
  • Hire a professional such as Privacy Test Driver to ensure you comply with relevant privacy laws and provide an environment that fosters productivity while protecting both your company and its employees.

Monday, October 31, 2022

Unintentionally helping others steal your biometric data

The pieces of the puzzle

  1. Let's start by stating the obvious: people upload a lot of videos and images to social media showing their family vacations, new dance moves, and, yes, twerking. These files are publicly available and can be easily gathered. Do you remember the old warning about being very careful about what you share on the internet? The security and privacy concerns were about showing where you live, who your family members are, and when you will be out of your house. Thanks to advancements in AI, we can add a new reason to slow down posting so much about ourselves.
  2. Biometric-based authentication is the process of authenticating people based on something you are, i.e. a unique physical feature -- fingerprint, iris, or retina to name a few -- instead of something you know (a password) or have (a token). Some of its applications are multifactor authentication and face recognition, which are used to unlock smartphones and identify people in a crowd.
  3. Deepfake is an evolution of the tradition of inserting (or removing) people in pictures and videos using cropping and blue screens. Benign results have been seen in movies like Zelig and Forrest Gump; George Orwell's 1984 talks about using that for malign purposes, namely rewriting history. The difference is that, thanks to AI, deepfaking is automated to the point it runs in real time. The classic example of the potential of this technology is a Tom Cruise deepfake video created by Belgian visual effects artist Chris Ume:

    It did not take long for malicious individuals to apply deepfakes to create celebrity porn videos, fake news, hoaxes, and financial fraud. What about average people? They are not famous politicians, singers, or athletes; can they shrug it off, saying "this does not affect me; I am too small of a target for them to have an interest in me" like they have done many times before, or should they be worried? The reality is that

    • Attackers are always looking for opportunities, and will strike at the low hanging fruit.
    • The cost of the resources required to deepfake has dropped a lot in the last few years.

Let's have some fun

How can we combine that? In 2007 (yes, time flies), Microsoft identified the following as the most popular types of biometric authentication devices of the time:

  • Fingerprint scanners
  • Facial pattern recognition devices
  • Hand geometry recognition devices
  • Iris scan identification devices
  • Retinal scan identification devices
Nowadays we can do all of that using just a camera. Let's consider a few applications that are possible today:
  • Videos and pictures collected from your social media provide enough info about your face to unlock your phone.
  • Inserting you into the CCTV records of a riot is just a matter of being able to access said records and change them. The only limiting factor here is bypassing tampering detection, which is not as common as you are led to believe. Yes, we are not at the Ghost In The Shell level, where video streams were being tampered with in real time at the camera level, but there is enough knowledge to do some damage right now.
  • Back to those high quality videos found on social media: they are (not may be) good enough to collect your fingerprints or ear shape. The latter has been successfully used to identify people in riots while wearing masks.
  • Saving the best for last, imagine someone using a deepfake, after collecting your videos for images and voice samples, to have a webconference with your children's school or doctor. I will leave it to your imagination to ponder the consequences of that. Before you say anything, the Tom Cruise video I mentioned earlier is now old from Moore's Law's point of view.
We could go over an example of how to do that, but that is not the point of this article. If you thought your identity and, as a result, your privacy was at risk before, I think we have reached a whole new level.

What can be done to minimize exposing biometric data?

Think before posting! This rule has not changed. There are some who argue that millennials and the Gen Z crowd are the biggest offenders, but this is just a matter of training. If you have to post, be mindful of what is being exposed. Or cut down the quality of the pictures a bit so the bad guys do not have a nice clean image to start with. As for the images and videos you already posted: once they are out on the internet, there is no taking them back.

Make protecting your privacy a priority in your life. If people are going to steal your data, make them work for it.

Further reading

Trend Micro published a great paper on the risks of exposed biometric data.

Friday, September 30, 2022

Optus and how to DevOps badly in a few easy steps

Full disclosure: I put Optus in the title because those LinkedIn articles advocate the need for clickbait to attract viewers. Problem is, I am actually going to talk about this company. But this article is really about code development gone bad; Optus just happens to be the perfect example of, in the words of Jeremy Clarkson, what could possibly go wrong.

We are Agile!

In earlier, simpler times, the recommended software development lifecycle model (SDLC for you acronym addicts) was the Waterfall Model. There are many places which describe it better than I ever could, so suffice to say that it is linear: it starts with the idea, then goes to the design, and then a few steps including coding and testing until it is deployed and goes into maintenance mode. In other words, you start with an idea and end up with a product.

Making the code secure, or implementing (buzzword time!) privacy by design, was fairly easy if the security and privacy team was involved from the get-go, as that was just another well-defined step.

But what if the product needs to be changed? As in not just a patch, but a feature request or something that requires a new library or a user interface redesign. You need to go back to the start.

You could say it is a bit rigid, and many people would agree with you. The next step was modifying the model so you could hop back a step or two, and that started to get messy. The bottom line is it does not take changes well. In many fields that is completely fine. However, for code which is always changing and put into production as soon as changes are done, like on a website, it can slow down delivering a working product. In some industries, whoever puts it out first, even if it is not perfect, wins. So we need something better.

We evolved into the Agile model, which, as a friend taught me, is also called the "Never Finished Model." What the joke implies is that this model is designed to handle changes quickly and deliver a working product even if it is not perfect. The reason is that you can improve on it later once you have some feedback from customers.

The following picture shows a typical Continuous Integration/Continuous Deployment (CI/CD) pipeline, which is a trademark of using the Agile model in code development. How do we account for security and privacy here? DevSecOps places security controls in the CI/CD process of DevOps. Note the two red boxes: they are the points where we add security testing to the cycle, one for Static Application Security Testing (SAST) and one for Dynamic Application Security Testing (DAST). The red arrows indicate that any funny business they find is then sent to something which logs and reports it by creating tickets, sending emails, or something else. This is, of course, ideally supposed to be done in conjunction with training developers in secure coding, (Buzzword Alert!) privacy by design, and whatnot.
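To make the SAST box less abstract, here is a toy version of what such a scanner does: pattern rules run over source text, with findings turned into reportable records. The two rules are mine, for illustration only; real SAST tools ship hundreds of rules and parse the code properly instead of regexing it.

```python
import re

# Toy static-analysis pass: flag unencrypted URLs and hardcoded credentials.
RULES = {
    "unencrypted-url": re.compile(r"http://[^\s\"']+"),
    "hardcoded-password": re.compile(r"password\s*=\s*[\"'][^\"']+[\"']", re.I),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line number, rule name) for every rule hit in the source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

code = 'API = "http://api.example.com"\npassword = "hunter2"\n'
print(scan(code))  # [(1, 'unencrypted-url'), (2, 'hardcoded-password')]
```

In a real pipeline, each finding would become a ticket or a failed build instead of a print.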

In reality, some companies/developers which should know better decide that slows them down and hampers their style. In other words, they need to be putting new code out with new features, and privacy and security are not features but...

Enter the Optus

Singtel Optus Pty Limited, a.k.a. Optus, is the second largest wireless carrier in Australia. In the last week of September 2022, Optus reported that on 22 September 2022 it was the victim of a very sophisticated cyberattack by members of a criminal or state-sponsored organization. This attack resulted in a major personal data breach, where the names, dates of birth, phone numbers, email addresses, street addresses, drivers licences, and passport numbers of both current and former customers were leaked. Optus chief executive Kelly Bayer Rosmarin said that they "are not aware of customers having suffered any harm."

Insert here the videos of a guy in a hoodie in a dark room and computer screens showing random Linux output.

What does this very sophisticated cyberattack have to do with coding?

Glad you asked.

You see, later on it was found Optus had an unauthenticated API, http://api.www.optus.com.au, that released all of the personal data it stored, not only of current but also of previous customers (there is the case of someone who had not been an Optus customer for the last 14 years and not only received an email from them about the breach but also started to be flooded with spam). Unencrypted.

Optus detected the event when the attacker started hitting the API hard.

So, the questions are

  1. Why did it have an exposed API without some kind of authentication? Perhaps that was originally done to make testing of the API more convenient for developers. I myself have seen that in the wild. When the developers/DevOps from the environment in question were asked to at least limit access to a network only reachable from behind their firewall, they shrugged it off, saying the VPN (which is not a solution but sure is an improvement) was too cumbersome to use from their personal laptops.
  2. Why was the connection to said exposed API unencrypted? Do you remember when we said that DevSecOps places security controls in the CI/CD process? That probably would have caught this: the SAST would have noticed the unencrypted connections in the code; the ones I have used before would bark at unencrypted traffic (and hardcoded passwords, which was not the case here since no passwords were used). In the real world that does not happen as much as people believe. In fact, it is too common to hear that DevSecOps slows down DevOps' work.
  3. Why was the personal data stored unencrypted? Once again, convenience. Maybe when encryption was recommended, it was turned down because developers argued it would slow the response time of the system. Once again, SAST would have caught that.
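On question 1, "some kind of authentication" does not have to be heavyweight. Here is a minimal sketch of a signed API token in standard-library Python; the `issue_token`/`verify_token` helpers are purely illustrative, not Optus's (nonexistent) design:

```python
import hashlib
import hmac
import secrets

# Server-side secret; in production this lives in a secrets manager, not in code.
SERVER_SECRET = secrets.token_bytes(32)

def issue_token(customer_id: str) -> str:
    """Hand an authenticated client a token binding it to its customer id."""
    sig = hmac.new(SERVER_SECRET, customer_id.encode(), hashlib.sha256).hexdigest()
    return f"{customer_id}.{sig}"

def verify_token(token: str) -> bool:
    """Reject any request whose token was not minted with our secret."""
    customer_id, _, sig = token.rpartition(".")
    if not customer_id:
        return False
    expected = hmac.new(SERVER_SECRET, customer_id.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sig, expected)

token = issue_token("266758")
print(verify_token(token), verify_token("266758.forged"))  # True False
```

Real deployments would add expiry and scoping on top, but even this trivial gate would have stopped an anonymous scraper cold.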

Clearly there were poor security practices at play here. Perhaps DevOps security and privacy training never happened, or SAST/DAST was never implemented in the SDLC chain. Usually that happens because they are considered cost centers in businesses that, as mentioned earlier, slow down progress. Remember we mentioned that automated security testing will create tickets developers will have to deal with in addition to the other tickets already on their plates.

Post Mortem

Don't be that guy!

  • Privacy by design would not have allowed this kind of code to even make it into the repo.
  • Encrypt the traffic to the API, period. Ideally that should be done at the API level. I know some people will put an Nginx proxy in front of the unencrypted API (using Kubernetes or Docker), and I cringe at that: it is an improvement on the Optus setup, but not by much.
  • Encrypt your data at rest. Yes, that is especially important for personal data, but it is a good habit regardless.
  • All connections to an API should be authenticated by default. If there is a query, say list status, you want to make available to unauthenticated users, spend some serious time thinking about the consequences.
  • Ensure your CI/CD process has proper security controls. If DevOps is being swamped with the tickets generated by these controls, this may mean they need more security and privacy training, or the controls need better tuning, or the external code/libraries you rely on are not as well written as they should be. That is how BadUSB and many of the IoT issues came into being.

Monday, September 26, 2022

Phishing Is Too Easy - 3

Last week I received another traditional phishing email; apologies for the lack of images, because my email account is set up not to load externally attached pictures. Here it is, with my address removed:

Phishing email disguised as an invoice with an attached PDF pretending to come from Norton

Yes, this is pretty much a variation of the last one I commented on months ago, namely:

  • It is an invoice for some product, in this case it implies to be some kind of Norton product.
  • It creates a veil of credibility by alluding (blue box), in a rather half-assed way, to being related to a real company. Note it claims to be "Norton Support LLC," which I have no idea who it may be. Since the average person has probably heard of Norton, who sells an antivirus and other security products, it is easy for said person to associate the two.
  • Still on the credibility front, the sender address is supposedly from QuickBooks (I did not bother to check the header). Yes, a large company like Norton would not be using QuickBooks to send its bills. However, if you have to deal with purchasing, you have probably seen invoices from smaller businesses which use the online QuickBooks site; when they send their invoices, those invoices will have "<quickbooks@notification.intuit.com>" as the email. But we hope they will look more like "Something Of Doom LLC <quickbooks@notification.intuit.com>" instead of "Intuit E-Commerce Service <quickbooks@notification.intuit.com>"; I think the latter is not the default value, but it sounds credible enough.
  • To create the urgency, the invoice is for $800. That will make someone's heart beat a bit faster and immediately want to open the attached PDF file (red box) to find out what this invoice is all about. This is a bit lazier than the last phishing email we posted about, as some mail services will disable attachments with macros in the hope of blocking malicious payloads. However, most mail services do not do that; mine could not be bothered, and told me that if I want to see it, and be properly infected, I need to have Adobe Acrobat Reader (green box). Since my mail service does not automagically open anything, I have some extra time to read the email and decide what I want to do next.
  • It provides a number which may be tied to the phisher (VoIP?) so that if the frantic recipient of the email calls, the phisher (we called him Peggy in the last phishing post) can social engineer his way into the victim's computer.
  • The return address is a typical quasi-randomly created Gmail one; they could not be bothered with making it sound like it came from a billing department as it claims to be.

How effective is it? I think it depends on where people focus. The phishers hope their marks will see the value of the invoice -- $800 -- and immediately open the PDF to find out what is going on. The best thing to do here is stop -- but not stop/drop/roll, as you are not on fire -- whenever you see something suspicious, especially when it claims to be urgent. Then ask yourself if you expected an invoice from Norton. Then look at the email addresses and see if they are not overly suspicious.

Remember: phishers are lazy, and they hope you are equally lazy!

Saturday, September 17, 2022

There and back again: DEFCON 30

The second slide in the workshop reminds the audience we had put instructions on GitHub for what to do before attending the event.

No, I did not postpone posting about my trip to DEFCON30 until now because I did not have anything to post this month. The truth is I was slacking. There, I said it.

This will be a bit of a post mortem of our workshop. Will this post have any useful info? Don't hold your breath; what I can promise is there will be many opportunities to laugh at our expense.

The Plan

For those who read the announcement for our workshop at the Crypto and Privacy Village, you know that there are two authors -- Matt and yours truly -- who put together the mess without killing each other; the fact we had half a continent between us probably helped.

Originally, the plan was to start with an explanation of why this phone privacy thing was so important and then show how to do it. Ideally people would have read the announcement, followed our instructions, and shown up with a phone ready to be configured. While one of us was on the podium, the other would be helping the audience.

After we had the entire workshop done and did a few dry runs, we started thinking: how many people would bring a phone that meets the requirements? Probably not many -- not many people have spare phones that can take CalyxOS or LineageOS in their kitchen drawer -- and we would not be able to bring enough loaners, as all the resources in the workshop were coming out of our own pockets. We could just shrug it off and tell people, "Hey, you did not bring a phone, so we will bore you with screenshots."

Thing is, we had taken a lot of screenshots of everything we would be showing on the phone, in case we were not able to share the phone screen or point a camera at it. So this was an option, but we felt it would detract from the workshop; instead of being something interactive it would be no better than watching a video.

We needed a plan B.

What if we provided an emulator? It would not do everything a real phone can, but it would allow the audience to follow along on their laptops. Since we were going to focus on CalyxOS (we had only an hour to run the entire workshop; compromises had to be made), we decided to create that image, make it available somewhere, and then update the wiki with instructions on how to use it. We also asked the Crypto and Privacy Village (CPV) people to add a single line to the workshop announcement, indicated with a green line in the picture below, to tell people they should install Android Studio on their laptops.

Workshop announcement, with the line 'Alternatively, a laptop with Android Studio installed' added to it, indicating you may want to install it if you do not have a phone to use in the hands-on bit

The plan was to have everything finished two weeks before the event and then take the last week to practice, and ensure we had a reliable way to hand out the emulator images.

Things did not happen according to the plan.

Matt was able to go to DEFCON from the beginning of the event; I do not know if he also managed to stop by BSidesLV. I, on the other hand, was more time constrained: I flew in on the first flight on Friday and was going to return on Saturday after the workshop. In any case, we were going to try to attend as many events and talks as possible, and meet up with people we had not seen in ages. I also planned on volunteering at the CPV.

What really happened?

  1. Building the CalyxOS phone image was not as smooth as we hoped. In plain English, I could not make it work. I had no issues building LineageOS images in my docker build environment -- if someone reminds me, I can post instructions on how to do that later -- but CalyxOS fought me all the way. Fortunately we were working in parallel, and Matt was able to make it work.

    I will let Matt post on his blog how to create the CalyxOS image with all the apps already installed, as he is the one who made it work. In fact, it worked so well that he used it instead of a real phone during the hands-on part of the workshop.

  2. We spent too much time trying to come up with a clever way to deploy the phone image. After days of frustration we settled on a simpler approach, wrote docs that worked whether you had a Linux, Mac, or Windows laptop, and bundled them with the image.
  3. The emulator stopped working. I do not know why, but it went on strike. More frustration ensued. Was it the emulator itself or the image? Once again, Matt rose to the occasion and made it work.
  4. We also found out it would take too long to download the image we had built over the DEFCON public network. Fortunately we had a bunch of USB drives, so we put the image and instructions on each one, formatted with a Windows file system so laptops running any of the three OSes could mount them.
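
As an aside, one thing instructions for this kind of hand-carried distribution can include is a checksum check, so attendees know the copy off the USB drive succeeded. This is a sketch only, not what our actual docs said; the file names are illustrative assumptions, and macOS users would substitute `shasum -a 256` for `sha256sum`:

```shell
# Sketch only: file names are illustrative assumptions, not the actual
# workshop artifacts. On macOS, use 'shasum -a 256' instead of 'sha256sum'.

# On the machine preparing the USB drives: record the image checksum.
printf 'stand-in for the emulator image' > calyx-image.zip
sha256sum calyx-image.zip | awk '{print $1}' > calyx-image.zip.sha256

# On the attendee's laptop, after copying both files off the drive:
computed=$(sha256sum calyx-image.zip | awk '{print $1}')
expected=$(cat calyx-image.zip.sha256)
if [ "$computed" = "$expected" ]; then
    echo "image OK"
else
    echo "copy is corrupted; try copying it off the drive again" >&2
fi
```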

There are probably more things that went wrong, but I cannot think of them right now. The bottom line is that we spent most of that week working on these bugs. And we made it work.

Showtime

The CPV people did a great job. Everything was working smoothly on their side. I did most of the overview and then Matt took over for the technical part:

Matt Nash presenting the hands-on part of the workshop. Audience is spaced out following the social distancing requirements

You will note in the picture above that the audience (the picture was taken from the back out of respect) has set some chairs apart for social distancing's sake. I then came back to the podium sporting one of my favourite shirts (bonus points if you recognize it) for the final comments, and we then took questions. After it ended, Matt was surrounded at the podium by members of the audience for a long while, until the DEFCON Goons kicked us out.

Mauricio Tavares on the podium spreading lies and misinformation while sporting the classic Oregon Trail shirt.

Thank you for all the fish

  • Avi Zajac and the rest of the Crypto and Privacy Village crew for not only having us there but making the event possible. And the badge. And the shirt (I am afraid of wearing it out because it is nice). And keeping the Goons at bay. And the sticker!
  • The NCC Group for mentioning us in its August announcement.
  • DEFCON for, well, being DEFCON. I do wish I had more time to see it all this year instead of being in a hotel room trying to get everything working. But it was all worth it in the end.
  • CalyxOS for trying to make a more secure and private Android distro easier to install. There is more around this line item, but I am getting ahead of myself.