Saturday, December 31, 2022

TransUnion data breaches, GDPR, CCPA, BIPA, and Ramirez

Three round icons representing CCPA, GDPR, and BIPA, each tipped on its side, implying things are not as they should be

TransUnion LLC, one of the three major credit reporting companies in the United States, also has branches on every continent but Antarctica. It is said that the personal information and credit histories of some 200 million U.S. consumers alone are stored on its servers; I have not been able to find verifiable information regarding consumers located outside the US.

Some of you may remember that TransUnion recently suffered a data breach (I will be using the GDPR definition of personal data breach which is "a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed").

"How recently," comes the voice from the back of the room, "which one are you talking about?"

Good question; it is hard to keep track of them. Let's go over a few of them and later see what we can learn from them.

The Events

  1. In 2005 -- a long time ago (in dog years) -- TransUnion lost a laptop containing personal data from more than 3,600 US consumers. The Chicago-based company offered up to one year of free credit reports to the affected customers. At the time -- one must remember these were pre-GDPR/CCPA/BIPA times -- some of the main questions raised were
    • Were credentials to access the TransUnion databases and other systems also exposed?
    • TransUnion chose to report the data breach. At the time there was no real requirement to do so: California Senate Bill 1386 of 2002, one of the first security breach notification laws, specified the criteria corporations should use to determine whether they were required to report an incident: only if they answered "yes" to every single one of the following questions must they report the breach:
      1. Does their data include "personal information" as defined by the statute?
      2. Does that "personal information" relate to a California resident?
      3. Was the "personal information" unencrypted?
      4. Was there a "breach of the security" of the data as defined by the statute?
      5. Was the "personal information" acquired, or is reasonably believed to have been acquired, by an unauthorized person?
      The late Alan Paller, director of the SANS Institute at the time, warned this test provided a legal loophole for companies not to report data breaches, since all 5 conditions had to be satisfied before a report was required.
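
The loophole Paller described is easy to see if we write the test out as code; a hypothetical sketch (the function name and parameters are mine):

```python
# Hypothetical sketch of the SB 1386 test described above: notification is
# required only when ALL five answers are "yes" -- a single "no" (e.g. the
# data was encrypted) is enough to avoid reporting, which is the loophole
# Alan Paller warned about.

def must_report_sb1386(has_personal_info: bool,
                       relates_to_ca_resident: bool,
                       was_unencrypted: bool,
                       security_was_breached: bool,
                       acquired_by_unauthorized: bool) -> bool:
    """Return True only if every condition in the statute is met."""
    return all([has_personal_info, relates_to_ca_resident, was_unencrypted,
                security_was_breached, acquired_by_unauthorized])

# Encrypting the data alone flips the answer to "no report required":
print(must_report_sb1386(True, True, True, True, True))   # True
print(must_report_sb1386(True, True, False, True, True))  # False
```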

    So TransUnion is very popular this month, this time due to a larger issue than possibly being used to send phishing emails.

  2. During the summer of 2019, the personal data of some 37,000 Canadians held on TransUnion servers was compromised. Note that the Canadian Digital Privacy Act, which amended PIPEDA and provided mandatory breach notification requirements, had become law 4 years earlier. GDPR and CCPA had also already become law.
  3. On March 12, 2022 ITWeb broke the story of a data breach, which caused TransUnion to admit that attackers had indeed stolen 28 million credit records. At first it was believed that more than 3 million South Africans, and businesses such as Mazda, Westbank, and Gumtree, were affected. The Brazilian group claiming responsibility for the act, "N4ughtysecTU," states it gained access through a poorly secured (password: "Password") TransUnion SFTP server. TransUnion later stated that more than 5 million consumers were actually affected, and once again offered a period of free credit reports to the affected customers.

    But Wait! There is more!
  4. On November 7, 2022 TransUnion reported to the Massachusetts Attorney General a data breach that could involve 200 million files profiling nearly every credit-active consumer in the United States. On the same day, TransUnion also sent data breach letters to all individuals whose information it believes was compromised. As this is still developing, the true impact is yet to be learned.

OK, I will stop here. If they had another data breach between Nov 7 2022 and the time this was published, it should not affect the point of this article.

The Outcomes

According to GDPR Recital 75, the adverse effects a personal data breach can have on an individual include loss of control over their personal data, limitation of their rights, discrimination, identity theft or fraud, financial loss, unauthorised reversal of pseudonymisation, damage to reputation, and loss of confidentiality of personal data protected by professional secrecy. So, if TransUnion were a European company, or people living in the European Economic Area (EEA) were affected by this personal data breach, as the data controller it would have to submit a Personal Data Breach Notification to the Supervisory Authority within 72 hours, unless there is no risk to the rights and freedoms of data subjects. In this case, they had better be reporting. The next step would be to inform all those who were possibly affected about what happened, what the consequences to them are, and what TransUnion is doing about it. Of course, those affected should be expected to file complaints with their regional Supervisory Authorities (Art. 77).
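
The 72-hour clock from Article 33 is simple enough to sketch; a minimal example in Python (the awareness date below is made up):

```python
from datetime import datetime, timedelta, timezone

# Minimal sketch: under GDPR Article 33, the controller must notify the
# supervisory authority within 72 hours of becoming aware of the breach
# (unless the breach is unlikely to risk data subjects' rights and freedoms).
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware: datetime) -> datetime:
    return became_aware + NOTIFICATION_WINDOW

aware = datetime(2022, 11, 7, 9, 0, tzinfo=timezone.utc)  # illustrative date
print(notification_deadline(aware))  # 2022-11-10 09:00:00+00:00
```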

In the United States things are a bit different. The U.S. Supreme Court's 2021 decision in TransUnion LLC v. Ramirez stated that only those who can show concrete harm have standing to seek damages against private defendants. How will victims of a personal data breach prove their personal information was stolen and disclosed through the negligence of the company holding this data, and as a result establish a violation of American consumer protection and privacy laws such as the California Consumer Privacy Act (CCPA) and the Illinois Biometric Information Privacy Act (BIPA)? Compare that with GDPR's already-mentioned Article 77 and Recital 141, which require only that the data subject (i.e. the victim in this case) considers that his or her rights have been infringed, or that the "supervisory authority does not act on a complaint, partially or wholly rejects or dismisses a complaint or does not act where such action is necessary to protect the rights of the data subject."

With that said, it is possible that this will change. Given that the US government and the European Union are currently working together to establish a new EU-US data flow deal (Privacy Shield 2?), one must wonder how they will balance this Supreme Court decision against GDPR. Which one will take precedence?

Fun Facts

  • I started this article by mentioning the phishing campaign TransUnion's servers were possibly being used to launch. What if that is related to this data breach? I mean, if your attack has been successful and you are already in the final (Actions on Objectives) stage of the cyber kill chain, taking your time to hoover up the victim's data, why not see what else you can do while you are there to pass the time?
  • In addition to its main line of business, TransUnion also offers services to help companies "protect and restore consumer confidence" after a data breach (it does not list an office address there). In fact, it bills itself as the "One-Stop-Shop Incident Response Solution."
  • I made those round images representing the 3 regulations mentioned here because I did not have an interesting image to put in this article. They turned out nice, so expect me to make more and use them in future posts. You have been warned!

Wednesday, December 21, 2022

FBI: Use ad blockers to protect against brand impersonation

Today the FBI announced that cyber criminals (easily recognizable, according to the news and many websites, by their predilection for wearing hoodies even in the summer) "are using search engine advertisement services to impersonate brands and direct users to malicious sites that host ransomware and steal login credentials and other financial information." Well, there are two parts to that:

Search engine advertising services

We are talking here about Search Engine Optimization (SEO), where you do magic tricks to move your website as close to the top of the search results as possible, since most people will not look past the first 2 pages of results for something. There are thousands of companies that make money helping businesses with this, including courses, Ez-Button products, and services ("give us your url, tell us what you think you do, and we will take care of the rest, for a price"). What the FBI is describing is the weaponization of that, which has been known as SEO poisoning since 2020. An example is when it was used to distribute BATLOADER malware.

Brand Impersonation

This is a traditional phishing tactic and relies on techniques such as (not exhaustive list):

  • typosquatting, which creates a fake website whose domain sounds close enough (within a typo or two) to that of a well-known website. These prey on people like me, who mistype a lot: if the browser returns a page that looks like the one victims expect instead of an error page, they may never notice they are on the wrong site. This kind of attack is old enough -- yet still quite effective -- to be mentioned in the 1999 Anticybersquatting Consumer Protection Act (ACPA)
  • URL shortening, which converts long descriptive links into short, sometimes cute, ones that give you no idea of where they really lead. Good ad blockers will check these links against lists of known spammers and block them, as shown in the picture below, where uBlock Origin blocks a shortened url identified by Peter Lowe's list of domains known for serving ad content, tracking users, hosting spyware, and occasionally serving malware and other nasty purposes.
    go.usa.gov being blocked by uBlock Origin
  • IDN homograph attack, where some of the characters in the url of a website are replaced by similar-looking characters (think 1 vs l), or by characters from a different alphabet that look the same in an HTML-formatted email. This can be seen as a more sophisticated version of typosquatting.
This leads the victim to a website containing the malware (think ransomware), some way to steal the victim's login credentials, or a combination of both.
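
A minimal sketch of how the typosquatting and homograph tactics above can be spotted programmatically, assuming a small hand-made brand allowlist (real tooling uses curated blocklists and Unicode confusable tables):

```python
# Illustrative checks only; KNOWN_BRANDS and the thresholds are assumptions.
KNOWN_BRANDS = {"example.com", "americanexpress.com"}

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance -- enough to flag one-typo lookalikes."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_suspicious(domain: str) -> bool:
    # Homograph hint: any non-ASCII character in the hostname.
    if any(ord(c) > 127 for c in domain):
        return True
    # Typosquatting hint: within a couple of edits of a known brand,
    # but not the brand itself.
    return any(0 < edit_distance(domain, brand) <= 2 for brand in KNOWN_BRANDS)

print(looks_suspicious("examp1e.com"))          # True (1 instead of l)
print(looks_suspicious("аmericanexpress.com"))  # True (Cyrillic 'а')
print(looks_suspicious("example.com"))          # False
```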

Is this a new form of attack?

Nope.

Are ad blockers enough to stop this kind of attack?

There are no magic pills. They can only do so much. I recommend stopping and checking the url of any search engine result that smells suspicious. Some of the attacks mentioned above -- typosquatting and homograph -- can even be stopped by pasting the url into a proper text editor (think Notepad on Windows or vim on Linux) that will not try to import fonts, and then just looking at it. With that said, I do use uBlock Origin myself; the picture at the top of this article is mine.

Should I panic and flail my arms while running in circles?

You could; if you do, make a video of it.

Do you have links for those apps/extensions you mentioned?

Thursday, December 1, 2022

Phishing Is Too Easy - 5: Season to be Scammed Edition

Good news everyone: There are phishers who take pride in their work

We continue our series on phishing emails. I am glad to say a phisher heard my plea and stepped up to the challenge before Black Friday ended!

We have here an email claiming to come from American Express which states there is a problem with my card and I need to click on the link to find out more. Let's ignore the question of whether or not I even have an American Express card, or this article would have ended right here. The timing was good: lots of people are going crazy purchasing millions of trinkets online, and then they receive an email saying their card has a problem. Did they go over the limit? Was its information stolen?

Good show old boy!

If I had such a card, what should I do next? The answer depends on how much effort we want to put in this:

For the impatient

You can't see in the picture but the From: field looks like this:

From: American Express MyCredit Guide <transunion@em-tuci.transunion.com>
Why would TransUnion, a US consumer credit reporting company, be sending emails for American Express? This should be enough for us to immediately drop this email and move on.
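
That check can even be automated; a quick sketch using Python's standard email.utils (the helper name and the brand-matching rule are my own simplification):

```python
from email.utils import parseaddr

# Quick sanity check on the From: field above -- does the sender's domain
# match the brand the message claims to be from?
def sender_domain(from_header: str) -> str:
    _, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower()

hdr = "American Express MyCredit Guide <transunion@em-tuci.transunion.com>"
domain = sender_domain(hdr)
print(domain)                                  # em-tuci.transunion.com
print(domain.endswith("americanexpress.com"))  # False -> treat as suspicious
```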

For the willing to spend a bit more time

First of all, when in doubt about whether a suspicious email is legit or not, find the official contact number/email of the company in question and reach out to them. In this case, I did call them. American Express said that if they send an email, it will contain

  • Your name.
  • The last 4 digits of your card.
This email only contains the first name, so per American Express, it is at best suspicious. They did ask me to forward it to spoof@americanexpress.com, which I did.

For those with time to deep dive and ponder on the implications

Some of you may remember that TransUnion suffered a data breach recently. What if this data is being used to create targeted phishing emails? And what if the criminals are able to either impersonate TransUnion email addresses, or still have access to TransUnion's servers so they can send emails through them? To answer that we need to look in the email header:

ARC-Authentication-Results: i=1; mx.google.com;
       dkim=pass header.i=@em-tuci.transunion.com header.s=scph0919 header.b="ou/BSRUG";
       spf=pass (google.com: domain of msprvs1=19329inrhx0ms=bounces-266758@bounce.em-tuci.transunion.com 
designates 147.253.210.36 as permitted sender) smtp.mailfrom="msprvs1=19329inrhX0MS=bounces-266758@bounce.em-tuci.transunion.com";
       dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=em-tuci.transunion.com
Return-Path: <msprvs1=19329inrhX0MS=bounces-266758@bounce.em-tuci.transunion.com>
Received: from mta-210-36.sparkpostmail.com (mta-210-36.sparkpostmail.com. [147.253.210.36])
        by mx.google.com with ESMTPS id 62-20020a630141000000b004778207ac4dsi7561754pgb.396.2022.11.26.12.06.50
        for Clueless Sheep
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Sat, 26 Nov 2022 12:06:50 -0800 (PST)
Received-SPF: pass (google.com: domain of msprvs1=19329inrhx0ms=bounces-266758@bounce.em-tuci.transunion.com designates 147.253.210.36 as permitted sender) client-ip=147.253.210.36;
Authentication-Results: mx.google.com;
       dkim=pass header.i=@em-tuci.transunion.com header.s=scph0919 header.b="ou/BSRUG";
       spf=pass (google.com: domain of msprvs1=19329inrhx0ms=bounces-266758@bounce.em-tuci.transunion.com designates 147.253.210.36 as permitted sender) smtp.mailfrom="msprvs1=19329inrhX0MS=bounces-266758@bounce.em-tuci.transunion.com";
       dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=em-tuci.transunion.com
X-MSFBL: fXbaPXh+ne/E8ZM3Y6OyFt9TLlavvIujqeENrG6IrbY=|eyJyIjoicmF1YnZvZ2V sQGdtYWlsLmNvbSIsIm1lc3NhZ2VfaWQiOiI2MzgxZGE3MTgyNjM0YmI3ZmY3ZiI sInN1YmFjY291bnRfaWQiOiIwIiwiY3VzdG9tZXJfaWQiOiIyNjY3NTgiLCJ0ZW5 hbnRfaWQiOiJzcGMifQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=em-tuci.transunion.com; s=scph0919; t=1669493210; i=@em-tuci.transunion.com; bh=g54YI3MysS1MVd8EV8xjgfkc97E2Z2epcQAJzoXhCkw=; h=To:Message-ID:Date:Content-Type:Subject:From:List-Unsubscribe:
	 From:To:Cc:Subject; b=ou/BSRUG3cUbJKbYUZ1LVr3J0Z3xP7nFJPUjPutaxPAlyQU2bd2vFDbfNHxdU0LbB
	 HxEwc9YzSTrKnrbFfjcLwSxfZk48k6br1t4DI9fsDgWAimdohpxIGKK6ukD2NE1q/L
	 SESZw9WVeXNvoEVjsYIPh67accGucYF32laIH8ICsqeopmxSoaxsrjHBa/MBjqYZAz
	 8r+jHG+Ilr/QzlJ0Lq5rGA/hJGnHR3lPbkuVRFBsrnV9841IbsIpQDVOUdW172sQbQ
	 zZ+JErYKYYvpwmjqd6A4XMPu3TG9QcymMjHHYqcXRmtL4OdKzB8GKtksDI4uLakZkw
	 8HR0NVWvPUjzQ==

At first glance it seems the email came straight from TransUnion, specifically from a host called em-tuci.transunion.com. But then we find the most interesting entry in the header excerpt above (which I highlighted):

Received: from mta-210-36.sparkpostmail.com (mta-210-36.sparkpostmail.com. [147.253.210.36])

It seems this email came from mta-210-36.sparkpostmail.com, whose IP (147.253.210.36) has been whitelisted by bounce.em-tuci.transunion.com as a sender. From there it ends up in the Clueless's Gmail account, relying on TransUnion's servers' relationship with Google's.
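
The same check can be scripted: the topmost Received: header records the last hop into your mail provider, i.e. the host that actually handed the message over. A sketch using Python's standard email module, with a trimmed copy of the header above (the second, earlier hop is made up for illustration):

```python
import email
import re

# The topmost Received: header is the newest hop -- here SparkPost's MTA,
# not a TransUnion machine. This raw message is a trimmed, partly invented
# copy of the real header, just for demonstration.
raw = """\
Received: from mta-210-36.sparkpostmail.com (mta-210-36.sparkpostmail.com. [147.253.210.36])
        by mx.google.com with ESMTPS id 62-20020a63
Received: from internal.example (unknown [10.0.0.5])
        by mta-210-36.sparkpostmail.com
Subject: Your American Express account

body
"""

msg = email.message_from_string(raw)
hops = msg.get_all("Received")  # listed newest hop first
first_hop = re.match(r"from\s+(\S+)", hops[0]).group(1)
print(first_hop)  # mta-210-36.sparkpostmail.com
```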

But, who is SparkPost?

Short version: it is a mass emailing service. They seem to be well-known enough for Microsoft to have instructions on how to access them using a connector from within Azure. Does that mean they were compromised, or did the attackers obtain TransUnion's credentials to use this service?

So, is this Spearphishing via Service (T1566.003)?

If we read the MITRE ATT&CK® entry, it sounds like a very good possibility.

Some kind of Conclusion

Even though this phishing email was much better thought out than that insult mentioned in the last entry of the series, if you stop and examine it -- without first clicking on its links -- you can still identify it as such rather quickly, without needing to tear through its raw contents. Don't get me wrong: doing that is fun, but if you are just trying to go through your daily routine and see this email, in less than 5 minutes you can make a call on whether it is legit or suspicious.

Ok, more if you have to wait on the phone listening to elevator music to talk to a company to verify whether they sent said email.

Friday, November 25, 2022

Phishing Is Too Easy - 4: Season to be Scammed Edition

It is Black Friday! And we are in the Season to be Scammed! A few moments ago (I am typing this as fast as I can) I received the following phishing email:

Phishing email pretending to be dicks sporting goods. Description of what to look out for is written below

Its call to action is the claim that Dick's (insert jokes here) Sporting Goods decided out of the blue to give me a Yeti cooler if I just click on the "Confirm Now!" link. I usually would spend the time (see the last phishing article I wrote) and look at the email's source to see if it has any interesting telltale signs of phishing. But this phisher is so lazy he does not deserve a deep dive on the email. So, let me count the ways this is a scam:

  1. Why would Dick's want to send me a cooler? They do have a store here, but I make a point of not going there. So they do not know I exist... unless they bought my name off a list. If that is the case, I feel I should ignore them even more.
  2. Why is the name in the return address "Dicks SportinGoods" (blue line) instead of "Dicks Sporting Goods"?
  3. Why is the domain of the return address celimopafeseda (red line)? I could say that I could not find that domain registered anywhere I bothered to look (I spent some extra time I really did not need to for this article), but let's be honest: this has nothing to do with dicks.
  4. If I had spent the time and looked at the email's header, I would have seen it was sent through outlook.com. But I will not. I am not saying being mailed through Outlook is a telltale sign of a phishing email, but I do not like how the path it took while inside their network is obscured. Still, a short post this is.

As a result, I think we can safely label this as phishing and move on.

I am disappointed by the lack of pride this phisher has. Do you think some other phisher will redeem my faith in them, or is this the best I can expect this Friday?

Saturday, November 5, 2022

On the rise of work-at-home employee tracking

When COVID became a global pandemic, many companies which had previously frowned upon teleworking asked their employees to work from home whenever possible. That raised a concern: how would managers verify their underlings were spending their work hours doing the tasks assigned to them? There are many ways to track employees' time, but the one that has become increasingly popular is employee monitoring software. A survey of 1,250 employers by Digital.com found that 6 out of 10 employers require monitoring software for their remote workers.

Why Are Employees Being Tracked?

Employers want to manage their workforce and understand how employees are spending their time. They see employees taking a break from their work tasks and using social media or dealing with their family as a potential drain on their productivity, or time theft. According to Digital.com, more than half of monitored employees spend more than 3 hours every day on non-work activities on company time.

If a business offers consulting services, it has a vested interest in logging its workers' time with a customer so it can properly bill said customer. Also, the FLSA requires employers to keep accurate time records for each hourly employee, and to retain them for 3 years.

What is Being Tracked?

Even though this kind of software has been called an extension of traditional time-tracking systems, what it records is more expansive than simple time-tracking:

  • Random screenshots
  • Location (using GPS)
  • Websites visited
  • Email logs
  • Any sounds in the immediate area, using the device's microphone
  • The camera
  • Anything that has been typed (keylogging) and any mouse movements (mouse logging)

Privacy Concerns

"Most employees are OK with (installing employee tracking software). As long as you tell the employee you're implementing it, it's entirely legal" according to Enzo Logozzo, director of sales and marketing for 365 IT Solutions, Toronto. That is not necessarily the case.

  • Per GDPR, consent here is not freely given as there is the risk a refusal to consent to have the software installed may result in the employee being fired. Canadian news media reported recently about a school janitor in Alberta, Canada, who refused last fall to download a mobile app that would help her employer confirm workers were on the job where and when scheduled. She was fired weeks later.
  • While the Canadian privacy law, PIPEDA, states that collection and disclosure of personal data by a company from its employees without their consent is allowed in certain situations, the onus is on the company to justify that the collection of data was done for a specific business purpose.
  • Traditionally, American privacy laws such as CCPA are much more lenient towards the business. However, employee tracking software can place companies at odds with other federal regulations. We must expect that some of those working from home will, on occasion, contact their children's teacher or doctor during working hours. Recording these conversations conflicts with HIPAA and FERPA.
  • Using a computer built-in microphone may be subject to state wiretap and eavesdropping laws.

Other Issues

In addition to legal issues, aggressive employee monitoring negatively affects business:

  • Employees lose trust in the company. 14% of companies have not informed employees they deployed this software.
  • Once workers find out employee tracking is in use while they work at home, their stress levels increase. According to a study run by the insurance company Colonial Life, 26% of employees said stress was making them less productive and 15% reported feeling less engaged with their job. That is no surprise, as 88% of employers terminated workers after implementing monitoring software.
  • Devices running employee surveillance software are a juicy target for malicious individuals. As these individuals want to collect passwords and other personal information, attacking a computer with employee tracking software saves them time and effort.

Living with Employee Surveillance Software

Protecting your privacy as an employee

  • Ensure the company issues you its computer, to minimize the chances of having personal and work data on the same system.
  • Minimize using the work computer for personal applications. Ideally you should avoid it entirely, but if that is not possible, this is the next best thing. It may help to remember the work computer may be taken back at any time for any reason; it is theirs, after all.
  • Ask if they will issue you a work phone. If they will not, but still demand you install their app on your personal phone, there are apps to help with that. In fact, that is one of the topics we covered in our DEFCON workshop, and something we recommend when dealing with IoT devices. Otherwise, get yourself a dumb phone and show that as the phone you have.
  • Put the work computer/device on a separate network from your home one. This may require technical help; VLANs are a great start, but the sky is the limit.
  • Create a private location for your workspace. Ideally one where the door is in front of you (behind the computer). Getting a green screen is also recommended.
  • Assume the work computer's microphone and camera are always on, so once your work hours are done, place it in a box with sound-absorbing foam.
  • Some companies may offer you an exercise tracking device such as a Fitbit. Politely refuse it, as it records your biometric data, which may violate GDPR if you are subject to it.

Protecting your company's needs while respecting the privacy of your employees

  • Have a clear policy outlining the justification for surveillance
  • Ensure employees understand why they are being tracked
  • Obtain consent from your employees if you are installing employee surveillance programs on their computers and phones. Note that if consent is a requirement to work, it is not freely given.
  • Ensure tracking stops after working hours.
  • Hire a professional such as Privacy Test Driver to ensure you comply with relevant privacy laws and provide an environment that fosters productivity while protecting both your company and its employees.

Monday, October 31, 2022

Unintentionally helping others steal your biometric data

The pieces of the puzzle

  1. Let's start by stating the obvious: people upload a lot of videos and images to social media showing their family vacations, new dance moves, and, yes, twerking. These files are publicly available and can be easily gathered. Do you remember the old warning about being very careful about what you share on the internet? The security and privacy concerns were about showing where you live, who your family members are, and when you will be out of your house. Thanks to advancements in AI, we can add a new reason to slow down posting so much about ourselves.
  2. Biometric-based authentication is the process of authenticating people based on something you are, i.e. a unique physical feature -- fingerprint, iris, or retina, to name a few -- instead of something you know (a password) or have (a token). Some of its applications are multifactor authentication and face recognition, which are used to unlock smartphones and identify people in a crowd.
  3. Deepfake is an evolution of the tradition of inserting (or removing) people in pictures and videos using cropping and blue screens. Benign results have been seen in movies like Zelig and Forrest Gump; George Orwell's 1984 talks about using that for malign purposes, namely rewriting history. The difference is that, thanks to AI, deepfaking is automated to the point it runs in real time. The classic example of the potential of this technology is a Tom Cruise deepfake video created by Belgian visual effects artist Chris Ume:

    It did not take long for malicious individuals to apply deepfakes to create celebrity porn videos, fake news, hoaxes, and financial fraud. What about average people? They are not famous politicians, singers, or athletes; can they shrug it off, saying "this does not affect me; I am too small of a target for them to have an interest in me," like they have done many times before, or should they be worried? The reality is that

    • Attackers are always looking for opportunities, and will strike at the low hanging fruit.
    • The cost of the resources required to deepfake has dropped a lot in the last few years.

Let's have some fun

How can we combine that? In 2007 (yes, time flies), Microsoft identified the following as the most popular types of biometric authentication devices of the time:

  • Fingerprint scanners
  • Facial pattern recognition devices
  • Hand geometry recognition devices
  • Iris scan identification devices
  • Retinal scan identification devices
Nowadays we can do all of that using just a camera. Let's consider a few applications that are possible today:
  • Videos and pictures collected from your social media provide enough info about your face to unlock your phone.
  • Inserting you into the CCTV records of a riot is just a matter of being able to access said records and change them. The only limiting factor here is bypassing tampering detection, which is not as common as you are led to believe. Yes, we are not at the Ghost In The Shell level, where video streams were being tampered with in real time at the camera level, but there is enough knowledge to do some damage right now.
  • Back to those high-quality videos found on social media: they are (not may be) good enough to collect your fingerprints or ear shape. The latter has been successfully used to identify people wearing masks in riots.
  • Saving the best for last, imagine someone using deepfakes, after collecting your videos for images and voice samples, to hold a webconference with your children's school or doctor. I will leave it to your imagination to ponder the consequences of that. Before you say anything, the Tom Cruise video I mentioned earlier is now old from Moore's Law's point of view.
We could go over an example of how to do that, but that is not the point of this article. If you thought your identity and, as a result, your privacy were at risk before, I think we have reached a whole new level.

What can be done to minimize exposing biometric data?

Think before posting! This rule has not changed. There are some who argue that the millennial and Gen Z crowds are the biggest offenders, but this is just a matter of training. If you have to post, be mindful of what is being exposed. Or cut down the quality of the pictures a bit, so the bad guys do not have a nice clean image to start with. As for the images and videos you already posted: once they are out on the internet, there is no coming back.

Make protecting your privacy a priority in your life. If people are going to steal your data, make them work for it.

Further reading

Trend Micro published a great paper on the risks of exposed biometric data.

Friday, September 30, 2022

Optus and how to DevOps badly in a few easy steps

Full disclosure: I put Optus in the title because those LinkedIn articles advocate the need for clickbait to attract viewers. Problem is, I am actually going to talk about this company. But this article is really about code development gone bad; Optus just happens to be the perfect example of, in the words of Jeremy Clarkson, what could possibly go wrong.

We are Agile!

In earlier, simpler times, the recommended software development lifecycle model (SDLC for you acronym addicts) was the Waterfall Model. There are many places which describe it better than I ever could, so suffice it to say that it is linear: it starts with the idea, then goes to the design, and then through a few steps including coding and testing, until the product is deployed and goes into maintenance mode. In other words, you start with an idea and end up with a product.

Making the code secure, or implementing (buzzword time!) privacy by design, was fairly easy if the security and privacy team was involved from the get-go, as that was just another well-defined step.

But what if the product needs to be changed? As in, not just a patch, but a feature request or something that requires a new library or a user interface redesign. You need to go back to the start.

You could say it is a bit rigid, and many people would agree with you. The next step was modifying the model so you could hop back a step or two, and that started to get messy. The bottom line is that it does not take changes well. In many fields that is completely fine. However, for code which is always changing and put into production as soon as changes are done, as on a website, it can slow down delivering a working product. In some industries, whoever puts it out first, even if it is not perfect, wins. So, we need something better.

We evolved into the Agile model, which, as a friend taught me, is also called the "Never Finished Model." What the joke implies is that this model is designed to handle changes quickly and deliver a working product even if it is not perfect, the reasoning being that you can improve it later once you have some feedback from customers.

The following picture shows a typical Continuous Integration/Continuous Deployment (CI/CD) pipeline, which is a trademark of using the Agile model in code development. How do we account for security and privacy here? DevSecOps places security controls in the CI/CD process of DevOps. Note the two red boxes: they are the points where we add security testing to the cycle, one for Static Application Security Testing (SAST) and one for Dynamic Application Security Testing (DAST). The red arrows indicate that any funny business they find is then sent to something which logs and reports it by creating tickets, sending emails, or some other means. This is, of course, ideally done in conjunction with training developers in secure coding, (Buzzword Alert!) privacy by design, and whatnot.
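To make the SAST idea concrete, here is a toy sketch in Python of what that red box does: scan source code against a rule set, collect findings, and turn them into tickets. The rule names and the two regexes are invented for illustration; real scanners (Bandit, Semgrep, and friends) ship far richer rule sets.

```python
import re

# Toy rules a SAST scanner might apply; purely illustrative.
RULES = {
    "unencrypted-url": re.compile(r"http://[^\s\"']+"),
    "hardcoded-password": re.compile(r"password\s*=\s*[\"'][^\"']+[\"']", re.I),
}

def scan(source: str):
    """Return a list of (rule, match) findings for one source file."""
    findings = []
    for rule, pattern in RULES.items():
        for match in pattern.findall(source):
            findings.append((rule, match))
    return findings

def report(findings):
    """Stand-in for the 'log and create tickets' box in the pipeline."""
    return [f"TICKET: {rule}: {match}" for rule, match in findings]

# Exactly the two smells mentioned in the text: plain http and a
# hardcoded credential.
snippet = 'API_URL = "http://api.example.com/v1"\npassword = "hunter2"\n'
for ticket in report(scan(snippet)):
    print(ticket)
```

The point is not the regexes but the flow: every finding becomes a ticket a developer has to deal with, which is where the "it slows us down" friction comes from.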

In reality, some companies/developers who should know better decide that slows them down and hampers their style. In other words, they need to be putting out new code with new features, and privacy and security are not features but cost centers.

Enter the Optus

Singtel Optus Pty Limited, a.k.a. Optus, is the second largest wireless carrier in Australia. In the last week of September 2022, Optus reported that on 22 September 2022 it was the victim of a very sophisticated cyberattack by members of a criminal or state-sponsored organization. This attack resulted in a major personal data breach, where the names, dates of birth, phone numbers, email addresses, street addresses, driver's licences, and passport numbers of both current and former customers were leaked. Optus chief executive Kelly Bayer Rosmarin said that they "are not aware of customers having suffered any harm."

Insert here the videos of a guy in a hoodie in a dark room and computer screens showing random Linux output.

What does this very sophisticated cyberattack have to do with coding?

Glad you asked.

You see, later on it was found that Optus had an unauthenticated API, http://api.www.optus.com.au, that released all of the personal data it stored, not only of current but also of previous customers (there is the case of someone who has not been an Optus customer for the last 14 years and not only received an email from them about the breach but also started to be flooded with spam). We are talking about data from 10 million people. Unencrypted.

Optus detected the event when the attacker started hitting the API hard.

So, the questions are

  1. Why did it have an exposed API without some kind of authentication? Perhaps that was originally done to make testing the API more convenient for developers. I myself have seen that in the wild. When developers/DevOps from the environment in question were asked to at least limit access to a network only reachable from behind their firewall, they shrugged it off, saying the VPN (which is not a solution but sure is an improvement) was too cumbersome to use from their personal laptops.
  2. Why was the connection to said exposed API unencrypted? Do you remember when we said that DevSecOps places security controls in the CI/CD process? That probably would have caught it: the SAST would have noticed the unencrypted connections in the code; the ones I have used before would bark at unencrypted traffic (and hardcoded passwords, which were not an issue here since no passwords were used at all). In the real world that does not happen as much as people believe. In fact, it is too common to hear that DevSecOps slows down DevOps' work.
  3. Why was the personal data stored unencrypted? Once again, convenience. Maybe encryption was recommended and then turned down because developers argued it would slow the response time of the system. Once again, SAST would have caught that.

Clearly there were poor security practices at play here. Perhaps DevOps security and privacy training never happened, or SAST/DAST was never implemented in the SDLC chain. Usually that happens because they are considered cost centers which, as mentioned earlier, slow down progress. Remember we mentioned that automated security testing will create tickets developers will have to deal with in addition to the other tickets they already have on their plates.

Post Mortem

Don't be that guy!

  • Privacy by design would not have allowed this kind of code to even make it into the repo.
  • Encrypt the traffic to the API, period. Ideally that should be done at the API level. I know some people will put an Nginx proxy in front of the unencrypted API (using Kubernetes or Docker), and I cringe at that: it is an improvement over the Optus setup but not by much.
  • Encrypt your data at rest. Yes, that is especially important for personal data, but it is a good habit regardless.
  • All connections to an API should be authenticated by default. If there is a query, say a status listing, you want to make available to unauthenticated users, spend some serious time thinking about the consequences.
  • Ensure your CI/CD process has proper security controls. If DevOps is being swamped with the tickets generated by these controls, it may mean they need more security and privacy training, or the controls need better tuning, or the external code/libraries you rely on are not as well written as they should be. That is how BadUSB and many of the IoT issues came into being.
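The "authenticated by default" point can be sketched in a few lines of Python. Everything here is invented for illustration -- the token, the request shape, and the endpoint name -- and a real service would also sit behind TLS and proper key management; the idea is simply deny by default, allow by exception.

```python
import hmac

# Hypothetical token store; a real deployment would issue, rotate, and
# persist API keys properly.
VALID_TOKENS = {"example-token-123"}

def authenticated(handler):
    """Decorator: the endpoint refuses any request without a valid token."""
    def wrapper(request):
        token = request.get("headers", {}).get("Authorization", "")
        # compare_digest avoids leaking information via timing differences
        if not any(hmac.compare_digest(token, t) for t in VALID_TOKENS):
            return {"status": 401, "body": "authentication required"}
        return handler(request)
    return wrapper

@authenticated
def customer_records(request):
    # Stand-in for the kind of endpoint that leaked the Optus data.
    return {"status": 200, "body": "records for authenticated callers only"}

print(customer_records({"headers": {}})["status"])  # 401
print(customer_records({"headers": {"Authorization": "example-token-123"}})["status"])  # 200
```

Wrapping every handler this way makes an unauthenticated endpoint a deliberate decision rather than the default.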

Monday, September 26, 2022

Phishing Is Too Easy - 3

Last week I received another traditional phishing email; apologies for the lack of images, as my email account is set up not to load externally hosted pictures. Here it is, with my address removed:

Phishing email disguised as an invoice with an attached PDF pretending to come from Norton

Yes, this is pretty much a variation of the last one I commented on months ago, namely:

  • It is an invoice for some product; in this case it implies it is some kind of Norton product.
  • It creates a veil of credibility by alluding (blue box), in a rather half-assed way, to being related to a real company. Note it claims to be "Norton Support LLC," which I have no idea who it may be. Since the average person has probably heard of Norton, who sells an antivirus and other security products, it is easy for said person to associate the two.
  • Still on the credibility front, the sender address is supposedly from QuickBooks (I did not bother to check the header). Yes, a large company like Norton would not be using QuickBooks to send its bills. However, if you have to deal with purchasing, you have probably seen invoices from smaller businesses which use the online QuickBooks site; when they send their invoices, those invoices will have "<quickbooks@notification.intuit.com>" as the email address. But we would hope they look more like "Something Of Doom LLC <quickbooks@notification.intuit.com>" instead of "Intuit E-Commerce Service <quickbooks@notification.intuit.com>"; I think the latter is not the default value, but it sounds credible enough.
  • To create urgency, the invoice is for $800. That will make someone's heart beat a bit faster and make them immediately want to open the attached PDF file (red box) to find out what this invoice is all about. This is a bit lazier than the last phishing email we posted about, as some mail services will disable attachments with macros in the hope of blocking malicious payloads. However, most mail services do not do that; mine could not be bothered and told me that if I want to see it, and be properly infected, I need Adobe Acrobat Reader (green box). Since my mail service does not automagically open anything, I have some extra time to read the email and decide what I want to do next.
  • It provides a number which may be tied to the phisher (VoIP?) so if the frantic recipient of the email calls, the phisher (we called him Peggy in the last phishing post) can then social engineer his way into the victim's computer.
  • The return address is a typical quasi-randomly created Gmail one; they could not be bothered with making it sound like it came from a billing department as it claims to be.

How effective is it? I think it depends on where people focus. The phishers hope their marks will see the value of the invoice -- $800 -- and immediately open the PDF to find out what is going on. The best thing to do here is stop -- but not stop/drop/roll, as you are not on fire -- whenever you see something suspicious, especially when it claims to be urgent. Then ask yourself if you expected an invoice from Norton. Then look at the email addresses and see if they are not overly suspicious.
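For the technically inclined, the "look at the email addresses" step can even be sketched as a toy heuristic in Python. The free-mail list, function name, and sample address are all made up for illustration; this is nowhere near a real spam filter, just the mismatch a human should notice, automated.

```python
from email.utils import parseaddr

# Free mail domains a corporate biller would be unlikely to send from.
FREEMAIL = {"gmail.com", "outlook.com", "yahoo.com"}

def suspicious_sender(from_header: str, claimed_company: str) -> bool:
    """Flag a From: header whose display name claims a company but whose
    actual address is a throwaway free-mail account."""
    display, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    claims_company = claimed_company.lower() in display.lower()
    return claims_company and domain in FREEMAIL

print(suspicious_sender('"Norton Support LLC" <xk42q@gmail.com>', "Norton"))  # True
print(suspicious_sender(
    'Intuit E-Commerce Service <quickbooks@notification.intuit.com>', "Intuit"))  # False
```

A "Norton" invoice sent from a random Gmail address trips the check; a QuickBooks invoice coming from intuit.com does not.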

Remember: phishers are lazy, and they hope you are equally lazy!

Saturday, September 17, 2022

There and back again: DEFCON 30

Second slide in the workshop reminds the audience we had put instructions on github for what to do before attending the event.

No, I did not postpone posting about my trip to DEFCON30 until now because I did not have anything to post this month. The truth is I was slacking. There, I said it.

This will be a bit of a post mortem of our workshop. Will this post have any useful info? Don't hold your breath; what I can promise is there will be many opportunities to laugh at our expense.

The Plan

For those who read the announcement for our workshop at the Crypto and Privacy Village, you know there are two authors -- Matt and yours truly -- who put together this mess without killing each other; the fact we had half a continent between us probably helped.

Originally, the plan was to start with an explanation of why this phone privacy thing was so important and then show how to do it. Ideally people would have read the announcement, followed our instructions, and shown up with a phone ready to be configured. While one of us was at the podium, the other would be helping the audience.

After we had the entire workshop done and did a few dry runs, we started thinking: how many people will bring a phone that meets the requirements? Probably not many -- not many people have spare phones that can take CalyxOS or LineageOS in their kitchen drawer -- and we would not be able to bring enough loaners, as all the resources in the workshop came out of our own pockets. We could just shrug it off and tell people, "Hey, you did not bring a phone, so we will bore you with screenshots."

Thing is, we had taken a lot of screenshots of everything we would be showing on the phone, in case we were not able to share the phone screen or point a camera at it. So this was an option, but we felt it would detract from the workshop; instead of being something interactive it would be no better than watching a video.

We needed a plan B.

What if we provided an emulator? It would not do everything a real phone can, but it would allow the audience to follow along on their laptops. Since we were going to focus on CalyxOS (we had only an hour to run the entire workshop; compromises had to be made), we decided to create that image, make it available somewhere, and then update the wiki with instructions on how to use it. We also asked the Crypto and Privacy Village (CPV) people to add a single line to the workshop announcement, indicated with a green line in the picture below, to tell people they should install Android Studio on their laptops.

Workshop announcement, with the line 'Alternatively, a laptop with Android Studio installed' added to it, indicating you may want to install it if you do not have a phone to use in the hands-on bit

The plan was to have everything finished two weeks before the event and then take the last week to practice, and ensure we had a reliable way to hand out the emulator images.

Things did not happen according to the plan.

Matt was able to be at DEFCON from the beginning of the event; I do not know if he was also able to stop by BSidesLV. I, on the other hand, was a bit more time constrained: I took the first flight out on Friday and was going to return on Saturday after the workshop. In any case, we were going to try to attend as many events and talks as possible, and meet up with people we had not seen in ages. I also planned on volunteering at the CPV.

What really happened?

  1. Building the CalyxOS phone image was not as smooth as we hoped. In plain English, I could not make it work. I had no issues building LineageOS ones in my Docker build environment -- if someone reminds me I can post instructions on how to do that later -- but CalyxOS was fighting me all the way. Fortunately we were working in parallel, and Matt was able to make it work.

    I will let Matt post how to create the CalyxOS image with all the apps already installed in his blog, as he is the one that made it work. In fact, it worked so well, he used that instead of a real phone during the hands-on part of the workshop.

  2. We spent too much time trying to come up with a clever way to deploy the phone image. After days of frustration we came up with a simpler way to do that, wrote the docs that worked whether you had a Linux, Mac, or Windows laptop, and put it with the image.
  3. The emulator stopped working. I do not know why but it went on strike. More frustration ensued. Was it the emulator itself or the image? Once again Matt rose to the occasion and made it work.
  4. We also found out it would take too long to download the image we built using the DEFCON public network. Fortunately we had a bunch of USB drives, so we formatted each in a Windows file system (so all 3 OSs could mount them) and put the image and instructions on them.
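As an aside, when handing out a multi-gigabyte image on a pile of USB drives, publishing a checksum lets attendees confirm their copy is intact. A small Python sketch (the file name and contents below are stand-ins for the real system image):

```python
import hashlib
import tempfile

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash a (possibly multi-gigabyte) image in chunks so we never
    hold the whole file in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while block := fh.read(chunk):
            digest.update(block)
    return digest.hexdigest()

# Demo with a stand-in "image" file; for the event you would hash the
# real image once and print the checksum in the instructions.
with tempfile.NamedTemporaryFile(delete=False) as fh:
    fh.write(b"pretend this is a CalyxOS emulator image")
    image_path = fh.name

print(sha256_of(image_path))
```

Attendees can then run the same hash on their copy and compare against the published value before spending time debugging a corrupted image.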

There are probably more things that went wrong, but I cannot think of them right now. Bottom line is we spent most of the time that week working on these bugs. And, we made it work.

Showtime

The CPV people did a great job. Everything was working smoothly on their side. I did most of the overview and then Matt took over for the technical part:

Matt Nash presenting the hands-on part of the workshop. Audience is spaced out following the social distancing requirements

You will note in the above picture that the audience (the picture was taken from the back out of respect) has set some chairs apart for social distancing's sake. I then came back to the podium, sporting one of my favourite shirts (bonus points if you recognize it), for the final comments, and we then took questions. After it ended, Matt was surrounded at the podium by members of the audience for a long while until the DEFCON Goons kicked us out.

Mauricio Tavares on the podium spreading lies and misinformation while sporting the classic Oregon Trail shirt.

Thank you for all the fish

  • Avi Zajac and the rest of the Crypto and Privacy Village crew for not only having us there but making the event possible. And the badge. And the shirt (I am afraid of wearing it out because it is nice). And keeping the Goons at bay. And the sticker!
  • The NCC Group for mentioning us in its August announcement.
  • DEFCON for, well, being DEFCON. I do wish I had more time to see it all this year instead of being in a hotel room trying to get it all working. But it was all worth it in the end.
  • CalyxOS for trying to make a more secure and private Android distro easier to install. There is more around this line item, but I am getting ahead of myself.

Wednesday, August 31, 2022

Good Cookies, Bad Cookies, and Privacy

Cookie "banners" are a particular pet peeve of mine. As in, don't get me started or I will be on it for hours if not days on end. So, I will struggle a bit to keep this short enough so as not to kill any reader with boredom. I am not claiming I will accomplish this goal, so you have been warned.

I should also warn that this article has been months in the making; I collected a lot of real samples and covered the names of the companies to protect the guilty. If you recognize a site by looking at its cookie policy form, smirk and keep it to yourself.

So, are cookies bad?

That is an oversimplified question. Cookies are used to track what users are doing on a website, and that may mean storing some personal data not only of site users but also of visitors. Some have very valid and important applications, like ensuring users can authenticate and are the right people to access a given resource, like their bank accounts or repository of cat videos. Then we have the ones companies are interested in, such as:

  • Which pages users go to on a given website, links they have clicked, and how long they spent on a given page. That may help them figure out which content -- primarily cat videos -- their audience seeks and which they are avoiding. Or find out whether a given page is too convoluted, causing visitors to spend too much time on it in frustration. I can see why anyone would want to provide a website that does not suck.
  • How often they visit a website whose cookie is in their computers.
  • Which products or keywords they search for. This may tell them which product lines the website needs to be providing and which ones may be taken down.
  • Geolocation and IP address. A business case is to know where its customers are coming from so they can identify markets they are not covering, and then find out why.
  • Username/password, and even address. Do not ask me why someone thought it was a clever idea to have them in cookies so forms would be conveniently filled, but they are there in the wild.

None of these are really needed to provide a service to users, so GDPR would say you must ask visitors whether they give you consent (Articles 6, 7, and Recital 32) to collect said data, and provide a way for them to withdraw their consent. CCPA and CPRA are less restrictive, having a set of thresholds (selling personal information of more than 50,000 Californian households, or making more than half of annual revenue selling that data) before they are applicable and providing a get-out-of-jail-free card (Art.9(2),e).

Some of these cookies are set by the company running the website (first-party cookies) and others by whatever add-on they have deployed (third-party cookies). Google Analytics is an example of an app that creates the latter; we have talked about how nicely it plays with GDPR before. However, that does not necessarily make first-party cookies better for security; but that is the topic of another article.

From a security standpoint, criminals will try to steal cookies to impersonate users -- phishing emails are a popular way to deploy malware to achieve that goal. So, a sensible business minimizes how much data it stores in its cookies.
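As a sketch of what "minimize and protect" can look like, here is how Python's standard library sets the usual protective cookie flags. The cookie name and value are invented; the point is that the cookie carries only an opaque identifier, with the real data looked up server-side.

```python
from http.cookies import SimpleCookie

# A session cookie that stores only an opaque ID -- no username,
# address, or other personal data -- with the protective flags set.
cookie = SimpleCookie()
cookie["session"] = "opaque-random-id"      # look up the real data server-side
cookie["session"]["secure"] = True          # only sent over HTTPS
cookie["session"]["httponly"] = True        # invisible to page JavaScript
cookie["session"]["samesite"] = "Strict"    # not sent on cross-site requests
cookie["session"]["max-age"] = 3600         # expire instead of living forever

print(cookie["session"].OutputString())
```

Even if a phisher manages to steal this cookie, they get a short-lived opaque token rather than a username, password, or home address.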

The Good

  • Let's start with a nice bright example of someone who respects the privacy of its website visitors.
    It is written in plain language, gives a quick blurb on what it is being used for, and allows the user the choice to accept all the cookies, deny all of them, or do something in between (which leads to a more itemized list you can enable item by item).
  • The next one, from one of the European Union's official websites, is not as nice but at least they are trying.
    Why am I not impressed with their banner? Because it is all-or-nothing, without a proper explanation, and mentions these "essential cookies" (is this like "essential oils"?) without explaining them. Yes, if you click the link explaining how they use the cookies, you realize they are not out to suck you dry of your private info, which is why it is listed here. But I think they could do a better job given the resources they have.

The Bad

This list is but a tiny sample of my fun collection. Still, get the popcorn.

  • First we will start with one that is on a slippery slope as far as GDPR is concerned. It mentions sharing collected data with "trusted third parties." Who are they? Google Analytics? We have talked before about how you can no longer use it on a site that is accessed by European residents.
  • We really should just get serious and look at an example of conning the user. For convenience, I highlighted the relevant wording in their privacy note.
    First we have "This information might be about you" (red), which uses the word "might" to imply it is OK because maybe the information is not really about you. Well, knowing your IP (considered personal data by GDPR), OS, browser, and other facts we will not go over here (username?) suffices to uniquely identify you. If you use the same computer later without bothering to run a VPN, they will know you are back... especially from home, as your external/public IP rarely changes, if at all. But then they smother your worries by claiming that "the information does not usually directly identify you" (blue). It is personal data already, sunshine.
  • Here is one from a bank that prides itself on having branches in many countries across the world.
    At first I thought the following cookie banner was just for the American market, but when connecting from Japan and Europe I was still "welcomed" by the very same banner; I do not need to say what that means. I have a ton of other examples following the same pattern, but I think we only need one to get the idea.
  • This one, seen on the website of a professional society, is a variation of the bank banner we saw earlier. I would not have posted it if it did not have one single word: consent.

    I must assume this specific term was used because of the language in GDPR; specifically, Article 7 states that if you do not have a legal reason to collect personal data, you must obtain consent from the user, who must give it freely. They seem to believe that by having the word "consent" in the banner, they satisfied this GDPR article. However, if the only option is to surrender your private data, this consent is not freely given. Nor can it be easily revoked.

    "But," one can argue, "you did not consider that they are probably an American-based society which does not cross the CCPA requirements by keeping the number of Californian households under the limit." How would that work? Geolocating may be hard: one of the VPN services I use has servers in California; there might be other services with servers somewhere else in the US being used by Californian citizens. Given the banner you are seeing, how would you distinguish the two cases? And besides, if this is an international (they hope they are, as one of the letters in their name stands for that) professional society, GDPR, LGPD, and APPI, just to name a few, are bound to be triggered. I did my Western Europe test, and it did not switch to a GDPR-compliant cookie banner.

The Sleazy

Now we get to the really special ones, the ones that decided laughing at the privacy rights of individuals was not enough; they had to make a point.

  • The first jewel is what I call a BannerWall: you cannot use the website until you click on the only option ("Accept"), so site owners can then say, "Here! The user consented to us collecting all personal info. We have the log showing the Accept button was clicked!" Hopefully you do not need to use this site, so you can just close your browser and find some other place with similar information but more privacy conscious.
    Looking at the screen capture, do you know if "Privacy Policy" and "Terms of Service" are links? No? You are not alone. Can you say hiding in plain sight?
  • But, what if you have to use the website? For instance, what if you need to log into the site to pay your utilities or rent, and they do not offer another way (mail or in person) to make said payment? Can you say coercion?

Don't Be That Guy

  • Instead of having your site collect personal data based on the location of the site visitor, assume they are all coming from the EU and build it for that, as it is one of the more restrictive regimes. Make your life easier, whether your website is a commercial one or an educational/research one; we covered that a while ago.
  • What is wrong with asking users if it is OK to collect their data and telling them how you are going to use it, without vague words? And by that I mean ask properly, not like the no-real-option banners seen in some of the examples above.
  • Document everything, logs included, because the world is changing and you may be audited or even fined for non-compliance. Remember, you do not need to have suffered a personal data breach before a GDPR Data Protection Authority takes legal action against you. Don't believe me? We commented on some cases earlier this year. All that is needed to get that avalanche running is for someone to file a complaint.

Wednesday, August 24, 2022

Measuring company reputation

One of the bullet points in the (ISC)2 Security Domain 1 (security and risk management) is risk analysis (yes, you with the beard in the back row, that would be under NIST 800-53r5 Security Domain 14). There are many ways to define it, but I will be lazy and steal the definition from NIST 800-160 because it is short and to the point:

Risk Analysis is the process to comprehend the nature of risk and to determine the level of risk.

We can subdivide this analysis into two groups based on the criteria we use in the decision process: quantitative and qualitative analysis. Without going over the details, the bottom line is that a lot of people ignore qualitative analysis because it does not directly tie into money: how can you ask executives for funding if you cannot provide a proper cost-benefit analysis? For instance, if you are asked to measure and tie to the yearly budget, say, your company's reputation (a topic picked out of the blue which has absolutely nothing to do with the title of this article), what would you do? After all, this is the typical topic qualitative risk analysis is built for.

The answer is we can quantify it if we look at it in a non-direct way. If you think about it, company reputation can be "itemized" by the things that affect it:

  • Your cyber insurance, which is affected by how good the insurers think you are at protecting your assets. So you can say, "since we have not been breached in X years and we have a great security policy which is enforced and audited, our insurance is lower than our competitors'." Can you see how close this narrative now is to that associated with the Annualized Loss Expectancy (ALE)? You may be able to ask the insurers to explain how a recent loss of personal data will affect the premium. There are no guarantees they will talk, but there is a compelling argument to work together to decrease their risk.
  • Customer confidence, which is affected by how many data breaches you have had, how you handled them, and how you deal with customers' data. This can be estimated by investigating the decrease in sales of other companies due to loss of personal data, including credit card info. People vote with their wallets, and with their letters to elected officials.
  • Your suppliers' confidence in you, which determines whether they will provide you with discounts, lower interest, and longer times to pay your orders. If they do not trust you, they may say every bill is due on receipt. That affects cash flow in a very definite way.
Each of these, at the end of the day, affects the bottom line ($), which is what matters to upper management.
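Since we brought up the Annualized Loss Expectancy, here is the textbook arithmetic: SLE = asset value x exposure factor, and ALE = SLE x ARO (the expected number of incidents per year). The dollar figures below are made up purely for illustration.

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: cost of one incident = asset value times the fraction lost."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE times the Annualized Rate of Occurrence."""
    return sle * aro

# Made-up numbers: a breach costing 25% of a $2M "reputation" asset,
# expected once every two years (ARO = 0.5).
sle = single_loss_expectancy(2_000_000, 0.25)   # $500,000 per incident
ale = annualized_loss_expectancy(sle, 0.5)      # $250,000 per year
print(f"SLE=${sle:,.0f}  ALE=${ale:,.0f}")
```

This is exactly the kind of number you can put next to an insurance premium or a security budget line when talking to upper management.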

Sunday, July 31, 2022

Phone Privacy at DEFCON 30!

So our workshop on smart (I will keep a straight face here, just saying) phone privacy was accepted by the Crypto and Privacy Village at DEFCON 30. If you are there, we will be presenting it on Saturday, Aug 13th. As it will be only one hour, we strongly recommend first following the instructions in the co-author's GitHub-based wiki; this link is also in the official DEFCON announcement, but it is so important we would rather mention it a few times.

So, what is it all about?

Short version: how to make your smart phone more private and why you should care. I could elaborate on that, but this post is not about the contents of the workshop: go watch it and find out!

Anything useful you want to tell us?

People have told me I have some kind of fixation with bullet points; let's not disappoint them, shall we?

  • No pictures will be taken with my phone; I will be bringing a camera -- ancient but trusty Canon ELF -- to take some pictures of the event. Yes, compared to modern smart phones its resolution is pathetic. But it has a real zoom, using real lenses, has no understanding of wireless file transfer (great during DEFCON), and does not keep you up at night because the vendor stopped creating patches for it. As this is a real camera, not a smart phone, the pictures will not be posted in real time.
  • I was comparing our abstract with the other presenters' and realized ours is gigantic by comparison! This is not a size competition, and I realized it may end up being a bit of a turnoff. But there is some logic behind the madness: we really wanted to make sure people knew what to expect and that they need to prepare for the workshop. Which leads to...
  • The "talk" part of this workshop will be rather short because the main dish is the hands-on part.
  • If you want to get your hands dirty, bring an Android phone. Its two main requirements are
    • A phone you are fine with if it gets bricked. That can happen. And you can find out if it does brick before attending the event, because we put the setup instructions in the wiki.
    • Ideally, you want a phone such as a Google Pixel (3 and above), OnePlus, or Fairphone. The main reason is that a lot of Android phones have a closed-source "blob" of code that is only updated for a brief period of time (a year? A week?), not long after replacements hit the shelves. However, we are not saying "for the best experience you should have bought the latest $1000 phone" (bonus points if you know where I took that from). We do think everyone should be able to strive for a privacy-focused phone (sounds like a tag line for a product, eh?). In fact, we will have a Pixel 4 to show things, but a Pixel 3 will work just fine and can be found for around $50 if you look hard enough. When I checked this morning, a used Pixel 4 was hovering around $100.
    • FYI, I have issues with the Google Pixel phones, primarily how hard they are to repair.
  • I would love it if we could make the phone fully private from a GDPR (we tend to mention it a lot in this blog, no?) standpoint, but that won't happen. Compounding that, some countries do not take your efforts to protect the privacy of your phone very kindly.
  • I really would like to thank the Crypto and Privacy Village for having us. This may sound like the typical fake message you associate with Facebook and LinkedIn, but for a change it is real. One of the hints is that I am not starting this thread with "I am excited that"; the truth is that we have been working hard and long hours on this, and the CPV crowd has put up with all of our stupid questions and rewrites and whatnot. And they have not tried to strangle us!

Dude, I have an iPhone! What should I do?

Dude, I have no clue; I do not have an iPhone to research on!

Saturday, July 30, 2022

The private life of a privacy screen

Let's say you have a laptop which you take to libraries, coffee places, and other public locations to get fresh air and inspiration while you write away a new article or piece of code. How do you keep what you are doing to yourself?

You in the corner who said "VPN" (when you think aloud, you do think aloud), you are right. That helps with the network connection. But what about keeping away the prying and curious eyes of the other customers of the establishment you are in? Yes, this time the answer is the privacy screen, which has not only been around for decades but is also the name of this post.

How good is a privacy screen

Some are really useless. I remember one when I was in college that was so bad the person using the computer could barely see what she was doing. It was just a step above bolting a steel plate to the front of the monitor; I guess if the user cannot see what they are doing, the same goes for the potential attacker, who then has to rely on keylogging and scanning the screen contents using software.

Others work well enough to be useful within some limitations. Case in point is the one I will be test driving today. Its brand is... well, I have no idea. I found it beside the trash can in an office once. It is one of the common polarized ones and had no scratches nor too many fingerprints on its surface. As it was larger than the (old) laptop monitor I wanted to use it on, I grabbed it. And then cut it to size and secured it using Scotch tape (I am calling the brand out here because that is the roll I have).

It is one of those garden-variety polarized screens, which blocks the light if you move too far from being perpendicular to it. How far must you move from looking straight at it before the privacy part of the privacy screen is "engaged"? It depends on the make. Let's see how it works by simulating the kind of situation that can happen anywhere.

  • Here is a picture of it installed on the test laptop, which is currently set up to replicate that of Mort Villanous, an aspiring supervillain who is in some public library writing his current world domination plot. In fact, this would be the point of view of our evildoer in-the-making. Note the tape on the corners of the privacy screen.

    From his point of view, he can clearly see the screen and, as a result, work on his important and secret document. The eagle-eyed members of the audience may have seen my exclusive and expensive camera cover; I will try to provide a link to it later on. But if you have to ask how much, you can't afford it.

  • Next let's pretend we are Tom Goodfellow, a secret agent tasked with observing what villanous things our villanous villain, Villanous, is up to. Wearing his trademark 30-gallon white hat, chaps, and 7 Gold Chains of Virtue, he discreetly approaches Mort from the right. This is what Tom sees.
    From his current point of view, the laptop looks as if it is turned off, as the surrounding background is reflected on its black screen. That won't do.
  • Knowing Mort has not noticed him yet, Tom heroically slides a bit closer to the aspiring villain. This time the privacy screen proves no match for the hero's eyes: at this angle it exposes a hint of an evil deed in the making, namely a document open and being worked on. He can see there are words written using different font sizes, but he still can't read them. These clues tell Tom he is dealing with a polarized privacy screen!
  • Emboldened with confidence and knowledge of how this kind of screen works, our hero inches even closer to the villain. And he is rewarded by finally being able to begin reading the contents of the document!
    Unfortunately, the secret agent made the typical hero's mistake. Being a bit myopic, he leaned too much towards the computer. As a result, Mort Villanous not only heard the gentle clanking of the secret agent's gold chains as they touched the table, but also felt them crushing his arm. Aware now of the presence of his enemy, Mort immediately closed the laptop, shouted "do you mind?" ignoring proper library etiquette, and walked away.

Moral of the Story

Whether you are plotting to rule the world, or just trying to read email in peace at a public location, getting a privacy screen is not a bad idea. However, test it first to see how large its "non-private" region is so you can plan where you will be sitting and what will be behind you.
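If you want to turn "plan where you will be sitting" into numbers, a back-of-the-envelope viewing-angle check helps. The sketch below is illustrative only: the 30-degree cutoff is a hypothetical figure in the ballpark of what micro-louver filters advertise, not something measured from the trash-can screen in this post, and the function name is mine.

```python
import math

def shoulder_surfer_can_read(lateral_offset_m, distance_m, cutoff_deg=30.0):
    """Rough check: can someone at this position still read the screen?

    Assumes (hypothetically) that the filter darkens beyond cutoff_deg
    off-axis; check the spec of your actual screen, cheap films vary.
    lateral_offset_m: how far sideways the onlooker is, in meters.
    distance_m: how far in front of the screen they are, in meters.
    """
    # Angle between the screen's normal and the line to the onlooker.
    angle = math.degrees(math.atan2(lateral_offset_m, distance_m))
    return angle < cutoff_deg

# A Tom Goodfellow one seat over (0.6 m sideways, 0.5 m out) is at
# roughly 50 degrees off-axis, past the hypothetical cutoff:
print(shoulder_surfer_can_read(0.6, 0.5))  # prints False
```

In other words, the danger zone is a cone in front of the screen; someone almost directly behind you sees everything, so what is behind you matters as much as who is beside you.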