Changes to WhatsApp and what privacy means for businesses

It was hard to miss the fact that WhatsApp announced changes to their Terms of Use and Privacy Policy recently. Just about every news outlet carried an alarming headline warning that your private information and (depending on who you ask) your messages were about to be shared with WhatsApp’s parent company, Facebook (maybe you’ve heard of it?).

Much of the immediate concern is potentially unfounded – WhatsApp hasn’t said they’re going to share your messages with Facebook, nor have they dropped their commitment to the so-called ‘end-to-end encryption’ of those messages. However, there are several valid concerns with these new changes, particularly when they’re considered as a part of the overall data privacy landscape.

In this post, I’ll examine some of these concerns, as well as the larger landscape and what, if anything, this means for businesses.

What were the changes?

The detail of the changes is a lot smaller than some media outlets would have you think. WhatsApp’s own summary of changes states that they cover three areas:

  • Additional Information On How We Handle Your Data
  • Better Communication With Businesses
  • Making It Easier To Connect

What’s more, those of us lucky enough to be in WhatsApp’s ‘EEA region’ only get the first two updates – the third, and most troubling for many, doesn’t apply to the EEA.

Should users be concerned?

In the wake of the popular reaction, WhatsApp has scrambled to remind us of their commitment to the encryption of our messages, and posted an infographic to Twitter stating everything they don’t do. However, I do think there is cause for concern for any user conscious of their privacy and security online, for several reasons.

Firstly, while WhatsApp’s messages and calls have been end-to-end encrypted (meaning that only you and the recipient of a message have a readable copy of that message) since 2016, we only have the company’s word that this is the case. Personally, I would like to believe this – I’m a big fan of the ‘Signal protocol’ which they implemented in 2016 – but there are multiple accounts online from people who would swear blind that they’ve received ads or friend suggestions on Facebook as a result of discussions they had on WhatsApp.
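To make the end-to-end idea concrete, here is a deliberately simplified Python sketch. This is purely illustrative – it is not WhatsApp’s actual scheme (which uses the Signal protocol) and the toy XOR keystream below is not secure cryptography. The point it demonstrates is structural: only the two endpoints hold the key, so a relay server in the middle only ever sees ciphertext.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream from repeated hashing (illustration only, NOT secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

# Alice and Bob share a key; the relay server never learns it
shared_key = secrets.token_bytes(32)
message = b"Meet at noon"

ciphertext = encrypt(shared_key, message)  # this is all the server relays or stores
assert ciphertext != message               # the server sees only gibberish
assert decrypt(shared_key, ciphertext) == message  # only the endpoints can read it
```

The trust question in the paragraph above maps directly onto this sketch: end-to-end encryption holds only if the key genuinely never leaves the endpoints – and that is precisely the part we have to take on the company’s word.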

Secondly, there is the question of what WhatsApp can and cannot do (versus what they do and do not do).

In 2018, as GDPR came into force across the EU, Facebook moved all non-European users away from their existing jurisdiction in Ireland, to the jurisdiction of their Californian entity, creating disparate sets of terms. This should sound familiar – it’s very similar to WhatsApp’s dual-region approach, above. The company insisted at the time that they would continue to apply the same privacy protections everywhere despite this change, but doing so allows them the space to treat the two sets of users differently. As of December 2020, Facebook stated that they were moving responsibility for all UK users to their Californian entity, in response to Brexit. It’s not hard to see why they’d want to do this; UK law will continue to track GDPR for the immediate future, but making this change allows Facebook to easily exempt users from the technical GDPR controls they have in place, and be ready to exploit any change (or loophole) in the UK legislation.

Alongside this, Facebook publicly stated in 2019 that they’re working to unify the ecosystems of WhatsApp, Facebook Messenger, and Instagram. The iOS App Store privacy labels show us that the self-stated data each of these apps collects is very different, so how long WhatsApp’s list will remain as short as it is remains to be seen.


Facebook’s track record

It’s all well and good insisting that messages remain secret, that UK users will stay under the EEA region terms, and that WhatsApp (or Facebook) care about your privacy. However, Facebook’s track record in such matters isn’t pretty.

In 2018, the introduction of GDPR meant that Facebook had to collect updated consent from a huge number of its users. TechCrunch reported on this process at the time, and found multiple ways in which it was clear that user safety and privacy weren’t exactly Facebook’s number one concern, in a ‘following the letter not the spirit’ approach to GDPR:

'with a design that encourages rapidly hitting the "Agree" button, a lack of granular controls, a laughably cheatable parental consent request for teens and an aesthetic overhaul of Download Your Information that doesn’t make it any easier to switch social networks, Facebook shows it’s still hungry for your data.'

Facebook and its CEO Mark Zuckerberg have not always been entirely honest about how they handle users’ data. In 2019, TechCrunch reported that Facebook had admitted to a US Senator that 'reporting around this project was not entirely accurate' when they had responded to a media furore over their reported collection of data relating to thousands of teenaged users:

'At the time we ended the Facebook Research App on Apple’s iOS platform, less than 5% of the people sharing data with us through this program were teens. The analysis shows that the number is about 18% when you look at the complete lifetime of the program and also add people who had become inactive and uninstalled the app.'

In 2018, the Guardian ran a lengthy and fairly blunt opinion piece, slamming Facebook for its consistent disregard for the privacy of its users. The piece argued that Facebook has a vested interest in continuing to erode user privacy in order to keep growing its platform, and in spreading the liability for that erosion far and wide before anyone takes notice.

In researching this post, I came across an example of Facebook’s so-called ‘privacy settings’ being downright inaccurate: an interesting Medium post detailing a way to discover a Facebook user’s name and profile image using their email address or phone number – even when their account settings supposedly prohibit people from doing so. When challenged on this, Facebook responded that the 'attack' was in fact intended functionality and that they had no intention of altering the behaviour.

Every individual has to draw their own conclusions and decide their own level of comfort. After all, what’s the point in 'quitting WhatsApp' if you’re going to continue to put all of your personal data on Facebook or Instagram? With the examples above in mind, I personally question whether Facebook – and WhatsApp and Instagram – can really be trusted with my personal data, and to do what they say with it.

When it’s free, you are the product

Fundamentally, the storm over WhatsApp’s changes and the examples given above are part of a larger conversation. It’s become clear over the last few years that we, as users, have to take responsibility for our own data privacy, and how much exposure we’re willing to risk. Services like Facebook, Gmail, and many others offer us fantastic products and services in exchange for the data we provide. Let the Waze GPS app (owned by Google) know the location of your car and your destination, and you’ll get real-time traffic information and accident reports garnered from other Waze users doing the same thing. Give Spotify your email address and all of your music tastes, and it will recommend other music you might like and help you stay up to date with the latest releases. Let Google Photos scan your images and perform facial recognition on them, and you get a well-organised photo album, backed up online, searchable by location, time, and even the person or objects in the photos.

As long ago as 2010, Bruce Schneier stated at RSA Europe:

'Don’t make the mistake of thinking you’re Facebook’s customer, you’re not – you’re the product. Its customers are the advertisers.'

In giving away some data, we get convenience or utility. This trade-off is present in virtually every 'free' product we use online today – but users of such services should assess how comfortable they are with the particular trade-off being made. Businesses must make money, and we should consider the business model of a service that we choose to use. It may be that you feel you have nothing to hide, but there are plenty of examples of freely-available data being used for purposes other than those the users intended.

Would you rather give your contacts and private messages to WhatsApp, who are ad-supported and owned by Facebook, or Signal, the not-for-profit foundation funded by donations? Or perhaps you’d prefer Threema, another privacy-focused messenger which charges a small fee for the app and is funded by this and their ‘for business’ service. With email, should you choose Google Mail, hosted by the largest online advertising company in the world; Microsoft Outlook, hosted by a company who make their money selling games consoles, cloud services and office software; or something like ProtonMail, who provide both free and paid plans and bet on users caring enough about privacy to upgrade to the paid versions? We can only make our own decisions on these questions, and check periodically whether we’re still happy with the state of our chosen provider.

What does this mean for business?

Does where your employees choose to share their data impact your business? In most cases, probably not much. However, it’s worth considering the potential impact on hackers’ ability to conduct email phishing or social engineering attacks on you and your employees, as well as thinking about the impact of data privacy compromises on an individual’s wellbeing outside of work.

If you want to do something about it, the obvious answer is to include guidance in corporate policies – whether that’s encouraging good data hygiene in employees’ personal lives, such as password management, or providing lists of approved and non-approved software and services. Employers have a responsibility to look after their employees at work, and as many businesses take up the mantle of personal wellbeing more broadly, perhaps that should extend to support for data privacy issues (and the potential fallout when things go wrong).

If you do decide to mandate or ban the use of certain services, be sure to provide or recommend a sufficient alternative. At Play, we recently trialled a switch from Slack to Discord, but soon realised that although the service was great for end-users, the management and compliance controls weren’t up to scratch for business content. I still hear people saying 'Discord was better', but this all too often masks the larger conversation: is what we have good enough, and if not, why not?

Above all, I believe the best thing employers can do – both to support employees in today’s online world and to look out for their own reputation – is to run sufficient and constructive security awareness programs. Don’t stop at monthly or quarterly PowerPoint decks of stock content (although, with the right content and a healthy surrounding discussion, these can be valuable): be more proactive. Write blogs, short updates or announcements on relevant and newsworthy topics in privacy and security; recommend podcasts or news websites where employees can find an informed and useful take on such subjects; and help your employees to be good cyber citizens. Good communication is paramount in ensuring that we as businesses are confident about the protection of data and our people's online safety.

By Alastair McFarlane – Head of Operations at Play
