Today marks 15 years since the service then known as TheFacebook was launched. This could be the time to trot out a list of all of Facebook’s accomplishments. But as the company reaches its teenage years, the most important thing is not what Facebook has accomplished, but how the environment it’s operating in has changed.
Facebook’s next 15 years are going to look very different from its first 15, because it’s no longer a scrappy startup but one of the largest communication platforms in the world. The world no longer looks with amazement at companies like Facebook that let you connect with anyone, anywhere. Instead, people wonder with a sense of dread how those platforms might turn their family and friends into conspiracy theorists, or how long it will be before trolls bombard them with hate speech.
Five years ago, the last time Facebook had a significant birthday, the company had proven to Wall Street that it was successfully transitioning to mobile, after having its first quarter where mobile advertising revenue topped $1 billion. It had also recently launched its Internet.org program (now known as Free Basics) to bring internet access to more developing countries, with a paper entitled “Is Connectivity a Human Right?” Positioned as a humanitarian effort, commentary about how it was also a way for Facebook to make more money by getting more people using its service was relegated only to a paragraph or two in press write-ups of the initiative. Facebook extended its reach with pricey acquisitions of WhatsApp and Oculus.
And for the next few years, Facebook continued to fly high. Sure, some users may have been a little freaked out upon learning in 2014 that Facebook had secretly ensured some people saw more negative or positive posts as part of a research project examining the service’s emotional impact, or that apps built on Facebook’s platform could collect your data via your friends. But it was nothing an apology and a pledge to do better couldn’t fix. Most of these news stories barely registered on the average Facebook user’s radar, if at all.
That started to change in 2016. The year prior, Facebook had taken a greater role in distributing news content through Instant Articles and Trending Topics, as a growing portion of Facebook users reported not just sharing photos and status updates on Facebook, but also getting their daily dose of news through the platform. The company’s foray into news came just in time for the U.S. presidential election, which proved to be one of the most divisive in recent memory.
Ask many U.S. Facebook users, and they could likely pinpoint a moment in the run-up to the 2016 election when one of their Facebook friends shared a fake news story that was obviously fake to them — such as the claim that Donald Trump had been endorsed by Pope Francis, or that Hillary Clinton was on her deathbed — but clearly not to their friends. Or a moment when arguments about which candidate to support broke out in the comments on one of their posts.
Beyond the anecdotes, numerous articles found that pages and accounts dedicated to spreading hyperpartisan fake news were becoming more numerous and more active on Facebook, Twitter, and YouTube. Sometimes these fake news articles were spread even further by Facebook itself: its algorithms would insert fake news stories into Trending Topics, a mistake made with increasing frequency after Facebook fired Trending Topics’ human editors over claims that they were suppressing conservative news stories.
It gave Facebook users all the more reason to take a break from the service or reconsider their use of it altogether. Then the Republican candidate, Donald Trump, was elected president. Given that research has shown conservative voters in the U.S. are more likely to share fake news, the question naturally arose of whether fake news played a role in electing President Trump. While fake news spread on Facebook, Twitter, and YouTube alike, Facebook became most associated with it because it had the most users.
Zuckerberg tried to pump the brakes on that idea pretty quickly. Speaking at a tech conference held just days after the election, he called the notion that fake news shared on Facebook had influenced the election in any way a “pretty crazy idea,” adding that “voters make decisions based on their lived experience.” It was the real-life version of the “nothing to see here” GIF. For users who saw their family and friends spend more and more time on Facebook sharing fake news, what happened on Facebook was their lived experience. Zuckerberg quickly walked back his comments, but the damage was done — it was now crystal clear to Facebook users that the company either wasn’t tuned into what was actually happening on its platform, or was trying to ignore it.
And it wasn’t just in the U.S. that it was happening. In the period leading up to the U.K.’s Brexit referendum, or the Philippines’ election of hardliner Rodrigo Duterte, or most recently, Brazil’s election of Jair Bolsonaro, users turned to Facebook or its other apps — Instagram, WhatsApp — to either share hyperpartisan political news, or swat down fake news. Some fake news was spread on Facebook not just to encourage users to vote for a particular candidate, but to serve as the pretext for ethnic cleansing, as was the case in Myanmar.
Fears about fake news in the U.S. became more pronounced when it was revealed that one of the purveyors of fake news was the Russian troll farm known as the Internet Research Agency, which spread its stories through fake profiles that falsely claimed to be run by people in the U.S.
But Facebook’s biggest scandal in the U.S. to date came when it was revealed in March 2018 that Facebook had failed to stop Cambridge Analytica — a data analytics firm employed by the Trump campaign — from improperly obtaining data on nearly 87 million Facebook users. Cambridge Analytica then used that data to create psychological profiles of U.S. voters, and used those profiles to target ads to them on Facebook.
It crystallized for many Facebook users how the data they shared on Facebook could be used to create ads with the express purpose of manipulating them. And it showed that this could have real-world consequences — in this case, possibly contributing to the election of President Trump (though the effectiveness of Cambridge Analytica’s ad targeting is highly disputed).
It’s hard to overstate how great an impact the Cambridge Analytica saga has had on Facebook’s activities from March 2018 until now. It prompted Mark Zuckerberg to testify in front of both chambers of Congress for the first time. It got the company to conduct a widespread audit of developers on its platforms, and to commit to building a button that would let users clear the data associated with their account. And it increased the sense of urgency with which Facebook moved to implement other measures to fight fake news, like rolling out identity verification for political advertisers, and working with Twitter, Google, and other platforms to spot and remove foreign influence operations more quickly.
So have the last few years of events left Facebook more vulnerable to losing users and advertisers? Not if its last earnings call is any indication. Facebook celebrated a record $16.9 billion in quarterly revenue and, despite flat growth in most of North America and Europe, continues to see its user count climb — to 2.32 billion monthly active users and 1.52 billion daily active users.
Some have expressed surprise that Facebook hasn’t seen its user count dip more after years of news about how its service is being used to spread propaganda and suck up users’ data. But it’s unrealistic to expect a service with more than a billion users to shed hundreds of millions of them in a quarter, or even a year.
It will be hard to predict what Facebook’s next 15 years will look like until we know how more countries around the world will start to regulate Facebook and other social platforms — whether that’s through data privacy laws like GDPR, or laws like Germany’s, which will require Facebook to invest more resources in deleting hate speech.
But what’s clear as Facebook turns 15 is that users have more reason than ever to be hesitant about staying on the platform. The longer anyone uses a particular service, the more likely they are to have a bad experience that makes them reconsider whether it’s worth continuing to use. But users are also more attuned than ever to the dark side of social media. They no longer see it as just a fun tool for broadcasting their thoughts to the world and sharing photos with family and friends, but as a tool that can distort their sense of what’s real and what’s fake. They might not ditch the service tomorrow — but the thought of doing so is firmly ingrained in the back of their minds.
What that means is that Facebook can no longer acquire a new messaging app, unveil tech that lets you hear through your skin, or build an app that collects all of a user’s smartphone activity for market research purposes without lawmakers and users at best expressing fleeting skepticism about it and at worst being creeped out by it. I don’t think that’s a bad thing. As the last five years have shown, what happens on Facebook has real-world consequences. And everyone — from Wall Street to Silicon Valley — would do better to think more about those consequences, rather than just celebrate when Facebook hits a new user or revenue milestone.