
How Facebook became a hub of propaganda and misinformation in 2022 due to millions of fake profiles without any verification controls


When it comes to distributing fake news, Facebook is the worst offender. It’s even worse than Google. It’s even worse than Twitter. And it’s even worse than webmail services like AOL, Yahoo!, and Gmail.

That is the conclusion of a study published in Nature Human Behaviour.

In the run-up to the 2016 presidential election, a team of academics led by Princeton University’s Andrew Guess monitored the internet usage of over 3,000 Americans. They discovered that Facebook was the referrer site for untrustworthy news sources 15% of the time. By contrast, Facebook preceded visits to authoritative news sources only 6% of the time.

“This pattern of differential Facebook visits immediately preceding untrustworthy website visits is not evident for Google (3.3 percent untrustworthy news against 6.2 percent hard news) or Twitter (1% untrustworthy versus 1.5 percent hard news),” the authors write.

What impact do fake news websites have on people’s political opinions and voting decisions? The authors admit that estimating this is difficult, but they believe the impact is smaller than is usually assumed.

For one thing, they point out that changing a voter’s mind is a difficult task. According to one estimate, only 1-3 people out of 10,000 will change their vote choice as a result of seeing a political commercial on television. Instead, people seeking out fake news on Facebook and other digital platforms are most likely doing so to reaffirm beliefs and opinions they already hold.

Additionally, the researchers discovered that while a considerable share of Americans (44.3%) visited at least one untrustworthy news site during the final weeks of the 2016 presidential campaign, this did not displace their consumption of hard news. As the researchers wrote, “those that read the most hard news also consume the most information from untrustworthy sources – in other words, they appear to be complements, not substitutes.”

However, the researchers discovered that Trump supporters were considerably more likely than Clinton supporters to visit untrustworthy news sites: over 57% of Trump supporters read at least one fake news piece in the month leading up to the 2016 election, compared to only 28% of Clinton supporters. In addition, older Americans were more inclined to browse shady news websites.

The observed “stickiness” of fake news websites is perhaps the most concerning aspect. According to the researchers, users spend an average of 64 seconds reading fake news pieces versus only 42 seconds reading real news stories.

More research is needed to determine the extent to which fake news can affect public opinion. Until then, the scholars have come to the following conclusion:

“Our findings on the association between untrustworthy website usage and voter turnout and vote choice are statistically imprecise; we can only rule out very big effects,” the authors write.

However, it is undeniable that Facebook served as a “key vector of distribution for dubious websites.”

What’s going on right now isn’t anything new. It’s part of a long and challenging history, and it sheds light on a range of social, economic, cultural, technological, and political dynamics that aren’t easily solved. While rushing to put Band-Aids in place may feel good, I’m concerned that this strategy may create a distraction while allowing the underlying issues to fester.

Let’s start with a typical “fix” I’ve heard proposed: force Facebook and Google to “address” the problem by identifying and blocking “fake news” from spreading. Though I understand the frustration with technology companies’ ability to reflect and exaggerate long-standing social dynamics, regulating or forcing them to come up with a silver-bullet answer isn’t going to work.

From my perspective, this approach immediately reveals three difficulties of varying scales:

  1. Even though an enormous number of words has been spent describing “fake news,” no one can agree on a definition.
  2. People don’t appear to grasp the problem’s changing nature, how manipulation evolves, or how the solutions they provide might be misappropriated by others they genuinely disagree with.
  3. No amount of “fixing” Facebook or Google will solve the underlying reasons influencing America’s present culture and information conflicts.

What exactly is “Fake News”?


I’m not going to try to come up with a perfect definition of “fake news,” but I want to draw attention to the intertwined tropes at work. This framing is used discursively to highlight problematic content, including overtly and accidentally erroneous information, sensational and fear-mongering headlines, harsh and incendiary speech expressed in blogs, and propaganda of all shades (driven by both the State and other interests).

I’ve observed the use of such nebulous terms (bullying, online community, social networks, and so on) for a variety of political and economic purposes, and I’ve consistently found that without a precise definition or a clearly articulated problem, all that is achieved is spectacle, inciting conversations about the dangers of XYZ.

I see “fake news” being used as a new framing device to advance long-standing objectives and commitments. This is true for both researchers who have long criticized corporate power and conservative pundits who love to use this paradigm to justify their contempt for the mainstream media. As a result, dozens of meetings on “fake news” are being convened as people wring their hands for a solution; in the meantime, commentators and advocates of all shades are calling on companies to remedy the problem without even attempting to define it. Some people are fixated on “accuracy” and “truth,” while others are more interested in how content shapes cultural frames.

Meanwhile, inside internet platform firms, people are battling to design content policies that can be applied consistently. I’m constantly surprised by how divided people are over what should and shouldn’t be prohibited under the banner of “fake news” – and I’m generally talking to specialists.

Opening up the process to the public doesn’t help either. When the public is encouraged to report “fake news,” men’s rights activists accuse feminist blog posts criticizing patriarchy of being “false.” Teenagers and trolls flag almost everything.

Finding a neutral third party isn’t any better. Even experts can’t agree on what separates a hate group from protected speech. (I like the SPLC list, but that reflects my political leanings. And even that list does not include all of the hate groups that progressives might want to identify.) Just ask people how they feel about banning tabloid journalism or Breitbart, and you’ll get many different answers.

Although much of the focus in the “fake news” debate is on widely disseminated and outright ridiculous content, much of the most harmful content isn’t in your face. It isn’t broadly disseminated, and it isn’t shared by people who are forwarding it in order to object to it. It is subtle content that is factually accurate but skewed in presentation and framing, prompting people to draw dangerous conclusions that aren’t directly stated in the content itself. That’s the power of provocative speech: it makes people think by letting them connect the dots rather than pushing a conclusion down their throats. That kind of content is significantly more persuasive than information claiming that UFOs have landed in Arizona.

As we are distracted by content created for financial benefit (which is often forwarded more by those who are shocked by its inaccuracy than by those who believe it), innumerable actors are developing the ability to manipulate others through content that is far less easily detected. They’re attempting to hack the attention market, and they’re iterating as people try to block various things. This is why “meme magic” is so powerful: it involves setting specific frames and logic that can be activated by memetic reference, making it increasingly difficult to halt.

Inappropriate content is increasingly visual rather than text-based, making it even more challenging to understand and resolve because it relies on a wide range of cultural references and symbols. Some might interpret an image as humorous or critical, while others perceive it as reifying and affirming hateful ideas. Look at how the swastika is employed in cartoons to make various political statements: how you already see the swastika strongly shapes how you interpret those images.

How to Recognize Fake News

Facebook claims to be committed to minimizing the spread of fake news on its platform. It deactivates bogus accounts and reduces the financial incentives for people who spread false information. It also uses signals, such as feedback from its community, to spot potentially false stories. In countries where Facebook works with independent third-party fact-checkers, stories those fact-checkers rate as inaccurate appear lower in Feed. If Pages or domains repeatedly create or share false information, Facebook restricts their distribution and removes their advertising privileges. It also uses tools like Related Articles to give readers additional context on stories so they can decide for themselves what to read, trust, and share.
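To make the ranking idea above concrete, here is a minimal sketch of how a fact-check-based demotion could work in principle. The field names, multipliers, and scores are hypothetical illustrations, not Facebook’s actual Feed ranking system.

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    base_score: float              # engagement-based ranking score (hypothetical)
    rated_false: bool = False      # rated inaccurate by a third-party fact-checker
    repeat_offender: bool = False  # Page/domain that repeatedly shares false information

# Hypothetical demotion multipliers; Facebook does not publish its real weights.
FACT_CHECK_DEMOTION = 0.2
REPEAT_OFFENDER_DEMOTION = 0.5

def feed_score(story: Story) -> float:
    """Demote fact-checked stories and repeat offenders so they appear lower in Feed."""
    score = story.base_score
    if story.rated_false:
        score *= FACT_CHECK_DEMOTION
    if story.repeat_offender:
        score *= REPEAT_OFFENDER_DEMOTION
    return score

stories = [
    Story("Local election results certified", base_score=0.8),
    Story("MIRACLE CURE DOCTORS DON'T WANT YOU TO KNOW!!!", base_score=0.9, rated_false=True),
]
for s in sorted(stories, key=feed_score, reverse=True):
    print(f"{feed_score(s):.2f}  {s.title}")
```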

Learn more by watching “Facing Facts,” a short video about Facebook’s fight against misinformation, or visit Inside Feed, a site dedicated to shedding light on Facebook’s people and products.

Here are some things to keep an eye out for when judging whether a story is fake news (a minimal heuristic sketch of a couple of these checks follows the list):
  • Take headlines with a grain of salt. Catchy titles in all caps with exclamation points are standard in fake news reports. If the headline’s astounding allegations sound ridiculous, they probably are.
  • Take a good look at the link. A phony or look-alike URL can be a sign of fake news. Many fake news sites imitate legitimate news sources by changing the URL slightly. You can go to the site and compare the URL to that of the established source.
  • Look into the source. Make sure the story is written by a source you trust with a reputation for accuracy. If the story comes from an unfamiliar organization, check its “About” section to learn more.
  • Keep an eye out for strange formatting. Many fake news websites contain misspellings or have clumsy layouts. If you see any of these indicators, read them carefully.
  • Take a look at the images. Manipulation of photos or videos is standard in fake news reports. The picture may be genuine, but it was taken out of context. You may look up the photo or image online to see where it came from.
  • Examine the dates. Timelines that don’t make sense or event dates that have been changed may be found in fake news reports.
  • Examine the evidence. Verify the author’s sources to ensure they are correct. A lack of proof or reliance on anonymous experts may indicate a fake news report.
  • Examine additional reports. If no other news organization is reporting the same story, it may be a hoax. If the story is reported by multiple sources you trust, it’s more likely to be true.
  • Is the story intended to be funny? False news reports can sometimes be challenging to differentiate from satire or humor. Check to see if the source is recognized for parody and if the facts and tone of the narrative indicate that it’s merely for fun.
  • Some of the stories are blatantly fake. Consider the stories you read critically, and only share information that you believe is reliable.
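As noted before the list, here is a minimal heuristic sketch of two of these checks: sensational headlines and look-alike URLs. The thresholds and the tiny trusted-domain list are invented for illustration; real verification takes far more than these crude signals.

```python
from urllib.parse import urlparse

# Illustrative heuristics only, loosely based on the checklist above.
KNOWN_DOMAINS = {"bbc.com", "nytimes.com", "reuters.com"}  # hypothetical trusted list

def headline_red_flags(headline: str) -> list:
    """Return red flags found in a headline (mostly all caps, multiple exclamation points)."""
    flags = []
    letters = [c for c in headline if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.7:
        flags.append("mostly ALL CAPS")
    if headline.count("!") >= 2:
        flags.append("multiple exclamation points")
    return flags

def url_red_flags(url: str) -> list:
    """Flag look-alike hosts, e.g. 'nytimes.com.co' imitating 'nytimes.com'."""
    flags = []
    host = urlparse(url).netloc.lower()
    for real in KNOWN_DOMAINS:
        if host != real and real in host:
            flags.append(f"looks like an imitation of {real}")
    return flags

print(headline_red_flags("MIRACLE CURE DOCTORS DON'T WANT YOU TO KNOW!!!"))
print(url_red_flags("http://nytimes.com.co/shocking-story"))
```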

Problematic Solutionism


To my dismay, despite increasing pressure on firms to do something — anything — I have yet to see a comprehensive suggestion for which information should be removed and how. It’s simply a matter of “them” doing it. Don’t get me wrong: there are several excellent low-hanging-fruit ways for severing economic ties (although Google killing off AdSense for some sites has prompted other ad networks to step in). And I support suggestions that attempt to eliminate clickbait-style forwarding without reading – this requires people to conduct more research before sharing something based on a title. But they are just rounding errors in the ecosystem, even though some people appear to believe otherwise.

When I was an “ethnographic engineer” at Blogger a decade ago, I spent a lot of time sifting through customer service complaints, sampling blog posts and comments at random, and building modest tools to better understand the young blogosphere and address inappropriate content. I was amazed by the sheer inventiveness of people who managed to manipulate every well-designed feature we deployed, just as I had been in my earlier work on Usenet and my later mapping of Twitter practices – see, for example, the emergence of pro-ana in reaction to efforts to block anorexia content.

When AOL and other services began blocking references to “anorexia,” people who embraced anorexia as a lifestyle began cryptically referring to their friend “Ana” as a coded way of discussing anorexia without triggering the filters. Attempts to prohibit the use of certain words typically result in creative workarounds.

These dynamics have existed for a long time. Many people interested in technology are familiar with the war that arose between spammers and organizations over email, as well as the many interventions that appeared to prevent spam. (Unbeknownst to most decentralization proponents, Google’s email centralization was arguably more effective than any other intervention.) Another conflict that continues to afflict the ecosystem is search engine optimization. (This one has been curtailed primarily by “personalization” algorithms that make bulk targeting less successful, which is surprising for most anti-surveillance activists.)

Part of the current “fake news” discussion is that Google and Facebook have an effective monopoly on online information flows in some areas of society. Because centralized systems have been able to control spam and SEO in specific ways, they should be able to stop this—except that people don’t like the “personalization” solution that developed in 2016 in response to previous complaints about inappropriate content.

Unfortunately, the “fake news” problem space is vastly different from the spam or SEO problem spaces. For starters, the intentions of those wanting to influence content are pretty nebulous. We’d be having a different conversation if it was just about money. Look at how modern product marketing works even in the money-centric world.

Even if the goal were to stop the most egregious lies for financial gain (or even merely deception in business, as the FTC frames it), that conversation wouldn’t be quick or easy – people forget that the spam and SEO arms races took decades to arrive at the current status quo (which is still imperfect but less visible, especially to Gmail users and sophisticated American searchers).

These are global challenges with no appropriate regulatory or rational method for determining what is real and what is not. Whack-a-mole is a high-risk game where the stakes are enormous.

Try drafting a content policy that you believe would be effective. Then consider how enforcing that policy would end up sweeping in acceptable practices. Next, analyze how your opponents might circumvent your policy. This is what I did at Blogger and LiveJournal, and I can’t begin to describe how difficult that task was. I can’t tell you how many photographs I’ve seen that blurred the line between pornography and breastfeeding. These lines aren’t as clear-cut as they appear.

I don’t want to absolve businesses of accountability, since they play a role in this ecosystem. But they will not produce the silver bullet that has been requested, and I believe most of these corporations’ opponents are fooling themselves if they think this is a simple problem for companies to solve.

How to Report Fake News and Misinformation on Facebook, Instagram, Snapchat, and Twitter

If you find misinformation on social media, you can take steps to report it. Although not all social media platforms have established policies for fake news, you should report misinformation that is abusive or damaging. Note that the process for doing so varies by platform:

Facebook – When you see a post on Facebook that contains purposefully misleading information, click the ellipsis in the top-right corner of the post. Select “Find Support or Report Post” from the drop-down menu. To report the post, select “False News” and then “Next.” Similarly, you can click on the ellipsis at the top of the page and pick “Find Support or Report Page” if you encounter a page full of disinformation, such as a Facebook Group that spreads dangerous conspiracy theories. To report the page, select “Scams and Fake Pages” and then “Next.”

Instagram – If you think a post containing false information is spam or abusive, you can report it by clicking the ellipsis (or three vertical dots, if you’re on an Android device) in the top-right corner of the post, then tapping “Report” (on mobile) or “Report inappropriate” (on desktop). To report a user, go to their profile page, click the ellipsis next to their name, then “Report user.” Once you’ve done so, you’ll get on-screen directions for finishing your report.

Snapchat – To report an abusive post on Snapchat, press and hold the screen until a flag appears at the bottom. To report the material, tap the flag. If a user is spreading false information that is harmful to others, press and hold their name; a menu will appear at the bottom of your screen. Tap “More,” then “Report.”

Twitter – To report an account, go to their profile page, click the ellipsis next to their name, and then select “Report user.” After that, you’ll get on-screen directions for finishing your report.

Scams that are common on Facebook


Romance scams: Scammers pretend to be divorced, widowed, or trapped in a bad marriage and send romantic messages to people they don’t know. They form online relationships in the hope of soliciting money for plane tickets or visas. Because they want to earn your trust, the conversations may go on for weeks before they ask for money.

Lottery scams: Lottery scams are frequently perpetrated by accounts or Pages pretending to be someone you know or an organization (such as a government agency or Facebook). The messages will claim that you are one of the lottery winners and that you can collect your money in exchange for a small advance fee.

Loan scams: Scammers send messages and write posts claiming to offer instant loans at low interest rates in exchange for a small advance fee.

Access token theft: You are sent a link that requests access to your Facebook account or Page. Although the link may appear to come from a legitimate app, it is a way for spammers to gain access to your account and spread spam.

Job scams: Scammers use fake or misleading job postings to obtain your personal information or money. Avoid job postings that sound too good to be true or that ask you to pay anything up front.

When clicking on a link from a job offer, be wary of websites that seem unrelated to the original job posting, that request sensitive information (such as a government ID), or that do not use secure (HTTPS) browsing. For more information, see Facebook’s job search guidelines.

Getting Past the Surface

Too many people believe that you can create a robust policy that clearly defines who and what is wrong, put it in place, and voila, problem solved. However, anyone who has dealt with prejudice and intolerance understands how ineffective such band-aid solutions are. They may make the problem invisible for a while, but hate will continue to spread unless the root causes are addressed. Everyone, including businesses, must address the fundamental patterns that are duplicated and amplified by technology.

Scholars and journalists have written about the intersection of intolerance with fear, inequality, instability, and other issues. Although most technologists expected their technologies to be used to bridge divides, this did not happen. But it’s not just about technology. Journalism’s ideals aren’t being realized in its current incarnation either. Even the goals of market-oriented capitalism aren’t being met in its current distorted form, where money can manipulate business (and everything else) for greedy ends.

Part of the fascinating challenge is that we’re all caught up in a broader, terribly dysfunctional system. Sure, some individuals are particularly greedy or malicious in their motives, but the dysfunction is pervasive.

So, how do we direct our collective anger, frustration, and energy in a way that goes beyond Band-Aid fixes? How do we resist the urge to paper over the divides and focus instead on rebuilding social infrastructure and bridging them? Above all, how do we avoid playing into polarization’s hands?

The design imperative that we must prioritize, in my opinion, is to develop social, technical, economic, and political structures that allow people to understand, appreciate, and bridge different points of view. Too much technology and media were designed with the assumption that simply making information available would suffice. We now know this is not the case. So let’s make that aim the focal point of our advocacy and development efforts, and see what we can do if we make it our top priority. Consider what might happen if venture capitalists and investors demanded products and services designed to bridge socio-economic divides.

How can we go beyond the here and now to create the social infrastructure of the future? I’m not sure we have the willpower, but I believe that is part of the issue.

The puzzles revealed by “fake news” are challenging to solve. They are difficult socially and culturally. They force us to think about how people construct knowledge and ideas, connect with others, and build societies.

They’re also messy, exposing deep schisms in views and attitudes, which means they’re not straightforward to design for or solve from a technological standpoint. If we want technical solutions to complex socio-technical problems, we can’t just throw money over the wall and urge companies to mend the problematic parts of society that they helped amplify. To address the issues that we can all agree are broken, we need to work together and form coalitions of organizations that do not share the same political and social objectives. Otherwise, we’ll be waging a culture war with corporations acting as middleman and arbiter. And that seems like a terrible idea.

How does Facebook determine whether an account is fake?

Within the limits of our measurement systems, we believe fake accounts are measured appropriately (as Facebook discloses in its CSER guide and SEC filings). However, while reporting fake accounts is an industry practice – and something Facebook is frequently asked to do – it’s not the best way to look at things:

Simple attacks can inflate the number of fake accounts actioned, even though they don’t represent genuine harm or even a significant risk of harm. Suppose an unsophisticated bad actor tries to launch an attack by creating a hundred million fake accounts, and we remove them as soon as they’re created. In that case, that’s a hundred million fake accounts taken down.

  • However, no one ever saw these accounts, so no harm was done to our users. Because they are removed so quickly, these accounts are never considered active and are not counted as monthly active users.
  • Prevalence is a better way to understand what’s happening on the site because it shows what percentage of active accounts are likely to be fake (a small worked example follows this list).
  • The prevalence number for fake accounts includes both abusive and user-misclassified accounts (a classic example of a user-misclassified account is when a person creates a profile for their pet rather than a Page), but only abusive accounts cause harm.
  • We concentrate our enforcement efforts on abusive accounts to both prevent harm and avoid taking action on accounts that aren’t abusive.
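Here is the worked example promised above: toy numbers, entirely made up, showing why prevalence says more about harm than the raw count of accounts actioned.

```python
# Toy numbers, entirely hypothetical, illustrating prevalence vs. raw removals.

monthly_active_users = 2_000_000_000   # hypothetical MAU
removed_at_signup    = 100_000_000     # crude attack removed within minutes; never counted as active
fake_active_accounts = 80_000_000      # estimated fake accounts that slipped through and became active

# The raw "accounts actioned" number is dominated by the crude attack...
accounts_actioned = removed_at_signup + fake_active_accounts
print(f"accounts actioned:    {accounts_actioned:,}")

# ...but prevalence only counts fake accounts among active users,
# which is what actually reflects potential harm on the platform.
prevalence = fake_active_accounts / monthly_active_users
print(f"estimated prevalence: {prevalence:.0%}")   # about 4%
```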

With that in mind:

We suggest focusing on the enforcement report metrics that are connected to actual content violations, and in the future we’ll look at whether there’s a better approach to reporting fake accounts. Overall, we remain confident that the vast majority of Facebook accounts and activity are authentic.

Fake Accounts: How Do We Enforce and Measure Them?


When it comes to abusive fake accounts, our goal is simple: remove as many as possible while mistakenly removing as few authentic accounts as we can. We do so in three ways, and data from each is included in the Community Standards Enforcement Report to give a complete picture of our efforts:

  1. Blocking fake accounts from being created: The most effective way to fight fake accounts is to stop them from being created in the first place. We have built detection technology that can spot and block accounts even before they are created. Our systems look at a variety of signals to determine whether accounts are being created in bulk from a single location. A simple example is blocking certain IP addresses from accessing our systems at all and, as a result, from creating accounts.

What they measure: The fake-account data in the report does not include failed attempts to create fake accounts that we blocked at this stage. For example, because we block entire IP ranges from even accessing our site, we cannot know how many account-creation attempts we’ve prevented. While these attempts aren’t included in the report, we estimate that these detection systems prevent millions of fake accounts from being created every day.
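As a sketch of the kind of pre-creation check described above, the snippet below flags an IP that creates accounts in bulk within a short window and blocks further sign-ups from it. The threshold, window, and in-memory data structures are invented for illustration; Facebook’s real systems are not public and operate at a very different scale.

```python
from collections import defaultdict, deque
import time

# Illustrative thresholds; not Facebook's real limits.
SIGNUP_LIMIT = 20        # max sign-ups allowed per IP per window
WINDOW_SECONDS = 3600    # one-hour sliding window

signups_by_ip = defaultdict(deque)   # ip -> timestamps of recent sign-ups
blocked_ips = set()

def allow_signup(ip, now=None):
    """Return False if this IP is blocked or exceeds the bulk sign-up threshold."""
    now = time.time() if now is None else now
    if ip in blocked_ips:
        return False
    window = signups_by_ip[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()             # forget sign-ups outside the window
    if len(window) >= SIGNUP_LIMIT:
        blocked_ips.add(ip)          # bulk creation detected: block the IP outright
        return False
    window.append(now)
    return True
```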

  2. Removing accounts as they sign up: Our advanced detection systems look for suspected fake accounts as soon as they sign up by recognizing signals of malicious behaviour. These systems combine signals such as suspicious email address patterns, suspicious actions, and other signals previously associated with fake accounts we’ve removed. Most of the accounts we currently remove are blocked within minutes of their creation, before they can do any harm.

What they measure: The accounts we disable at this stage are included in our fake accounts actioned metric. Changes in our accounts actioned numbers are generally the result of unsophisticated attacks like those we experienced in the last two quarters. Even though they pose little risk to users, these attacks are very easy to spot and can completely dominate our numbers. A spammer, for example, might try to create 1,000,000 accounts in a short period from the same IP address.

Our systems will detect this and swiftly remove the fake accounts. Because these accounts are removed so quickly, they are never considered active and do not contribute to our estimated prevalence of fake accounts among monthly active users, our publicly stated monthly active user figure, or any ad impressions.
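Below is a toy illustration of how a handful of sign-up signals like the ones mentioned above might be combined into a single suspicion score. The features, weights, and threshold are invented for illustration; the real systems are machine-learned models over far more signals.

```python
import re
from dataclasses import dataclass

@dataclass
class SignupEvent:
    email: str
    signups_from_ip_last_hour: int
    matches_removed_account: bool   # e.g. profile photo hash seen on a removed fake account

def fake_score(event: SignupEvent) -> float:
    """Combine a few hand-picked signals into a crude 0-1 suspicion score."""
    score = 0.0
    # Suspicious email pattern, e.g. a long run of digits: user84719301@example.com
    if re.search(r"\d{6,}", event.email.split("@")[0]):
        score += 0.4
    # Many recent sign-ups from the same IP address
    if event.signups_from_ip_last_hour > 10:
        score += 0.4
    # Resemblance to accounts that were already removed as fake
    if event.matches_removed_account:
        score += 0.3
    return min(score, 1.0)

def should_disable(event: SignupEvent, threshold: float = 0.6) -> bool:
    return fake_score(event) >= threshold

print(should_disable(SignupEvent("user84719301@example.com", 50, False)))  # True
```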

  3. Removing accounts that are already on Facebook: Some accounts may slip past the first two defences and still make it onto the platform. This is often because they do not immediately show signs of being fake or malicious, so we give them the benefit of the doubt until they exhibit signs of harmful behaviour. These accounts are found when our detection systems notice suspicious activity or when Facebook users report them to us. We use a variety of signals about how the account was created and is being used to determine whether it is likely to be fake, and we disable those that are.

What they measure: We count the accounts we remove at this stage in our accounts actioned metric, and because these accounts were active on the platform, they are also reflected in our prevalence metric. The prevalence of fake accounts measures how many fake active accounts exist among our monthly active users within a given period. Over 99 percent of the accounts we remove, both at sign-up and those already on the platform, are identified proactively by us before anyone reports them. In the report, we publish that figure as our proactive rate metric.
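To tie the two reporting metrics together, here is one more toy calculation of the accounts actioned figure and the proactive rate; all numbers are invented for illustration.

```python
# Hypothetical quarterly figures for illustration only.
removed_proactively   = 1_240_000_000   # found by detection systems before any user report
removed_after_reports = 10_000_000      # found only after users reported them

accounts_actioned = removed_proactively + removed_after_reports
proactive_rate = removed_proactively / accounts_actioned

print(f"accounts actioned: {accounts_actioned:,}")
print(f"proactive rate:    {proactive_rate:.1%}")   # just over 99%
```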
