Bot problem? Facebook estimates 8.7% of users are duplicate, miscategorized or spam accounts

Facebook says 8.7 percent of its monthly active user total might violate terms of service and be either duplicate, miscategorized or “undesirable” accounts meant for spamming, according to a filing with the Securities and Exchange Commission.

In its quarterly report, Facebook provided updated numbers and new details about illegitimate accounts, which could represent about 83 million users. The company estimates 4.8 percent of its 955 million monthly active users are duplicate accounts. For instance, a user may use one account for connecting with work acquaintances and another for family and close friends.

Facebook says 2.4 percent of accounts are likely miscategorized accounts where users have created personal profiles for a business, organization or pet. These entities should be represented on Facebook with pages, not profiles, according to the social network’s terms of service.

Facebook also estimates that 1.5 percent of monthly active users are “undesirable accounts,” which are false accounts that are created for spamming or other purposes that violate terms. Earlier this week, a music startup claimed that 80 percent of clicks on its Facebook ad campaign came from bots. Facebook says it is investigating the claims.
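
The math in the filing is internally consistent. Here's a quick back-of-the-envelope sketch (the percentages and user count are Facebook's; the script is just arithmetic):

```python
# Figures from Facebook's quarterly SEC filing.
MAU = 955_000_000  # monthly active users

rates = {
    "duplicate": 0.048,
    "miscategorized": 0.024,
    "undesirable (spam)": 0.015,
}

for label, rate in rates.items():
    print(f"{label}: ~{rate * MAU / 1e6:.0f} million accounts")

total = sum(rates.values())
print(f"total: {total:.1%} of MAU, ~{total * MAU / 1e6:.0f} million accounts")
# total: 8.7% of MAU, ~83 million accounts
```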

According to Facebook’s quarterly report, the percentage of accounts that are duplicate or false is significantly lower in developed markets such as the United States or Australia but higher in developing markets such as Indonesia and Turkey. The company says it creates these estimates based on an internal review of a limited sample of accounts. Reviewers identify names that appear to be fake and other behavior that appears inauthentic.

In March, Facebook estimated 5 to 6 percent of its 845 million monthly active users could be false or duplicate accounts. At that time, the company did not offer estimates about what percentage of these accounts were duplicate, miscategorized or otherwise undesirable.

Facebook now asks for more details when users report fake or abusive accounts

Facebook has updated its user flow for reporting fake or abusive accounts to include clearer options that could improve the level of feedback Facebook receives.

It’s unclear exactly when Facebook made this change, but now when users click “Report/block” from a user’s Timeline, they have the option to fill out a detailed report. Users can select multiple reasons and confirm their report before sending it. There’s also an option to share additional information in a text field later in the flow.
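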

Previously, Facebook presented some of these options under drop-down menus that users might not have seen. Now it’s more intuitive and efficient to indicate what the problem with an account might be.

Facebook estimates that 5 or 6 percent of its more than 900 million monthly active users are false or duplicate accounts. The social network has a number of systems in place to detect fake accounts, including looking at the rate of accepted and rejected friend requests, but manual user reports are an important part of discovering and eliminating those accounts. We’ve also heard that if an account is connected to several other accounts that have been reported, Facebook takes that as an indication that the account might also be a fake.
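
Facebook hasn't published how these signals are weighted, but the ones described above lend themselves to a simple suspicion score. A purely illustrative sketch, with hypothetical field names, weights and threshold:

```python
# Illustrative only: the signals mirror those described above, but the
# field names, weights and threshold are invented, not Facebook's system.
def fake_account_score(account: dict) -> float:
    score = 0.0
    sent = account.get("friend_requests_sent", 0)
    if sent > 0:
        # A high rejection rate on outgoing friend requests is a spam signal.
        score += account.get("friend_requests_rejected", 0) / sent
    # Connections to several already-reported accounts raise suspicion.
    score += 0.2 * account.get("reported_friends", 0)
    return score

def flag_for_review(account: dict, threshold: float = 1.0) -> bool:
    return fake_account_score(account) >= threshold

print(flag_for_review({"friend_requests_sent": 50,
                       "friend_requests_rejected": 40,
                       "reported_friends": 3}))  # True: 0.8 + 0.6 = 1.4
```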

Removing false or abusive accounts is critical for Facebook to maintain a level of trust and usefulness for users. It is also important to give users the sense that their reports have an impact. Earlier this year, Facebook introduced a “support dashboard” for users to track the progress of their reports, and last year the company added several features to help users deal with unwanted photo tags and bullying posts.

Facebook shared a diagram of how its reporting system works here.

Facebook introduces new security features for protecting accounts through mobile devices

Facebook today announced new security features for mobile devices, allowing users to confirm logins made from new devices, report unwanted content from their mobile feed, and more easily recover compromised accounts.

The social network updated its login approvals process to make it easier for Android users to authorize logins that are made on new devices. With the new “Code Generator,” users will be able to receive login approval codes from their Facebook app rather than having to wait for a text message. This is an improvement over the previous system, which didn’t work if a user did not have cellular or Internet access. Android users can turn on Code Generator here. Facebook says it is working to bring this functionality to other devices.
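
Facebook hasn't said how Code Generator derives its codes, but producing valid codes with no connectivity is the hallmark of time-based one-time passwords (TOTP, RFC 6238): the phone and the server compute the same short code from a shared secret and the current time. A minimal sketch of that technique, not Facebook's actual implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# No network needed: both sides derive the code from the shared secret
# provisioned when the feature was enabled.
print(totp(b"secret-provisioned-at-setup"))
```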

Users of Facebook’s touch-enabled mobile site will now be able to hide or report content from News Feed. This is done by tapping the feedback icon beneath a story and tapping the icon in the upper right hand corner. Facebook says it is working to bring this feature to native mobile apps as well.

The social network has also brought its account recovery tool, which was previously only available on desktop, to mobile devices. Users can review unauthorized logins, reset their password, and take other actions to regain control of their accounts from their phones. See Facebook’s note here for more details.

Facebook adds app ratings and negative feedback metrics to insights tool

Facebook introduced app ratings and negative feedback metrics to the app insights tool today as a way to help developers gain a better understanding of how users respond to their apps.

Detailed in a developer blog post, these new metrics allow developers to go beyond monitoring growth trends and gain insight into how users respond to their apps. Facebook has used this data to determine what appears in News Feed, but developers previously did not have the ability to see these insights or track their performance over time. Facebook says app ratings and negative feedback will also factor into which apps appear in the recently announced App Center.

The new app ratings dashboard in app insights will show developers how users have rated an app on a scale of one to five stars. Developers can view ratings as absolute numbers or relative percentages across different demographics, including age and gender, country and locale. Before, developers could only see their apps’ average overall rating and could not determine how the rating varied over time or by demographic.

The negative feedback metric will show developers how many times people have chosen to hide stories from an app, reported stories as spam or blocked an app completely. These reports are the same ones Facebook uses in its automated systems to detect spam on the platform, so it will be useful for developers in monitoring their performance over time. The insights will include the ratio of negative feedback to total impressions as well as an overview chart that displays green when an app is doing well and yellow when an app has reached a level of concern. Developers will even be able to compare the type of feedback they get from specific action types, as well as compare how users versus non-users respond to their content.
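
Facebook hasn't disclosed the cutoff at which the overview chart turns from green to yellow, but the metric itself is a simple ratio. A sketch with a placeholder threshold:

```python
# Sketch of the metric as described; the 0.5% "concern" cutoff is a
# placeholder, since Facebook hasn't published the real threshold.
def negative_feedback_ratio(hides: int, spam_reports: int,
                            blocks: int, impressions: int) -> float:
    return (hides + spam_reports + blocks) / max(impressions, 1)

def chart_color(ratio: float, concern: float = 0.005) -> str:
    return "yellow" if ratio >= concern else "green"

ratio = negative_feedback_ratio(hides=120, spam_reports=30,
                                blocks=10, impressions=50_000)
print(f"{ratio:.2%} -> {chart_color(ratio)}")  # 0.32% -> green
```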

Facebook partners with anti-virus software makers to provide free computer security

Facebook today announced the Antivirus Marketplace, a partnership with Microsoft, McAfee, Trend Micro, Sophos, and Symantec, to provide the social network’s users with free six-month licenses to anti-virus software.

The anti-virus partners will augment Facebook’s URL blacklist system with their own URL blacklist databases. The companies will also contribute to the Facebook Security Blog to provide information about how users can protect their accounts and keep Facebook safe for other users.
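
The augmentation itself is conceptually simple: a link is blocked if Facebook's own blacklist or any partner's database flags it. A minimal sketch with invented domains:

```python
# Minimal sketch of blacklist augmentation; all domains are made up.
facebook_blacklist = {"evil.example.com"}
partner_blacklists = {
    "McAfee": {"phish.example.net"},
    "Symantec": {"malware.example.org"},
}

def is_blocked(domain: str) -> bool:
    """A domain is blocked if any list, Facebook's or a partner's, flags it."""
    return (domain in facebook_blacklist
            or any(domain in bl for bl in partner_blacklists.values()))

print(is_blocked("phish.example.net"))  # True, caught by a partner database
```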

With 901 million monthly active users, the social network is a prime target for malware, viruses, phishing attacks and spam. Facebook has made several improvements to its automated enforcement — including self-XSS protection and quickly blocking apps, profiles or links that a certain percentage of people mark as spam — and user-facing security protections, like login approvals and remote log-out, but providing anti-virus software adds another layer of security.

The company has worked with McAfee in the past to give users free and discounted anti-virus protection. Symantec, which produces Norton anti-virus software, has maintained an app on Facebook since 2010 that scans users’ News Feeds for unsafe links. The company also helped Facebook identify a data leak in 2011. Sophos is an interesting partner since the security software developer has been quite critical of Facebook in the past.

Users can visit the Facebook Security page to access the free software downloads and receive updates about protecting their accounts.

Facebook gives permalinks to individual comments, hides potential spam

Facebook now assigns permanent links to all comments on the site and hides spam comments rather than just marking them with a darker background. The company announced the improvements in a post on its Facebook + Journalists page.

With the addition of permalinks, users can share a direct link to any comment. When users visit the link, the comment will appear at the top of the page and will briefly appear highlighted in yellow. Previously there was no way to do this, and it could be difficult to find a particular comment among a thread of dozens or sometimes hundreds of others. Permalinks can be accessed by clicking the timestamp of a comment.

Facebook added permalinks to comments in its plug-in for third-party sites last year, but didn’t do this for the main site. Whether this was an issue of scale or lack of demand is unclear. However, with the increase in Facebook activity among public figures, more public conversations are happening on the site and being able to link to comments directly is important. On Twitter, for example, every tweet has a unique URL, making it easy to refer back to specific parts of a thread.

Other comment features might not be necessary when users interact with their friends on the social network, but as they engage with pages and popular people who allow subscribers, the deficiencies of comments on Facebook.com become more apparent. For example, Facebook doesn’t thread comments or sort them by relevancy on the site as it does with its plug-in. On Facebook.com, all comments are presented in a single thread. There is no way to clearly and directly respond to a comment from another user. Admins can @ tag people who have commented on a post, but users can only tag the names of their friends. (In Facebook groups, users can tag anyone in the group even without being connected as friends.) Comments are presented in order of when they were posted. However, the Facebook comments plug-in used by websites including this one shows relevant comments from friends, friends of friends and the most liked or active discussion threads above others.

Comments on Facebook.com do have spam detection. Potential spam comments are not visible to other users, but they used to be shown to admins with a darker grey background. This would catch moderators’ eyes so they could delete the comment, block the user or unmark the item as spam. Now potential spam will be hidden behind an ellipsis. Page owners can click the ellipsis to see the comments and take action on them.

[Update 3/30/12 3:01 p.m. PT - Vadim Lavrusik, journalist program manager at Facebook, tells us this change is for public comments on personal Timelines that have the subscribe feature enabled, not pages.]

Facebook Reveals More Details About Timeline, Including an Approval Process for Open Graph Apps

“We’ve tried to be mindful about the lessons we’ve learned,” Facebook Product Manager Carl Sjogreen told me this morning when we sat down to discuss Timeline, the redesigned version of the user profile that debuted at f8 last month. He says that as the product rolls out over the next few weeks, Facebook will be manually reviewing and approving new Open Graph apps to prevent the spammy experience that emerged when Facebook temporarily gave third-party applications a place on the profile years ago.

This approach is much closer to Apple’s model of approving apps before they enter the App Store than to the way Facebook allows canvas apps to launch on its Platform without pre-approval. Sjogreen also revealed more details about Timeline, including that users will be given a curation period to manicure the content displayed in their new profile before it becomes visible to friends. Facebook believes that through social content curation and new lifestyle apps, users will be able to express themselves in more nuanced ways than ever before.

Timeline’s Impact on Privacy

Facebook launched Timeline to allow users to tell their story not just through their most recent activity as the old profile wall did, but through all of the most important moments of their life. Users can also authorize Open Graph apps to automatically publish activity such as song listens to their Timeline. Sjogreen says “All the feedback is pretty positive. People have complimented the design aesthetic”, which includes a place for a big banner image and provides users the flexibility to feature or hide different content.

Since a user’s friends can easily navigate all the way back to their first Facebook posts through Timeline, a lot of content that was previously difficult to access will become readily visible. This content might include major life events, but also objectionable or inappropriate posts users might have forgotten about but wouldn’t want family or professional colleagues to see.

No privacy settings have been changed and all Timeline content could previously be found by scrolling far enough down a user’s profile, but Timeline does allow historic content to be accessed with one or two clicks rather than dozens or hundreds.

To address this, when users receive the rollout of Timeline, Sjogreen says they’ll be given a curation period in which only they will be able to see their Timeline so they can go back and hide content or adjust its privacy controls. They can then publish the Timeline and make it visible when they’re ready. Developers were given a similar curation period when they first received access to Timeline at f8.

Still, Facebook will need to carefully inform users of the importance of this curation period or they might skip it and make content visible that they might later regret. Sjogreen said he wasn’t aware of plans for this kind of messaging, though.

Regarding less appropriate content becoming visible, Sjogreen pointed to Facebook’s goal of people becoming more open, as well as changing cultural norms (privacy relaxing over time). “Timeline will be seen in a broader context. I think people understand that everyone went to college, everyone has a photo they posted to Facebook from college.” Everyone’s employers might not be so keen on seeing such racy party pictures or controversial status updates, though.

Timeline Apps Will Be Reviewed by Facebook

From 2008 to 2010, Facebook allowed users to install applications on their profile. While some conveyed important information such as where a user had travelled, Sjogreen told me that users would install “clowny apps” that they’d soon stop using but that retained a prominent place on the profile and were designed to spread virally.

Facebook gradually hid then finally removed all profile apps in 2010. It is now applying the lessons it learned from its first attempt at profile apps to create a less spammy experience this time around. Timeline is designed to show more recent activity, but increasingly weed out less important content as users scroll backwards. Sjogreen says “apps don’t have a permanent place in the Timeline” meaning if a user installs an app but stops using it, it will quickly become less visible.

Along the same lines, Sjogreen tells me Facebook will not reward apps that publish more frequently than others. For example, say a user listens to 100 songs on Spotify and tracks one run using Nike’s running app in a single week. Timeline might give the two apps equal real estate by showing only a summary of the user’s most listened-to songs while still showing news of the one workout.
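
Facebook hasn't detailed the aggregation rules, but the example implies frequency-based rollups: high-volume actions get summarized while rare ones surface individually. An illustrative sketch with an invented weekly threshold:

```python
from collections import Counter

# Illustrative only: the rollup threshold is invented, and real Timeline
# ranking is far more involved than counting actions per app.
def timeline_stories(actions: list[tuple[str, str]], threshold: int = 5) -> list[str]:
    counts = Counter(app for app, _ in actions)
    stories = []
    for app, count in counts.items():
        if count >= threshold:
            # Frequent actions collapse into one summary story.
            stories.append(f"Summary: {count} actions from {app} this week")
        else:
            # Rare actions are shown individually.
            stories.extend(f"{app}: {obj}" for a, obj in actions if a == app)
    return stories

week = [("Spotify", f"song {i}") for i in range(100)] + [("Nike+", "one 5k run")]
print(timeline_stories(week))
# ['Summary: 100 actions from Spotify this week', 'Nike+: one 5k run']
```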

“We’ve learned a lot in hindsight, and built a lot of technologies to make sure we’re targeting users with info they find relevant” says Sjogreen. By using its new Open Graph app activity sorting algorithm Graph Rank and other systems, Sjogreen tells me Facebook has reduced Platform spam by 99%, up from the 95% reduction in spam Facebook CTO Bret Taylor cited at our Inside Social Apps conference in January.

Developers are helping with this process by structuring the data about user activity that they send to Facebook. They can select from official verbs and nouns such as “listened” and “song” to let Facebook know what kind of content they’re submitting. Facebook can then determine that each song listen might be less important to display in Timeline than actions that occur less frequently, such as meals cooked or movies watched. Custom actions and objects can also be configured by developers.
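
In practice, publishing a structured action is an HTTP POST to the Graph API. A sketch assuming the built-in music.listens action, a placeholder access token and an invented object URL:

```python
import requests

ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # placeholder; a real token comes from Facebook Login

# Publish a "listened to a song" action: the verb is the endpoint,
# the noun is a URL pointing to a page marked up as an Open Graph "song".
resp = requests.post(
    "https://graph.facebook.com/me/music.listens",
    data={
        "song": "http://example.com/songs/yellow-submarine",  # invented object URL
        "access_token": ACCESS_TOKEN,
    },
)
print(resp.json())  # on success, the ID of the newly published action
```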

However, to “make sure the initial experience with Timeline is really great,” Facebook is now manually reviewing the submission of new Open Graph apps to check out their nouns, verbs, and what triggers an activity to be published.

This approval process differs significantly from its Games Platform, where developers publicly launch an app without needing permission from Facebook; apps only get reviewed by the company if they receive negative feedback from users. Sjogreen tells me that “something publishing every minute will get shut down quickly or never be approved in the first place. We’re trying not to get in the business of making value judgements like that knitting app is good and this joke app is bad, but we’re making sure apps are only publishing legitimate activity.”

Such an approach might make it harder for developers, but it should work well to protect the user experience from spam apps that constantly publish low quality stories to the Timeline and home page Ticker. Regarding whether this approach would scale when more and more developers begin submitting apps, Sjogreen says “this level of approval is different than us playing every game on the Platform and making sure it meets some quality bar.”

Facebook is preparing to make a major change to how users express themselves with the rollout of Timeline. It will need to clearly communicate the privacy implications of ready access to old content in order to avoid backlash. It will also need to strike a proper balance between a clean user experience and an attractive Open Graph application development Platform. If Facebook can navigate these two pitfalls, Timeline could become the richest way to represent one’s identity online.

Secret Whitelist Protects Top Facebook Page Management Tools From Having Posts Hidden in News Feeds

On Tuesday we published the results of a study indicating that Pages that sync or auto-post their content to Facebook from Twitter or blogs using tools like HootSuite, Twitter, and NetworkedBlogs receive significantly fewer Likes and comments per post than those that post manually using Facebook’s web and mobile interfaces.

This is partly because Facebook consolidates into a folded thread all posts in a user’s news feed, from across Pages and friends, that were published through the same tool, displaying a “Show x more posts from [publisher app]” link.

We’ve now learned that Facebook maintains a secret whitelist of companies whose publishing tools are exempt from having their posts consolidated across different Pages and clients. This protects them from a reduction in news feed impressions. The whitelist includes some top enterprise Page management tools from the Preferred Developer Consultant program, including Buddy Media, Vitrue, Involver, Context Optional and Syncapse. Facebook has forbidden those included from discussing the existence of the whitelist. Facebook has confirmed to us that “trusted partners” are having their posts treated differently.

Since consolidation negatively impacts Page post engagement and other key performance indicators, brands have to consider using whitelisted publishing tools. If they aren’t already, they should either ask their Page management solution provider about gaining admission to the whitelist or switch to a tool protected from consolidation. Overall, the surfacing of the consolidation whitelist may anger developers not on it and push Facebook to change its policy on whose posts are consolidated.

Here’s some more context on what’s happening. In order to gain the maximum exposure, clicks, and other key performance metrics from publishing to the news feed, Facebook Pages need to optimize their EdgeRank, or prominence in the news feed. To do so, they need to consistently publish compelling and widely seen updates to draw Likes and comments that improve their EdgeRank.

However, Facebook has an automated system in place originally designed to collapse flurries of posts published by users playing spammy social games. That system causes any posts present in a user’s news feed that were published by an API publishing tool with the same App ID, whether from one or many Pages or users, to be consolidated into threads that show one post but require users to click to unfold and view the rest of the posts. Since users don’t always unfold the threads, consolidation reduces the impressions of posts, giving them fewer opportunities to score feedback that helps their EdgeRank.
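
To make the folding concrete, here is an illustrative sketch (data shapes invented, and simplified to fold only adjacent posts) of same-App-ID posts collapsing into one visible post plus a “Show x more posts” link:

```python
from itertools import groupby

# Illustrative sketch: adjacent feed posts published through the same
# App ID fold into one visible post plus a "Show x more posts" link.
# (A simplification: the real system consolidates across the whole feed.)
def consolidate(feed: list[dict]) -> list[str]:
    rendered = []
    for app_id, group in groupby(feed, key=lambda p: p["app_id"]):
        posts = list(group)
        rendered.append(posts[0]["text"])  # only the first post is visible
        if len(posts) > 1:
            rendered.append(f"Show {len(posts) - 1} more posts from {app_id}")
    return rendered

feed = [
    {"app_id": "HootSuite", "text": "Brand A update"},
    {"app_id": "HootSuite", "text": "Brand B update"},
    {"app_id": "HootSuite", "text": "Brand C update"},
    {"app_id": "Facebook", "text": "A friend's status"},
]
print(consolidate(feed))
# ['Brand A update', 'Show 2 more posts from HootSuite', "A friend's status"]
```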

The study by EdgeRank Checker and another by Momentus Media show reductions in post engagement rates by as much as 70% for Pages using third-party publishing tools that have posts consolidated across Pages. This engagement reduction cannot be entirely attributed to consolidation, as differences in the content of scheduled or syndicated posts, Page size, and the types of companies that pay for third-party tools all impact engagement as well. Still, post consolidation does negatively impact impression rates, and therefore publishing apps that cause posts to be consolidated should not be used by brands.

To insulate some of the world’s biggest brands who are also heavy advertisers on Facebook, as well as some of the biggest third-party Page management companies from its Preferred Developer Consultant program, Facebook quietly offered admission to a post consolidation whitelist to a few Page management developers. Tools whose App IDs are whitelisted do not have their posts consolidated across Pages (though, in some cases, a single client’s Page may have its own posts consolidated together if it posts multiple times in rapid succession).

Brands using tools on the whitelist have an advantage over their competitors, as they can attain more news feed exposure for their posts. Page management companies can use the higher engagement rates afforded them by the whitelist to attract clients. Page management companies left off this whitelist may feel the double standard is unfair, especially if brands using Twitter, HootSuite, TweetDeck, or NetworkedBlogs ditch them for whitelisted tools.

Executives of Page management companies tell us they don’t believe Facebook was intending to penalize any publishing tool developers with the consolidation system, and rather it was a holdover from a spam prevention effort that Facebook has since handled by limiting how much game content appears in the news feed.

[Update: Facebook has responded to our inquiry about the existence of the whitelist saying "We're focused on ensuring that users see the highest quality stories in News Feed. As part of this, related stories are typically aggregated so users can see a consolidated view of stories from one app. In some cases, we work closely with trusted partners, such as Preferred Developer Consultants, to test new ways of surfacing stories, and gather feedback to improve the Platform experience."

Though Facebook calls this a "test", the exemption of certain tools from post consolidation has been going on for many months. The whitelist could therefore be interpreted as favoritism rather than just an attempt to gather data to improve the user experience.]

Exempting trusted publishers from post consolidation may have been intended as a temporary solution until a more sophisticated way to keep individual publishers from overrunning the news feed could be developed. But in the meantime, the whitelist has created an uneven playing field where certain publishers and the brands that use them receive much less visibility in the news feed than others.

If Facebook wants to keep the long-tail of third-party developers happy and working on its Platform, it will need to provide more transparency around how the post consolidation system currently works. It will also need to quickly fix it so no publishing tools and their brand clients are penalized for legitimate promotion in an effort to control game spam.

[Thanks to Momentus Media for data that informed this post]

Facebook Adds “Hide All From [Advertiser]” Feedback Option to Punish Spammers

Facebook has confirmed that it is testing a new feedback option for the ads shown on its site that allows users to block specific advertisers from reaching them. The “Hide all from [advertiser]” option is appearing to some users when they ‘x’ out an unwanted ad, in addition to the existing option to select why they clicked to remove the ad.

When apps and Pages have their posts hidden from the news feed, Facebook’s quality ranking system decreases the prominence of that entity’s posts to all users. If Facebook applies the same quality ranking algorithm to ads, being hidden through the new feedback option could decrease the prominence of all of an advertiser’s ads. This could encourage them to use more responsible, less spammy ad creative to avoid being hidden.
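
That “if” can be made concrete with a toy model, purely speculative and mirroring the article’s hypothetical, in which an ad’s effective bid is discounted by its hide rate:

```python
# Purely speculative toy model: nothing here is a confirmed Facebook mechanism.
def effective_bid(bid: float, hides: int, impressions: int) -> float:
    """Discount an ad's bid by the fraction of viewers who hid the advertiser."""
    hide_rate = hides / max(impressions, 1)
    return bid * (1.0 - hide_rate)

# An advertiser hidden by 5% of viewers effectively bids 5% less,
# pushing its ads toward lower positions in the right-sidebar ad stack.
print(effective_bid(bid=1.00, hides=500, impressions=10_000))  # 0.95
```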

Alternatively, if Facebook doesn’t apply the quality ranking system, being hidden might actually improve an advertiser’s click-through rates, because those who otherwise wouldn’t click can exempt themselves from impressions. Either way, if rolled out, the ad feedback option could improve the Facebook experience for those sensitive to the content of the ads they see.

Facebook has long allowed users to provide feedback on ads, providing users with choices such as “uninteresting”, “misleading”, “sexually explicit”, or “repetitive” when they ‘x’ out an unwanted ad. This data helps Facebook refine its ad targeting algorithm, identifying if certain types of ads are relevant to a user, or are being shown too frequently.

We also assume that advertisers receiving negative marks about the content of their ad creatives are subject to reprimand or throttling of the placement of their ads. This would keep advertisers from using aggressive or spammy tactics to boost CTR at the expense of the user experience.

Now when some users ‘x’ out an ad, they see the option to either “Hide this ad” or “Hide all from [advertiser]”, in the case of our example “Hide all from Buy South Africa Online”. If a user chooses the latter, they’ll see the message “Ads hidden. We’ll try not to show you ads from [advertiser]”. The term ‘try’ is likely used because advertisers could reach users that have hidden them by creating new ad accounts under different names.

Users then have the options to select why they hid the ad, or unhide the advertiser. Facebook recently disabled a number of apps that were receiving high volumes of negative feedback on their news feed and wall posts. This led to an outcry about a lack of transparency around enforcement, so Facebook launched feedback analytics and benchmarks for apps, so developers could determine when they were being too spammy.

Facebook explained that apps receiving negative feedback would see negative impacts on their EdgeRank, or the prominence of their posts in the news feed. It’s believed that a similar system punishes spammy Pages.

That same quality ranking system could apply to advertisers as well, and the “Hide all from [advertiser]” option would give users a way to explicitly fight back against those showing them objectionable ads. Advertisers receiving high volumes of negative feedback could possibly have their ads shown in lower positions in the ad stacks that appear in Facebook’s right sidebar.

By increasing the repercussions for aggressive or spammy advertisers, Facebook may be able to provide a more appealing browsing experience and attract high quality brands to market on its platform.

How to Effectively Manage Critics, Trolls and Spammers on Facebook Pages


The following is an excerpt from our Facebook Marketing Bible. The full version contains detailed strategies for dealing with each type of disruptive commenter and three more key tactics for making your Page an inviting community.

A well-managed Facebook Page allows businesses of all sizes to build a large and engaged community of fans, many of whom can and will become loyal customers and advocates of the brand if nurtured correctly and consistently.

As your Page grows in popularity and starts to attract hundreds and thousands of Likes, it will also begin to see unwelcome attention from the less-savory members of larger online communities – critics, trolls and spammers. While this is a largely unavoidable side effect of popularity, Page administrators can take steps to ensure that these kinds of members are controlled and removed.

Know Your Enemy

Critics - Critics are commenters that hurt a brand’s image by filling its Page wall with negative assessments of the brand’s identity, products, or services. They can be difficult to identify and manage, as they can veer from being your biggest fan to your most outspoken naysayer from one moment to the next.

Trolls - A troll is someone who consistently posts inflammatory, negative and disruptive messages to your Facebook Page, with the sole intent of provoking an emotional reaction amongst the other members of your community. Trolls differ from critics in that they usually have no actual interest in the brand’s products and services, but are simply there to cause problems.

Spammers - The rate of spam posted on any given Facebook Page tends to grow with the number of Likes it has. While Facebook’s spam filters will do their best to identify and move spam to your Wall’s hidden tab, this is at best a hit-and-miss affair and some spam will get through.

Admins can employ the following tactics to ensure that their Page is optimized to recognize and manage problem users.

1. Create a Customized Page Rules Tab

One of the smartest things all Facebook Page Admins can and should do as soon as possible is implement a customized Page rules tab that clearly lists the behavioral expectations of members of the community.

Coca-Cola’s House Rules is one example of how this can be done.

This tab will give you something to point to if users ask why they or someone else was banned. The tab is also likely to make all community members who see it more civil.

2. Take It To Email

Facebook Pages do not provide any kind of private messaging system, but sometimes a customer needs to be engaged on a one-to-one basis, and the best way to do this is to recommend directly to them that they contact you via email. This has numerous benefits – the customer can speak more freely, you can provide a more personal level of support and if the matter gets heated it doesn’t have to be a public affair.

If you feel that a customer has a legitimate enquiry but that public correspondence might become disruptive to the Facebook Page or even damage the reputation of the brand, it’s good advice to move things to email as soon as possible. Reply to their comment with your customer support email address or another email address they can reach you at and kindly ask them to follow up with you via email.

If you have made the decision to have a brand presence on Facebook then the business of moderating your Page needs to be taken seriously, with the correct level of resources made available to meet the expectations of your fans as the Page grows in size and stature.

The rest of our strategies for handling disruptive commenters and improving the civility of conversation on your Facebook Page can be found in the Facebook Marketing Bible, Inside Network’s comprehensive guide to marketing and advertising through Facebook.
