With so much of the digital world contending with leaks and hacks, Facebook knows that keeping user data secure is among the most important things it can do.
In a blog post written by Security Engineer Chris Long, the company’s security team explained how it keeps passwords safe:
Our team wanted to do something to improve this situation, so we built a system dedicated to further securing people’s Facebook accounts by actively looking for these public postings, analyzing them, and then notifying people when we discover that their credentials have shown up elsewhere on the Internet. To do this, we monitor a selection of different ‘paste’ sites for stolen credentials and watch for reports of large scale data breaches. We collect the stolen credentials that have been publicly posted and check them to see if the stolen email and password combination matches the same email and password being used on Facebook. This is a completely automated process that doesn’t require us to know or store your actual Facebook password in an unhashed form. In other words, no one here has your plain text password. To check for matches, we take the email address and password and run them through the same code that we use to check your password at login time. If we find a match, we’ll notify you the next time you log in and guide you through a process to change your password.
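The matching Long describes — running a leaked email/password pair through the same code used at login, so no plain-text password is ever stored — can be sketched as follows. This is a hypothetical illustration, not Facebook’s actual code; the salted PBKDF2 hash stands in for whatever login-time hashing scheme the real system uses.

```python
import hashlib
import hmac
import os

# Hypothetical sketch: compare a leaked password against a stored login
# hash without ever keeping the password in unhashed form.

def hash_password(password: str, salt: bytes) -> bytes:
    # Salted, slow hash; any login-time hashing scheme works the same way here.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def leaked_pair_matches(leaked_password: str, stored_salt: bytes,
                        stored_hash: bytes) -> bool:
    # Run the leaked password through the same code used at login time,
    # then compare in constant time.
    candidate = hash_password(leaked_password, stored_salt)
    return hmac.compare_digest(candidate, stored_hash)

# Usage: simulate one stored account and two entries from a credential dump.
salt = os.urandom(16)
stored = hash_password("hunter2", salt)
print(leaked_pair_matches("hunter2", salt, stored))  # match: notify the user
print(leaked_pair_matches("letmein", salt, stored))  # no match: nothing to do
```

A match triggers the notification flow described above; a non-match reveals nothing about the account, since only hashes are compared.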
As Facebook celebrates National Cyber Security Awareness Month, the company’s security team talked about how Facebook develops with security squarely in mind.
Benjamin Strahs, Security Infrastructure Engineer, recently served on a panel organized by Bloomberg Government in Washington, D.C., talking Internet security with representatives from the Department of Homeland Security, Google and Microsoft.
He wrote a blog post Monday detailing Facebook’s mission with regard to security:
Security is core to everything we do at Facebook, and we believe everyone at the company plays a role in keeping our platform safe. Building a security-aware culture means understanding that a security vulnerability popping up in HR could be just as serious as one in our back-end systems. We’re currently celebrating our annual tradition of Hacktober, our internal security awareness initiative that runs all month long and pulls together technical and non-technical teams across the company. Employees participate in trainings, talks, activities like movie nights, and drills that test them to identify suspicious behavior like stray USB keys and fake phishing emails. People who join in the fun walk away with special Hacktober t-shirts and other goodies. After running the program for four years, we’ve seen it take off across our global offices and drive participation in our security discussion groups throughout the rest of the year.
In the spirit of National Cyber Security Awareness Month, Facebook recently shared a video with some tips that users can take to ensure their profile is locked down.
The video below explains features such as login approvals and remote session management.
Facebook has come under fire for its research practices, with many people feeling that the company is tampering with users’ moods via News Feed experiments.
Facebook Chief Technology Officer Mike Schroepfer addressed this today, saying that Facebook is putting into effect a new framework that governs both internal work and research that might be published — starting with clearer guidelines for researchers:
In 2011, there were studies suggesting that when people saw positive posts from friends on Facebook, it made them feel bad. We thought it was important to look into this, to see if this assertion was valid and to see if there was anything we should change about Facebook. Earlier this year, our own research was published, indicating that people respond positively to positive posts from their friends.
Although this subject matter was important to research, we were unprepared for the reaction the paper received when it was published and have taken to heart the comments and criticism. It is clear now that there are things we should have done differently. For example, we should have considered other non-experimental ways to do this research. The research would also have benefited from more extensive review by a wider and more senior group of people. Last, in releasing the study, we failed to communicate clearly why and how we did it.
Facebook users love announcing to the world that they’ve checked in at Disneyland, uploading hashtag-filled selfies and writing public posts with a little too much information. More than users of any other social platform, it seems, Facebook users are willing to hand Mark Zuckerberg and company their intimate details, such as hometown, college, employer, relationship status and birthdate.
But when 4,000 U.S. users were asked if they trust Facebook with their personal data, the answer was a resounding, “No.”
A new study by online identity manager MyLife shows that 82.9 percent of those polled said they did not trust Facebook with their personal information.
Facebook is working to make the Internet a more secure place. The company announced Thursday at the USENIX Security Symposium in San Diego the creation of the Internet Defense Prize — an award recognizing superior quality research that combines a working prototype with great contributions to securing the Internet.
Facebook and USENIX crowned the first winners today. Johannes Dahse and Thorsten Holz, two researchers from Ruhr-Universität Bochum in Germany, were awarded $50,000 for their paper, “Static Detection of Second-Order Vulnerabilities in Web Applications.”
Though Facebook has moved to HTTPS, that doesn’t mean the site is completely safe. A general attack on HTTPS sites called BREACH uses HTTP compression to extract secrets from encrypted responses — including the tokens that usually shield against a different attack, cross-site request forgery (CSRF).
CSRF is used against sites with user accounts, such as Facebook. According to Facebook, the attacker convinces the user’s browser to send plausible web requests to the target website. It’s masked as a common request, so it doesn’t raise any red flags within the browser.
If that works, then the attacker can pose as their victim, sending spam or stealing information.
Facebook detailed in a security blog post how the company protects against these kinds of attacks.
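The standard defense against the CSRF attack described above is a per-session secret token: the server embeds it in its own forms, and a forged cross-site request cannot supply it. This is a generic sketch of that technique, not Facebook’s implementation:

```python
import hmac
import secrets
from typing import Optional

# Generic CSRF-token sketch (not Facebook's actual code): the server issues
# a random token per session and rejects any state-changing request whose
# submitted token doesn't match.

def issue_token() -> str:
    # Unguessable per-session token, embedded in the site's own forms.
    return secrets.token_hex(16)

def is_valid_request(session_token: str, submitted_token: Optional[str]) -> bool:
    # A forged cross-site request can't read the token, so it arrives
    # missing or wrong; compare in constant time when present.
    if submitted_token is None:
        return False
    return hmac.compare_digest(session_token, submitted_token)

session_token = issue_token()
print(is_valid_request(session_token, session_token))  # legitimate form post
print(is_valid_request(session_token, None))           # forged request, rejected
```

Because the attacker’s page can trigger requests but cannot read the victim’s token, the “plausible web request” fails the check even though it otherwise looks routine to the browser.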
When Facebook sends out emails about notifications — such as a tagged photo or a friend request — the connection is usually secured with STARTTLS, an extension that upgrades a plain-text connection to an encrypted one. The mechanism has been around for 15 years, but Facebook had heard it wasn’t widely deployed, so the company tested its own email systems to see how many notification emails were encrypted with STARTTLS.
Facebook found that 76 percent of unique MX hostnames that receive its email notifications (which can number in the billions per day) support STARTTLS. Overall, 58 percent of notification emails are successfully encrypted; certificate validation passes for roughly half of those, and the other half are opportunistically encrypted. Facebook pointed out that 74 percent of hosts that support STARTTLS also provide Perfect Forward Secrecy.
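The capability Facebook measured — whether a receiving mail server advertises STARTTLS — can be probed with a plain SMTP handshake. A hedged sketch (the hostname is a placeholder, and the parsing helper is split out so it can be exercised without a network):

```python
import smtplib

# Sketch: check whether a mail host advertises the STARTTLS extension,
# the upgrade Facebook measured across MX hostnames.

def ehlo_advertises_starttls(ehlo_response: bytes) -> bool:
    # An EHLO reply lists one extension keyword per line, e.g. b"STARTTLS".
    lines = ehlo_response.upper().splitlines()
    return any(line.strip() == b"STARTTLS" for line in lines)

def supports_starttls(host: str, timeout: float = 10.0) -> bool:
    # Live probe; requires outbound access on port 25.
    with smtplib.SMTP(host, 25, timeout=timeout) as smtp:
        code, response = smtp.ehlo()
        return code == 250 and ehlo_advertises_starttls(response)

# Offline check of the parsing helper against a sample EHLO reply:
print(ehlo_advertises_starttls(b"mx.example.com\nSIZE 10240000\nSTARTTLS\n8BITMIME"))
```

A hostname would be passed to `supports_starttls` to run the live probe; measuring the 76 percent figure amounts to running this check across every unique MX host that receives notifications.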
Facebook CEO Mark Zuckerberg’s new mantra of “Move fast with stable infra” might not be as sexy as “Move fast and break things,” but it reflects Facebook’s shift in ideology. Now that Facebook is 10 years old and a publicly traded company, it is past the risky startup stage and is in a position to give developers, advertisers and users more stability and security.
Moving away from breaking things, Facebook is putting more control over app permissions and login into users’ hands. Zuckerberg announced at f8 that users will have more granular controls over what data is shared with apps. Additionally, users wary of the “Login with Facebook” button now have a way to sign into an app without sharing any Facebook information at all.
As Zuckerberg emphasized that Facebook is putting people first, he described the new controls:
Over the years, one of the things we’ve heard just over and over again is that people want more control over how they share their information, especially with apps, and they want more say and control over how apps use their data. … We take this really seriously. If people don’t have the tools they need to feel comfortable using your apps, then that’s bad for them and it’s bad for you. It will prevent people from having good personalized experiences and trying out new things, but it also might hurt you and prevent you from getting some new potential customers.
The new Facebook Login flow should be available in the coming weeks, while anonymous login is in beta with a few developers, and a wider rollout is planned for the next few months.
Facebook rewards white hat researchers who find errors and holes in the social network’s code, but don’t exploit them. In a look ahead at Facebook’s bug bounty program in 2014, Security Engineer Collin Greene examined what the program did in 2013.
Last year, Facebook received 14,763 submissions from researchers — a 246 percent increase from 2012. Of those submissions, 687 were valid and eligible to receive a reward. Six percent of the eligible bugs were categorized as high severity, and Facebook responded to those with a median response time of about six hours.
Facebook paid out $1.5 million to 330 researchers around the world, with the average reward being $2,204. Most bugs were discovered in non-core properties, such as websites operated by companies acquired by Facebook.