
Lessons Learned from a Critical Data Breach: A Case Study

Fan Pier Labs

In today’s digital age, ensuring robust cybersecurity and proper operational procedures is crucial for any business. This case study highlights how one small business overcame a significant data breach and transformed its technical infrastructure and policies for better security and efficiency.

The Situation

The client is a small business that builds web tools for other businesses.

Lacking in-house software engineers, the client hired contractors remotely through platforms like Upwork, since international engineers cost less than those in the United States. At the time, they had active contracts with two to four remote engineers, each working on a different part of the site.

The web application consists of a MongoDB database and a backend application, both running on a single EC2 instance in AWS. The client had taken only a few manual backups of the EBS volume attached to the instance and had no automated backup system in place. The application also integrated with several third-party services, such as Twilio.

The Breach

In early May, the client suffered a severe data breach: the MongoDB database was deleted and replaced with a ransom note demanding roughly $500 in Bitcoin. The client’s last manual backup was from the previous November, six months earlier, and lacked the latest user data. The site was taken offline, and the client suspected a recently dismissed engineer might be behind the attack.

The ransom note read, verbatim:

“All your data is a backed up. You must pay 0.043 BTC to 1Kz6v4B5CawcnL8jrUvHsvzQv5Yq4fbsSv 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: _____@onionmail.org and you will receive a link to download your database dump.”

Recovery

Initially, things looked bleak. The database was wiped, there were no recent backups, the site was down, and nobody had a clear solution. Our best remaining option seemed to be restoring the database from the backup made several months earlier, but that would have caused a major disruption for the client’s customers.

We began discussing possible solutions with the engineering team. Fortunately, one of the engineers had made a backup of the database just a few days before the incident. Relieved, we immediately verified the backup and restored it to production.

Proper Procedure

After the site was restored, we began working with the client to implement proper access control policies and a solid technical infrastructure. Specifically, we recommended they:

Migrate from a self-managed MongoDB instance to Amazon DocumentDB

  • Automatic Backups: DocumentDB provides automated backups, reducing the risk of data loss and ensuring that recent data is always recoverable.
  • Managed Service: This shift reduces the burden of database maintenance and management, allowing the client to focus on building great products.
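As a rough sketch of what automated backups look like in practice, DocumentDB lets you set the backup retention and backup window at cluster-creation time. The identifiers, window, and seven-day retention below are illustrative, not the client’s actual values; the boto3 call that would consume these parameters is shown but not executed.

```python
# Sketch: parameters for an Amazon DocumentDB cluster with automated
# backups enabled. All identifiers and values are illustrative.
def docdb_cluster_params(cluster_id, master_user, master_password,
                         retention_days=7):
    """Build the kwargs for boto3's docdb create_db_cluster call."""
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "docdb",
        "MasterUsername": master_user,
        "MasterUserPassword": master_password,
        # Automated daily snapshots, kept for `retention_days` days
        "BackupRetentionPeriod": retention_days,
        # Take backups during a low-traffic window (UTC)
        "PreferredBackupWindow": "03:00-04:00",
        "StorageEncrypted": True,
    }

# Usage (requires AWS credentials; not run here):
# import boto3
# boto3.client("docdb").create_db_cluster(
#     **docdb_cluster_params("app-cluster", "admin", "<secret>"))
```

With retention configured, point-in-time recovery replaces the ad-hoc manual snapshots that left a six-month gap during the breach.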

Significantly trim down AWS access

  • Remove all remote engineers’ AWS Admin Access: Limiting administrative access minimizes the risk of unauthorized changes and enhances overall security.
  • Provide remote engineers with limited SSH and DB access for development and deployment: This ensures that engineers can still perform their tasks without having excessive permissions.
  • Implement IP-address restrictions on SSH and DB access: Restricting access to specific IP addresses helps prevent unauthorized access from untrusted locations.
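One way to express the combination of limited access and IP restrictions is a least-privilege IAM policy with an `aws:SourceIp` condition. The sketch below uses real IAM action and condition-key names, but the CIDR range and instance ARN are placeholders, not values from the incident.

```python
# Sketch: a least-privilege IAM policy allowing SSH key delivery via
# EC2 Instance Connect, only from a trusted IP range. The CIDR block
# and ARN are illustrative placeholders.
import json

def ip_restricted_policy(allowed_cidr, instance_arn):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # EC2 Instance Connect pushes short-lived SSH public keys
            "Action": "ec2-instance-connect:SendSSHPublicKey",
            "Resource": instance_arn,
            "Condition": {
                # Requests from outside this range are implicitly denied
                "IpAddress": {"aws:SourceIp": allowed_cidr}
            },
        }],
    }

policy = ip_restricted_policy(
    "203.0.113.0/24",
    "arn:aws:ec2:us-east-1:123456789012:instance/*")
print(json.dumps(policy, indent=2))
```

Attaching a policy like this to each engineer’s IAM user or group replaces blanket admin access with exactly the capability their work requires.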

Set up a staging environment and restrict production access

  • Staging Environment: Having a separate staging environment allows for testing and development without affecting the live production environment.
  • Controlled Production Access: Restricting production access ensures that only trusted and authorized personnel can deploy new code, reducing the risk of accidental or malicious changes.
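A simple pattern for keeping the two environments apart is to select all environment-specific settings from one place, keyed by an environment variable, so staging code can never accidentally point at the production database. The hostnames and variable name below are illustrative.

```python
# Sketch: per-environment configuration so staging and production never
# share a database. Hostnames and the APP_ENV variable are placeholders.
import os

CONFIGS = {
    "staging": {"db_host": "staging-db.internal", "debug": True},
    "production": {"db_host": "prod-db.internal", "debug": False},
}

def load_config():
    # Default to staging so a missing variable fails toward the safe side
    env = os.environ.get("APP_ENV", "staging")
    if env not in CONFIGS:
        raise ValueError(f"unknown environment: {env}")
    return CONFIGS[env]
```

The production entry is then only reachable from hosts where deployment tooling explicitly sets `APP_ENV=production`.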

Remove personnel access before terminating their work contract

  • This policy ensures that former employees or contractors cannot access company systems, reducing the risk of sabotage or data theft.
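One way to make this policy enforceable rather than aspirational is to encode the revocation steps as data that an offboarding script or reviewer walks through before the contract ends. The method names in the sketch below are real boto3 IAM client calls, but the wiring to an actual AWS account is deliberately left out.

```python
# Sketch: AWS revocation steps for a departing contractor, expressed as
# data so the list can be reviewed, audited, and executed in one pass.
def offboarding_actions(iam_username):
    """Return (service, API call, purpose) steps to run before a
    contract ends. Method names are real boto3 IAM client calls."""
    return [
        ("iam", "delete_login_profile", "revoke the console password"),
        ("iam", "list_access_keys", "enumerate the user's API keys"),
        ("iam", "update_access_key", "mark each key Inactive"),
        ("iam", "remove_user_from_group", "drop group-based permissions"),
        ("iam", "delete_ssh_public_key", "remove uploaded SSH keys"),
    ]
```

In the incident above, running a checklist like this on the day of dismissal would have closed the access path the client suspected was used.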

Set up GitHub branch protection

  • This prevents unauthorized or accidental changes to critical branches, ensuring code integrity and stability.
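Branch protection can be configured through GitHub’s REST API (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`). The sketch below builds the request body with that endpoint’s actual top-level fields; the reviewer count is an illustrative default, not a prescription.

```python
# Sketch: request body for GitHub's branch-protection REST endpoint.
# Field names match the API; the values chosen are illustrative.
def branch_protection_payload(required_reviews=1):
    return {
        # Require status checks (e.g. CI) to pass before merging
        "required_status_checks": {"strict": True, "contexts": []},
        # Apply the rules to admins as well
        "enforce_admins": True,
        # Require approving reviews on every pull request
        "required_pull_request_reviews": {
            "required_approving_review_count": required_reviews,
        },
        # No per-user push restrictions beyond the rules above
        "restrictions": None,
        # Never allow history to be rewritten on the protected branch
        "allow_force_pushes": False,
    }
```

Sending this payload for the `main` branch means no single engineer, trusted or otherwise, can push unreviewed code straight to production.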

Revise the interview process for potential engineers

  • Improving the interview process helps in selecting more qualified and trustworthy engineers, reducing the risk of hiring individuals who might pose a security threat.

Improve how API keys are handled for calls to third-party integrations

  • Properly managing API keys reduces the risk of unauthorized access to third-party services, ensuring data security and compliance.
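The simplest improvement here is to stop hardcoding keys in the repository and load them from the environment (or a secrets manager) at startup, failing loudly when one is missing. The variable name in the usage example is illustrative.

```python
# Sketch: loading a third-party API key from the environment instead of
# committing it to source control. Variable names are illustrative.
import os

def get_api_key(var_name):
    """Fetch a secret from the environment, failing loudly if unset."""
    value = os.environ.get(var_name)
    if not value:
        raise RuntimeError(
            f"{var_name} is not set; configure it outside source control")
    return value

# Usage (hypothetical variable name for a Twilio credential):
# auth_token = get_api_key("TWILIO_AUTH_TOKEN")
```

Keys that live only in the deployment environment can also be rotated immediately when a contractor departs, without touching the codebase.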

Explore the budget for hiring more experienced engineers

  • Investing in more experienced engineers can lead to better code quality, more efficient problem-solving, and enhanced security practices.

Implement other technical and process changes as needed

  • Regularly updating and refining technical and operational processes ensures the company stays ahead of potential security threats and operational inefficiencies.

Conclusion

This experience was a wake-up call for the client and underscored the importance of having robust security measures and operational procedures in place. By taking these steps, the client not only recovered from the breach but also fortified their infrastructure against future threats, ensuring a more secure and efficient operation moving forward.

Fan Pier Labs

Helping startups with Web Development, AWS Infrastructure, and Machine Learning since 2019.