Let’s look at some best practices to help secure internal applications using a commonsense approach.
Since the pandemic, many people no longer work in an office. This poses a unique challenge for development teams that relied on their applications being available only inside the office network. How do we let our users access our application remotely? And how secure do we need to be before we can safely do without protections like Access Control Lists and VPNs?
For most small- to mid-size teams, security is often an afterthought. Therefore, it’s still highly recommended that most applications remain available to users only via IP address Access Control Lists or through a VPN such as OpenVPN or WireGuard. These tools provide an important safety net against a good number of common exploits that occur in in-house developed software, especially software that handles a large amount of Personally Identifiable Information (herein referred to as PII). However, even if you are deployed behind a firewall or VPN, the best practices below still apply.
It should be noted that PII may not be the only thing at risk in a modern security incident. Ransomware is a threat that should be taken seriously as well. In a ransomware attack, your team may lose complete control of the systems under attack.
This means all data and software would be unusable, and the attackers will demand a large sum of money in exchange for a key to decrypt the contents of your systems (and can you really trust that they’ll actually hand over a working key after you’ve paid?). This can be extremely costly and come with an undetermined amount of downtime that can cripple your team. Many businesses don’t have the processes in place to bring systems back online without paying the ransom.
We highly recommend that most systems remain off the public internet unless all of the advice in this post is followed strictly, and even then, a well-versed security architect should review your systems and make specific recommendations.
Secure Internal Applications with Multi-factor Authentication
Ideally, the software would require both a password and a secure one-time code from an authenticator app such as Microsoft Authenticator in order to log in. As a fallback, SMS can be used, though be aware that SMS is not nearly as secure as an OTP application: phone numbers can be spoofed or hijacked, making SMS codes vulnerable to phishing attacks.
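As a rough illustration, here is what server-side verification of an authenticator-app code can look like. This is a minimal sketch assuming the open-source Otp.NET library; the class and storage details are illustrative, not a complete MFA implementation:

```csharp
// Minimal sketch of server-side TOTP verification, assuming the open-source
// Otp.NET library. Key storage and enrollment are out of scope here.
using OtpNet;

public class TotpVerifier
{
    // In a real system, the shared secret is generated once per user
    // (e.g., KeyGeneration.GenerateRandomKey(20)), shown as a QR code during
    // enrollment, and stored encrypted alongside the user record.
    public bool VerifyCode(byte[] userSharedSecret, string codeFromUser)
    {
        var totp = new Totp(userSharedSecret);

        // Allow a small amount of clock drift, per RFC 6238 guidance.
        return totp.VerifyTotp(
            codeFromUser,
            out long _,
            VerificationWindow.RfcSpecifiedNetworkDelay);
    }
}
```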
Cloud-based services such as Office 365 and Google Workspace, as well as centralized SSO services such as Auth0, can be configured to require multi-factor authentication by default for all users.
Source Code Level Security Audit
Ensure that every single page and API endpoint that contains PII is protected: it should require a logged-in user token, and the appropriate roles should be required before the user receives a response. If the user is not logged in, the result should be 401 Unauthorized; if the user is logged in but lacks the appropriate roles, it should be 403 Forbidden. Due to the nature of such an audit, it’s best practice to have it performed by an independent third party on a regular basis; quarterly would be ideal. This is likely a time-consuming and/or expensive process, but it should not be overlooked. Malicious users are constantly scanning for exposed URLs that bypass security restrictions to gain as much insight into your environment as they can.
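In ASP.NET Core, for example, this protection is typically expressed with authorization attributes. A minimal sketch (the role and route names are illustrative):

```csharp
// Minimal ASP.NET Core sketch: every endpoint returning PII requires an
// authenticated user in a specific role. Role and route names are examples.
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/customers")]
[Authorize(Roles = "CustomerService")] // no login -> 401, wrong role -> 403
public class CustomersController : ControllerBase
{
    [HttpGet("{id}")]
    public IActionResult GetCustomer(int id)
    {
        // ... load and return the customer record ...
        return Ok();
    }
}
```

Pairing attributes like this with a deny-by-default fallback authorization policy means that an endpoint someone forgets to decorate fails closed rather than open.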
Regular Penetration Testing Helps Secure Internal Applications
Employ an independent third-party security firm to perform penetration testing against the production and non-production environments quarterly to ensure the deployed infrastructure doesn’t have any vulnerabilities. The list of known vulnerabilities changes regularly, so it’s important to test on a recurring basis. Security is, and has always been, a cat-and-mouse game between hackers and security professionals. What works today may not work tomorrow.
Continuous Integration / Continuous Deployment
Access to the production environment should be restricted to situations where there are no other options due to a significant change in infrastructure. Any time someone changes the environment, a new vulnerability could be introduced, whether deliberately or unintentionally. To ensure production changes don’t impact security, changes to the code should be applied in an automated way, using a build pipeline (continuous integration) and automated deployments (continuous deployment).
Further, continuous integration allows automated testing to run against the software being built, so issues are caught earlier in the lifecycle, before bugs and potential security issues make it to production. There are even third-party tools that can test your code for common security errors – more on that in a bit.
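For example, a lightweight integration test can assert that a PII endpoint refuses anonymous requests, so the build itself fails if authentication is ever accidentally removed. A minimal sketch, assuming xUnit and the Microsoft.AspNetCore.Mvc.Testing package; `Program` and the route are placeholders for your own application:

```csharp
// Build-time safety net: anonymous requests to a PII endpoint must be
// rejected. Assumes xUnit and Microsoft.AspNetCore.Mvc.Testing.
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class EndpointSecurityTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public EndpointSecurityTests(WebApplicationFactory<Program> factory) =>
        _factory = factory;

    [Fact]
    public async Task Customer_endpoint_rejects_anonymous_requests()
    {
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/api/customers/1");

        Assert.True(
            response.StatusCode is HttpStatusCode.Unauthorized
                                or HttpStatusCode.Forbidden,
            $"Expected 401/403 but got {(int)response.StatusCode}");
    }
}
```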
Additionally, a properly configured continuous integration cycle requires a second developer to review code changes before they are merged, as part of a peer review cycle (also known as a Pull Request). This gives a second set of eyes a chance to catch purposeful or inadvertent changes that would introduce a new vulnerability. Continuous integration should also require that pull requests be linked to work items, so the reviewer can verify that the changes in the pull request are targeted specifically at the feature being addressed; anything done outside of that scope should be questioned and/or reverted.
Finally, continuous deployment should require a release manager to determine when and if code has been adequately reviewed and tested and is ready to be pushed to production, adding a third set of eyes to the release cadence. The deployment process itself should then be fully automated: changes to both the application’s source code and the database are performed by the pipeline agent running on the involved servers, all the way up to production. This reduces the likelihood of manual mistakes that can cause production outages or security issues.
Only high-level engineers should have access to production, only for a valid reason, and only while it is absolutely necessary. It is often best for a fourth person to be the only one able to grant that access: someone who is responsible at a high level for security across the entire project. Access should be logged in a central auditing database, so that anyone granted direct access to an environment has their username, the timestamp access was granted, the timestamp access was revoked, and the reason for access recorded somewhere for future investigation if an incident occurs.
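The shape of such an audit record can be very simple. A hypothetical sketch (the class and field names are illustrative, not a prescribed schema):

```csharp
// Illustrative shape of a central access-audit record. The point is that
// every grant of production access leaves a durable, queryable trail.
using System;

public class ProductionAccessAuditEntry
{
    public string Username { get; set; } = "";
    public string Environment { get; set; } = "";     // e.g., "Production"
    public string Reason { get; set; } = "";          // ticket / work item reference
    public string GrantedBy { get; set; } = "";       // the security owner who approved
    public DateTimeOffset GrantedAtUtc { get; set; }
    public DateTimeOffset? RevokedAtUtc { get; set; } // null while access is active
}
```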
Additionally, any time someone accesses production manually, they should be observed by another person who is able to identify mistakes or potentially purposeful changes that could introduce security risks. Access to production should be revoked from any engineer as soon as the required manual changes have been made and deemed successful.
Secure Internal Applications with Static Code Analysis
As part of a continuous integration pipeline, static code analysis should be used to detect common coding practices that lead to hard-to-detect vulnerabilities. Tools like SonarQube can catch this sort of programming practice early in the lifecycle, allowing developers to respond to automated feedback, and, if implemented correctly, will automatically prevent potentially vulnerable code from making it to the production environment. The CI/CD pipeline should therefore be configured with automatic gate checks that prevent pull requests containing known vulnerabilities from being merged into the main branch at all, helping the team avoid common mistakes that would introduce new security exploits as the software changes over time.
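To make this concrete, here is the classic kind of finding such analyzers report: user input concatenated into SQL. A minimal sketch of the flagged pattern and its parameterized fix (table and column names are illustrative):

```csharp
// The kind of issue a static analyzer flags: tainted input flowing into a
// SQL string (injection risk), and the parameterized fix.
using Microsoft.Data.SqlClient;

public static class CustomerLookup
{
    public static SqlCommand BuildLookup(SqlConnection conn, string email)
    {
        // Flagged: user input concatenated into SQL (SQL injection).
        // var cmd = new SqlCommand(
        //     "SELECT * FROM Customers WHERE Email = '" + email + "'", conn);

        // Fix: a parameterized query; the input is never parsed as SQL.
        var cmd = new SqlCommand(
            "SELECT * FROM Customers WHERE Email = @email", conn);
        cmd.Parameters.AddWithValue("@email", email);
        return cmd;
    }
}
```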
Have a Dedicated Architect with Security Expertise
A development team with a lot of responsibility but relatively little security experience and expertise can open your organization up to danger. Safeguarding the amount and kinds of data such a team is responsible for is a complex task, even for a large team with extensive security and architecture training. Teams should always be augmented with the proper skill sets to ensure they are not making mistakes, and they should be well trained in the common threats and best practices involved in building such a complex system.
An architect should be reviewing developer check-ins, establishing best practices, and coaching the team to ensure only the highest-quality code makes it through the deployment gate. A publicly exposed system needs several sets of eyes to ensure it follows best practices. Don’t let gaps in one developer’s knowledge create security holes for the entire project.
Engineer Training on How to Secure Internal Applications
The entire development & operations team should be trained regularly on security best practices and the latest threats. This training should also come with project time to implement what they learn along the way. Prioritize any tasks identified which would improve the security of the deployment infrastructure and the underlying software solutions. Security Architects should help the team identify and prioritize the biggest risks first, balancing that time with ongoing feature changes.
Platform and Operating System Updates
Maintain the latest updates to all platforms and operating systems involved in the day-to-day operation of the system. Security vulnerabilities are found and fixed, sometimes several times a week, in packages that are entirely outside of your team’s control. Ensure the development team is updating all packages and runtimes.
For instance, make sure the team is using the latest release of .NET and that the latest security updates are applied. Use a Long-Term Support version that is still supported whenever possible, and never knowingly continue using a development platform that has gone out of support. Ensure that all NuGet, Maven, npm, and similar packages are up to date, that the team performs these updates as often as possible, and that the relevant operating systems are kept current.
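For .NET teams specifically, the SDK itself can surface stale and known-vulnerable dependencies; these checks are easy to run locally or as a CI step:

```
# Run from the solution or project directory; both commands ship with
# recent .NET SDKs.
dotnet list package --outdated     # packages with newer versions available
dotnet list package --vulnerable   # packages with known security advisories
```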
If using Docker, ensure your system is rebuilt and redeployed with the latest base images on a regular basis. Of course, if you’re using Azure App Service or Azure Functions, there isn’t much to worry about here, as the platform handles the underlying patching for you.
Maintain Access Control Lists or VPN-only Access for Lower Environments
Since few people need access to the lower environments for testing, it’s highly recommended that they be treated with heightened security: interim software changes can temporarily leak PII, keys, or tokens. This is especially true if the lower environments contain data restored from production backup images, since they then combine production data with untested or not-well-tested code changes from the development team; in that case, they should be treated with even more security than you’d give the production environment. Maintaining access control lists or VPN-only access for lower environments is a great way to prevent access to these environments by anyone outside of your engineering team.
Anonymize Lower-environment Test Data
As an additional security measure to the above, it is better to prevent real PII from ever being used in lower environments. Therefore, use an anonymization/data-scrubbing tool against the database backup before moving it to the lower environment. The anonymization process should find and scramble the data in any PII columns in a non-reversible way, destroying the database’s value to hackers while maintaining its value for testing. Since changes in those environments haven’t been fully tested yet, this ensures that interim threats don’t add to the possibility of PII leaking from lower environments.
Tools like Red Gate Data Catalog can help you identify PII in the database and categorize it, while tools like Red Gate Data Masker can help you anonymize and scrub the data before moving it to a lower environment for testing.
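If a commercial masking tool isn’t an option, a homegrown scrub can follow the same principle. A minimal sketch of non-reversible scrubbing for a single email value (the naming scheme is illustrative); run it against a restored copy of the backup before it reaches the lower environment:

```csharp
// Minimal sketch of non-reversible scrubbing for a PII value. Hashing gives
// a stable but meaningless replacement, so test data stays internally
// consistent (same input -> same fake value) while the original value
// cannot be recovered.
using System;
using System.Security.Cryptography;
using System.Text;

public static class EmailScrubber
{
    public static string Anonymize(string email)
    {
        byte[] hash = SHA256.HashData(Encoding.UTF8.GetBytes(email));
        string token = Convert.ToHexString(hash)[..12].ToLowerInvariant();
        return $"user-{token}@example.invalid"; // .invalid is a reserved TLD
    }
}
```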
Data Encryption at Rest
Any PII inside the database should also be encrypted on disk. Many relational database systems, such as SQL Server and Azure SQL, now support encrypting data at rest, and these features should be used wherever possible. Combine this with tools like Red Gate Data Catalog to help identify the data that needs special treatment. That way, disks, disk images, and database backups are still worthless to hackers.
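On SQL Server and Azure SQL, Transparent Data Encryption (TDE) is the usual mechanism, and it is easy to verify. A small sketch that checks the encryption state from application code (the connection string is a placeholder):

```csharp
// Verification sketch: query SQL Server's DMV to confirm Transparent Data
// Encryption is active. encryption_state = 3 means "encrypted".
using Microsoft.Data.SqlClient;

public static class TdeCheck
{
    public static bool IsDatabaseEncrypted(string connectionString)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();

        using var cmd = new SqlCommand(
            @"SELECT encryption_state
              FROM sys.dm_database_encryption_keys
              WHERE database_id = DB_ID()", conn);

        object? state = cmd.ExecuteScalar(); // null if TDE was never configured
        return state is int encryptionState && encryptionState == 3;
    }
}
```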
Bring Your Own Key Data Encryption
If you are working in a multi-tenant environment, be sure to look into Azure SQL Transparent Data Encryption with Bring Your Own Key (customer-managed keys), which allows each customer to provide the key used for their own database in your multi-tenant environment. This offers your customers the ability to secure and isolate their data from other tenants in your cluster.
Cloud-native Threat Detection Helps Secure Internal Applications
Both AWS and Azure have threat-detection services that watch for worldwide trends in security exploits active in the wild. Services like these work as a spam filter for your environment, constantly adapting to new security threats as they emerge in the real world; Azure Sentinel and Amazon GuardDuty are the services in each respective cloud. If your application is deployed in the cloud, one of these services should be enabled and configured to monitor your production and lower-level environments, so it can alert you to, and block, emerging threats in real time.
Social Engineering, Malware, Phishing, etc.
Software is only as secure as its users. All of your users should receive proper security training so they know how to identify scams that might expose critical office data, both in your maintained application and in other systems that may be targeted by malicious attacks. If your users don’t know how to avoid phishing, malware, and other such scams, your software will never be 100% secure. KnowBe4 is a training platform that Clear Measure uses and recommends for ensuring all staff can recognize threats before those threats can take advantage of users.
Incident Response Plan
Your team should have an incident response plan ready in case of a security incident. This plan should contain the following at minimum:
- Identify stakeholders (engineering, management, anyone who should be notified during/after an incident)
- Regular testing cadence (penetration testing, static code analysis, automated test cases, etc.)
- Expertise engagement (engage with a team of security professionals or law enforcement that can help as needed)
- Conduct an analysis (review all data, look for forensic clues, identify any leaked information, determine impact)
- Establish root cause (identify point of entry and how entry was made and any mitigating factors)
- Continue to improve (ensure the root cause is satisfied and any other mitigation steps are taken to prevent recurrence)
Conclusion
Even with the best practices in place, security mechanisms can fail, mistakes can be made, hackers can become more clever, and third-party packages can have zero-day exploits that spread faster than updates can, so having an incident response plan in place will ensure the team knows what to do if a breach is found. Minimizing the number of ways external intruders can access the system is the best way to reduce the attack surface that allows data leakage, ransomware, or other attacks to occur. Therefore, Clear Measure highly recommends that any software remain behind a firewall/ACL or VPN until all of the above solutions are in place.
Further, these recommendations are general and non-exhaustive. Your own application may have complications that this post didn’t account for. Be sure to consult with us or other security experts and architects to review your deployed system and ensure all best practices are being honored.
We hope you found this post informative. It covers just a baseline of things your team should consider doing. If there are specific challenges you and your team face, the architects and engineers here at Clear Measure would be happy to work with you. Reach out to us now to find out all the ways we can help!
Credit
Thanks to our in-house Security Architect, Troy Vinson, for helping gather the research needed for this post!
Originally published October 26, 2021. Information refreshed May 11, 2022.