Developing More Secure Applications

Try to think of the most secure application. What comes to mind? For me, it is a simple database with no permitted users, not connected to the internet, on a computer in a locked bank vault. That application has no practical functionality for 99% of use cases. How can we have a secure application that is still practical, functional, and usable? According to Techopedia, the Software Development Lifecycle (SDLC) is a “conceptual framework describing all activities in a software development project from planning to maintenance”. The SDLC is an overarching concept that covers both agile and DevOps; as development continues to adapt and evolve, so does the SDLC. As DevOps shifts to DevSecOps, the SDLC shifts to the Secure Software Development Lifecycle (SSDLC). The SSDLC adds a security contribution to each phase of the SDLC; developers must now understand foundational security concepts so they can apply them during the development and testing of applications.

Security incidents come about in a variety of ways; human error is commonly cited as a leading cause. In addition to social engineering, human error can manifest as coding errors and application vulnerabilities. One cause of application vulnerabilities is that developers lack security knowledge. Conversely, Information Security teams have security knowledge but do not develop applications. Many applications go through a security review before going to production, but a review cannot catch every possible issue. Software architects should consider security as part of the system architecture, yet that too cannot cover every application security issue. When an architect designs an application with the requirement that sensitive data be encrypted, it falls on the developer to correctly implement the requirement, which means the developer must know what counts as sensitive data. In developing an application, a developer may grant a wide array of permissions to facilitate testing and speed development; if those permissions are not sufficiently narrowed before release, the oversight may not be caught until after the application has been compromised and the excessive permissions used maliciously.

We cannot expect developers to know as much about security as the Info Sec team, but we can begin to create better applications by following secure coding practices. Some best practices, like the principle of least privilege, may be known but not understood, or simply considered the architect’s problem. It is not difficult to find guides for secure coding practices. Snyk, a company aimed at integrating security tools into development tools, has an article on the issue, and other application-security-focused companies publish similar guides. One of the most well-known initiatives, the Open Worldwide Application Security Project (OWASP), not only produces a list of the top 10 most common web vulnerabilities but also provides a comprehensive quick reference guide for secure code.

All of these are good resources, and we don’t need to reinvent the wheel when it comes to secure coding. Developers already have many requirements to meet; performance, usability, reliability, and maintainability are a few. Security must be added to that list to reduce the openings available to attackers. To develop secure applications, developers need to familiarize themselves with at least the following concepts: the principle of least privilege, proper input validation, proper configuration of application components, use of trusted third-party components, and appropriate logging levels.

The principle of least privilege is probably one of the most well-known concepts in security. It states that a subject should be granted only the permissions and privileges needed to perform its responsibilities. A subject can be a person or an application. Applying the principle of least privilege to an application that reads inventory means the application does not have write permission on the inventory; it also means the developer should not have write access. The principle applies to the users of your application as well: grant only the permissions each user needs to interact with your application, because not everyone needs administrative permissions.

Proper input validation, or “don’t trust user input,” is gaining more recognition. One of the most common attack types on web applications is SQL injection; there is even an XKCD on the topic. The comic effectively highlights that when inputs are passed directly to a database or other system, a user can craft input in ways that have unintended consequences. To mitigate this, any data from an untrusted source, like the user, should be sanitized so that characters with special effects, such as a quotation mark that can terminate a string, are properly encoded to limit unintended behavior.
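To make the injection risk concrete, here is a minimal sketch using Python’s standard sqlite3 module; the table, names, and crafted input are illustrative, not from any real application. The `?` placeholder is sqlite3’s parameter substitution, which keeps the input as data rather than letting it become part of the SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Crafted input in the spirit of the XKCD: the quote tries to break
# out of the string literal and change the query's meaning.
user_input = "nobody' OR '1'='1"

# Unsafe: string formatting lets the input become part of the SQL itself.
unsafe = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the ? placeholder treats the input strictly as data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('admin',)] -- the injected OR clause matched every row
print(safe)    # []           -- no user is literally named that
```

The same placeholder style (or an equivalent prepared-statement API) exists in virtually every database driver, which is why parameterized queries are the standard mitigation rather than hand-rolled escaping.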

It is increasingly common to leverage other components when developing applications; this helps reduce effort and complexity. It is still necessary to fully understand these components and how to set them up. An improperly configured component opens new avenues for attackers to gain control of your application. One common misconfiguration is leaving the default login information in place; changing default credentials is a low-effort, high-impact change that increases security. Other settings, such as how a component communicates with your application and in what format, are also important to understand in order to reduce misconfigurations that lead to vulnerabilities.

Using components, especially third-party components, means you are running someone else’s code in your application. More companies and applications now require a Software Bill of Materials (SBOM); to compile one you will need to track components and their versions. More importantly, tracking each component’s source and version lets you ensure you are not using libraries or code with known vulnerabilities that could be used to compromise your application.

Any interaction with external components should have sufficient logging around it so a developer can debug the interaction. Logging helps developers and operations understand the state of the application during failures and trace errors. Developing and debugging applications is easier when the application’s state is easily understood; this is one of the reasons a “debug” log level exists. An application in production, however, needs different logging than it does during development. Leaving DEBUG-level logging enabled can have the unintended consequence of handing an attacker information they can use to force an error or compromise your application. Developers, operations, and security teams rely on logs to understand the state of an application and detect potential attackers. At the same time, information in logs may give an attacker exactly what they need to compromise the application. Understanding these trade-offs when deciding what to log allows developers to enable internal teams while limiting an attacker’s ability to use logs for information gathering.
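One way to keep production logging leaner than development logging is to drive verbosity from configuration rather than code. The sketch below uses Python’s standard logging module; the APP_LOG_LEVEL environment variable name and the example messages are hypothetical, chosen only to illustrate the DEBUG-versus-INFO split.

```python
import logging
import os

# Hypothetical convention: APP_LOG_LEVEL selects verbosity,
# defaulting to INFO so production stays quiet unless overridden.
level_name = os.environ.get("APP_LOG_LEVEL", "INFO").upper()
logging.basicConfig(
    level=getattr(logging, level_name, logging.INFO),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("inventory")

# DEBUG detail is handy in development but too revealing for production;
# at the default INFO level this line is suppressed entirely.
log.debug("connecting with DSN %s", "postgres://app@db/inventory")

# INFO records the operational facts teams need without leaking internals.
log.info("inventory sync started")
```

Flipping one environment variable then gives developers full detail locally while production emits only the operational breadcrumbs that security and operations teams rely on.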

Application security is not a one-and-done effort. The concepts discussed here are a starting point to give developers a better understanding of security considerations. Answering the following questions can help developers discover new ways of thinking about the level of security their applications need.

  • “What types of data will my application be handling?”

  • “Who are the users of my application?”

  • “Where is the application running?”

If you are interested in learning more, there are many great resources. The OWASP quick reference guide is a great starting point. Another resource I found particularly interesting is the free book Secure Programming HOWTO by David Wheeler; there are a few different ways of accessing it online. Applying the concepts covered here will allow developers to begin to build more secure applications and be prepared for the threats they face every day.