Closing the Gap: Exploring SCA Limitations and the Rise of Runtime Security

This is the second part of a blog series in which we compare Oligo’s security approach with existing security solutions.
In the previous blog, we compared RASP with Oligo.
Introduction
Software Composition Analysis (SCA) is an automated process that scans and identifies open-source components in an application's codebase. It is a well-established and widely used security tool that gained popularity during the late 1990s with the increasing usage of open-source software (OSS). The main objective of SCA is to identify and manage third-party software components and open-source libraries used in a software application. It involves scanning the application's dependencies and analyzing their licenses, vulnerabilities, and other metadata.
SCA, which relies solely on the application's source code, offers only a limited, static perspective of the application's structure and potential issues. This approach ignores crucial runtime contextual information. The growing complexity of cloud applications and the ever-expanding number of CVEs result in a never-ending backlog of security issues with no effective prioritization. This places an additional burden on development and security teams, who must spend valuable time manually prioritizing an extensive list of tickets, many of which turn out to be irrelevant or false positives.
In this article, we will cover SCA, its concepts and implementation details, its disadvantages, and why it’s unsuitable for modern and scaled environments.
Table of Contents
- What is SCA?
- How Does it Work?
- Can Someone Reduce the Noise?! SCA Shortcomings
- Covering the Gap Through Runtime: Why You Should Scan It While You Run It
- Conclusions
What is SCA?
SCA products identify and manage the software components and dependencies within a software application. This involves analyzing the composition of the software packages, libraries, and frameworks that a particular application or system uses.
SCA is primarily concerned with identifying and assessing the third-party or open-source components integrated into an application. It plays a crucial role in managing the dependencies and risks associated with third-party and open-source components, helping organizations maintain a secure and compliant software supply chain.
How Does it Work?
SCA products generally operate in the following manner: First, an engine scans the source code and identifies the artifacts used in the application. The engine then detects the open-source components and their versions, storing this information and creating a catalog of the open-source software components used in the scanned application. Next, the catalog of the application's open-source software (OSS) components is compared to databases that contain information on known security vulnerabilities for each component, licensing requirements, and more. Typically, this involves comparing against known security vulnerabilities (CVEs) tracked in the National Vulnerability Database (NVD) for security assessment purposes. The results, along with risk mitigation guidance and recommendations, are provided to the SCA users. The output usually includes a Software Bill of Materials (SBOM) that provides details on all the open-source components and their associated attributes used in the scanned applications.
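The pipeline described above can be sketched in a few lines. This is a minimal illustration, not a real SCA engine: the component catalog and the vulnerability database are toy stand-ins for a scanner's output and a feed such as the NVD.

```python
# Minimal sketch of an SCA pipeline: catalog components, then match
# them against a vulnerability database. All data here is illustrative.

# Step 1: components discovered by scanning the codebase (name, version)
catalog = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.31.0"},
]

# Step 2: a toy stand-in for a CVE feed such as the NVD
vuln_db = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
}

# Step 3: compare the catalog against the database and report findings
def scan(catalog, vuln_db):
    findings = []
    for component in catalog:
        key = (component["name"], component["version"])
        for cve in vuln_db.get(key, []):
            findings.append({"component": component["name"],
                             "version": component["version"],
                             "cve": cve})
    return findings

print(scan(catalog, vuln_db))
# [{'component': 'log4j-core', 'version': '2.14.1', 'cve': 'CVE-2021-44228'}]
```

A real tool would also attach license metadata to each catalog entry and serialize the catalog itself as an SBOM (e.g., in CycloneDX or SPDX format).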
Can Someone Reduce the Noise?! SCA Shortcomings
While SCA tools provide valuable insights into software application composition and security, they have some major disadvantages. Here are a few limitations and challenges associated with SCA tools:
Noise
Organizations are flooded with security tickets that lack contextual information and are burdened with numerous false positives. More than 85% of the security tickets generated by SCA are inconsequential, either because the vulnerable code is not used by the application or because it is unreachable in the application's runtime context. As noted in Gartner's Magic Quadrant for Application Security Testing report, this high alert frequency wastes development and security teams' time.
False Positives and Negatives
SCA tools may generate false positives, indicating the presence of vulnerabilities or issues that are not relevant or exploitable in the specific application context. Conversely, they may miss certain vulnerabilities or issues, leading to false negatives. This can require manual verification and analysis to determine the true risk level of identified components.
Incomplete and Outdated Data
SCA tools rely on databases and repositories to gather information about software components, licenses, and vulnerabilities. However, these databases may not always have comprehensive coverage, leading to incomplete or missing data. Additionally, SCA tools may struggle to keep up with the constant stream of new software releases and vulnerabilities, resulting in outdated information.
The Codebase Does Not Reflect the Runtime
The codebase does not reflect the runtime environment. One good example is npm's semantic versioning, where a developer can declare a loose version range for a required package that can resolve to several different versions. When using package-manager features like semantic versioning, there is no guarantee which exact version will be used at runtime. Other examples of environmental factors that the codebase does not necessarily expose are the host's operating system, its version, and its configuration; external services and APIs; and security policies and access controls such as authentication mechanisms, user permissions, encryption protocols, and firewall configurations.
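The semantic-versioning ambiguity is easy to demonstrate. The toy matcher below approximates npm's caret range ("^1.2.3" matches any version at or above 1.2.3 with the same major version); real resolvers handle pre-releases and other edge cases, so this is only a sketch.

```python
# Illustrative sketch: an npm-style caret range ("^1.2.3") can resolve
# to several versions, so the code alone cannot tell you which version
# will actually run. (Toy matcher; real resolvers are more involved.)

def caret_matches(range_base, candidate):
    base = tuple(int(x) for x in range_base.split("."))
    cand = tuple(int(x) for x in candidate.split("."))
    # Same major version, and at least the declared minimum
    return cand[0] == base[0] and cand >= base

available = ["1.2.3", "1.2.9", "1.4.0", "2.0.0"]
print([v for v in available if caret_matches("1.2.3", v)])
# ['1.2.3', '1.2.9', '1.4.0']
```

An SCA tool scanning a manifest that declares `^1.2.3` cannot know which of these three versions, each with a potentially different vulnerability profile, is the one installed in production.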
SCA Reveals Only Known Vulnerabilities
As mentioned above, SCA operates by scanning the codebase and cataloging the OSS libraries that exist within the application, along with their versions and relevant metadata. Next, the SCA tool collects all the CVEs reported against the artifacts it found. This approach covers only known vulnerabilities that were properly reported, leaving users blind to vulnerabilities without CVE IDs.
Limited Coverage
SCA tools produce a report only for the applications they are configured to scan. Modern software architectures, such as microservices, rely on third-party components and containers that are deployed alongside the application. SCA is not aware of those components and therefore excludes them from the report, leaving blind spots on production servers.
Lack of Runtime Context: SCA Can't Pinpoint Where Vulnerabilities Are Present
SCA tools are unable to understand the application's cloud context, architecture, and specific usage patterns, which can impact their ability to accurately assess security issues. These tools solely rely on insights from the application's codebase, lacking access to crucial runtime contextual information. This prevents them from determining important factors such as the application's cloud context, which refers to the application’s exposure to the internet, access to sensitive data, or privileged permissions within cloud environments. Additionally, SCA tools lack awareness of the execution phase of libraries, such as whether they are merely installed within the image, loaded into memory, or actively running.
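The "installed vs. loaded" distinction can be observed from inside a running Python process. This sketch compares the packages installed in the environment (all that SCA would see) with the modules the process has actually imported; the name normalization is a simplification, since package and module names do not always match.

```python
# Sketch: distinguishing "installed" from "loaded" libraries at runtime.
# SCA sees everything installed in the environment; a runtime tool can
# inspect sys.modules to see what this process has actually imported.
import sys
from importlib import metadata

# All distributions present in the environment (SCA's view)
installed = {dist.metadata["Name"] for dist in metadata.distributions()}

# Modules actually loaded into this process's memory (runtime view)
loaded = set(sys.modules)

# Installed packages never imported here (naive name normalization)
never_loaded = {name for name in installed
                if name and name.lower().replace("-", "_") not in loaded}
print(f"{len(installed)} installed, {len(never_loaded)} never loaded here")
```

A vulnerability in a package that is installed but never loaded cannot be triggered by this process, which is exactly the context SCA lacks when it ranks findings.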
The absence of answers to these critical questions hinders the effective assessment of risks and their prioritization according to modern security principles. As a result, teams often have to resort to using additional tools and correlating the outputs, or performing manual analysis to address these gaps.
Insufficient Prioritization
As mentioned earlier, SCA tools have limited runtime intelligence because they integrate only with the application's source code. This leads to insufficient prioritization of vulnerabilities, as these tools rely on static scoring systems like CVSS. Prioritizing an extensive list of tickets using CVSS alone is inaccurate and produces noise and false positives: a CVSS score describes the severity of an issue, not its actual risk.
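The difference between severity and risk can be made concrete. In the sketch below, runtime context reweights a static CVSS score; the specific weights and context fields are illustrative assumptions, not any real scoring standard.

```python
# Sketch of risk-based prioritization: weighting a static CVSS score by
# runtime context (is the library loaded? is the service internet-facing?).
# The weights and fields are illustrative, not a real scoring standard.

def risk_score(finding):
    score = finding["cvss"]
    if not finding["library_loaded"]:
        score *= 0.1   # vulnerable code never enters memory
    if finding["internet_facing"]:
        score *= 1.5   # exposed services are more urgent
    return round(score, 1)

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "library_loaded": False, "internet_facing": False},
    {"cve": "CVE-B", "cvss": 6.5, "library_loaded": True,  "internet_facing": True},
]
findings.sort(key=risk_score, reverse=True)
print([f["cve"] for f in findings])
# ['CVE-B', 'CVE-A']
```

By CVSS alone, the critical CVE-A would top the queue; with runtime context, the moderate but loaded and exposed CVE-B is the one that actually deserves attention first.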
To overcome these limitations, organizations often rely on the expertise of software development teams and security professionals to interpret the SCA reports and make informed decisions on the actions to take. Manual review and analysis of the reports, combined with the organization's specific requirements and risk appetite, help determine the appropriate steps to address the identified issues. Oligo addresses these limitations through runtime analysis.

Covering the Gap Through Runtime: Why You Should Scan It While You Run It
A runtime security approach effectively addresses the shortcomings of SCA by providing comprehensive, real-time awareness of the application's behavior, cloud context, and library execution state. By integrating into the runtime environment and closely observing the application's activity, it produces highly accurate findings, while contextual awareness ensures a clear understanding of the security posture and the real urgency of each risk. This approach shrinks the endless backlog of security issues and provides clear, understandable risk prioritization.
Oligo's innovative runtime detection mechanism takes a granular perspective by examining the behavior of each OSS component within the application, empowering application security with a zero-trust approach to detect both known and unknown threats. Leveraging eBPF technology, Oligo's integration with the runtime environment is efficient, frictionless, and secure by design. Moreover, unlike other technologies, eBPF provides comprehensive coverage of the entire host, leaving no blind spots in production.
Conclusions
SCA plays a crucial role in managing software composition and evaluating application security risks. However, SCA is blind to the application’s execution behavior and context. To overcome the limitations of SCA and establish a strong security strategy, it is essential to integrate runtime analysis into the organization’s security program.
Runtime analysis involves active observation of the application while it is running, enabling real-time threat detection and proactive mitigation. By performing runtime analysis, it becomes possible to address both the static and dynamic aspects of security. For instance, Oligo collects comprehensive information about all the libraries used in the application and enriches the Software Bill of Materials (SBOM) and the risk list with runtime insights. This approach cuts through the noise and enables accurate risk prioritization that is practical and understandable for development and security teams, resulting in a robust security strategy that covers the full spectrum of static and dynamic elements in application security.
By adopting Oligo’s runtime security product, organizations gain immediate visibility into the application's behavior, empowering them to proactively identify and mitigate security issues. Leveraging eBPF technology further enhances this process, ensuring seamless, frictionless, efficient, and secure runtime analysis by design.




