In today’s highly regulated technology landscape, the Software Development Life Cycle (SDLC) must be executed with rigor, discipline, and an increasing focus on compliance. Organizations are increasingly required to demonstrate adherence to governance frameworks, regulatory standards, and internal risk management policies.
To meet these expectations, SDLC processes must incorporate robust controls that not only ensure product quality and security but also provide clear evidence of compliance. However, many teams struggle with identifying which risks are most critical and how best to mitigate them consistently across projects.
This project aims to present a curated set of common risks encountered during the SDLC, along with their corresponding mitigations. By establishing a standardized library of controls, the goal is to streamline compliance efforts, reduce uncertainty, and promote safe, repeatable development practices aligned with regulatory expectations.
Risk Catalogue
Identify potential risks in your AI implementation across operational, security, and regulatory dimensions.
Security
Unauthorised Change
Unauthorised changes to source code, configuration, or build artefacts represent a significant risk to the integrity of software systems. Such changes may be introduced deliberately by malicious actors or inadvertently through inadequate controls, and can result in the deployment of compromised or untested software into production environments.

Description
Unauthorised change encompasses any modification to software, configuration, or related artefacts that has not been subject to the appropriate review, approval, and tracking processes. This includes changes made directly to production systems, alterations to source code outside of controlled workflows, tampering with build pipelines or deployment artefacts, and modifications to configuration files that bypass formal change management processes.
Unlike insider threat, which focuses on the actor, unauthorised change focuses on the act itself: the introduction of a change that lacks a verifiable, auditable chain of custody. The risk may materialise through a variety of vectors:
• Direct repository tampering – modifying source code or history in a version control system without going through a controlled workflow
• Build pipeline injection – altering build scripts, dependencies, or artefacts between the point of development and deployment
• Configuration modification – changing application or infrastructure configuration outside of version-controlled processes
• Credential compromise – an external attacker using stolen credentials to introduce changes while masquerading as a legitimate committer
The consequences of unauthorised change can be severe: malicious code may reach production undetected, compliance audit trails may be broken, and the organisation may be unable to demonstrate the integrity of its software at any given point in time.

Consequences
• Integrity compromise – Untrusted or malicious code may be deployed into production systems, potentially affecting customers, counterparties, or financial markets.
• Regulatory breach – Inability to demonstrate controlled change management processes may constitute a breach of FFIEC, SOX, or PCI DSS requirements.
• Audit trail failure – Without a verifiable record of who changed what and when, forensic investigation following an incident becomes significantly harder.
• Operational disruption – Unreviewed changes introduce a higher probability of defects, misconfigurations, or incompatibilities reaching production.
Regulatory and Compliance
Insider Threat
Authorized personnel (developers, contractors, administrators, or other trusted users) with legitimate access to source code repositories, development environments, production systems, or sensitive data may intentionally or unintentionally compromise the confidentiality, integrity, or availability of software assets.

Description
Insider threat includes risks of malicious code injection, unauthorized data exfiltration, credential misuse, sabotage of build/deployment pipelines, or negligent security practices that expose systems to exploitation. It also encompasses scenarios where external attackers have compromised the credentials of legitimate users, enabling them to conduct attacks while masquerading as trusted personnel with valid access. The trusted position and technical knowledge of insiders, or of attackers leveraging insider credentials, makes detection difficult and the potential impact significant.
• Malicious code injection – inserting backdoors, vulnerabilities, or malicious logic into applications or infrastructure
• Compromised credentials – attackers using stolen or phished developer/admin credentials to access systems and data
• Data exfiltration – stealing source code, intellectual property, customer data, or sensitive business information
• CI/CD pipeline manipulation – tampering with build processes, deployment pipelines, or supply chain components to inject malicious code
• Cloud/infrastructure misconfiguration – accidentally or intentionally exposing databases, storage, or services to unauthorized access

Consequences
The consequences of an insider threat materializing at a financial institution can be severe:
• Direct financial losses – Fraudulent transactions, theft of funds, unauthorized wire transfers, or manipulation of accounts can result in immediate monetary losses to the institution and its customers.
• Breach of data privacy regulations – Unauthorized access to or exfiltration of customer PII can lead to significant fines under regulations such as GDPR, CCPA, and GLBA, alongside mandated breach notifications and regulatory scrutiny.
• Violation of financial regulations – Insider actions compromising system integrity, audit trails, or customer data can breach banking regulations (e.g., SOX, PCI DSS, Basel III) and trigger enforcement actions from regulatory bodies.
• Reputational damage – Public disclosure of insider attacks, particularly those involving customer funds or data, can severely erode customer trust, leading to account closures, deposit flight, and long-term brand damage.
• Operational disruption – Sabotage of critical banking systems, payment processing infrastructure, or core applications can halt operations, impacting customer service and transaction processing capabilities.
• Loss of competitive advantage – Theft of proprietary trading algorithms, risk models, customer insights, or strategic plans can benefit competitors and undermine market position.
• Legal liabilities – The institution may face lawsuits from affected customers, shareholders, or partners, as well as potential criminal investigations if insider actions involve fraud or data breaches.

Links
• FFIEC IT Handbook
• Scalable Extraction of Training Data from (Production) Language Models
Mitigation Catalogue
Discover preventative and detective controls to mitigate identified risks in your AI systems.
Preventative
Content Addressable Identities
Software artifacts are identified by cryptographic hashes of their contents, ensuring that any change to an artifact produces a different identity and making tampering immediately detectable.

Description
High-security environments require a tamper-proof identity scheme for software artifacts (e.g. compiled binaries, container images, JAR files, npm packages, Helm charts, configuration bundles). Content addressable identification uses cryptographic hashing (e.g. SHA-256) to derive an artifact’s identity directly from its contents. Unlike mutable labels such as version tags or filenames, these identities are immutable: even a single-byte change produces a completely different hash. This provides the foundation for verifiable integrity of binaries, containers, packages, and other deliverables throughout the software development lifecycle.
Human-readable identifiers (semantic versions, branch names, commit references) remain useful for navigation and convenience but must never be relied upon for security or compliance verification.

Requirements
• Every software artifact MUST be identified by a cryptographic hash (SHA-256 or stronger) of its contents
• Any modification to an artifact MUST produce a completely different identity
• Content addressable identities MUST be immutable and cannot be forged or reassigned to different content
• All systems that store, transfer, or deploy artifacts MUST reference them by their cryptographic identity

Examples & Commentary
• Generate SHA-256 hashes for all build outputs, including container images, compiled binaries, archives, and packages
• Configure container registries and artifact repositories to use content-addressable storage (e.g. Docker Content Trust, OCI image digests)
• CI/CD pipelines should propagate cryptographic identities rather than mutable tags when referencing artifacts across stages
• Implement verification checks at deployment boundaries that confirm the cryptographic identity of an artifact before allowing it to proceed
• Store artifact hashes in a secure, append-only record that serves as the source of truth for artifact identity
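The scheme above can be sketched in a few lines of Python. This is a minimal illustration of deriving and verifying a content-addressable identity; the `sha256:` prefix and function names are illustrative, not any particular registry’s convention:

```python
import hashlib

def artifact_identity(data: bytes) -> str:
    """Derive a content-addressable identity: SHA-256 of the artifact bytes."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_identity: str) -> bool:
    """Recompute the hash at a deployment boundary and compare identities."""
    return artifact_identity(data) == expected_identity

original = b"example build output"
identity = artifact_identity(original)

assert verify_artifact(original, identity)
# Even a single-byte change produces a completely different identity
assert not verify_artifact(b"Example build output", identity)
```

This mirrors how OCI image digests behave in practice: because the digest is derived from the content itself, a registry cannot silently swap the bytes behind a digest reference.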
Software Artifact Provenance
Internally built software artifacts have known and verifiable provenance, establishing a documented chain of custody from source code commit through build and into deployment.

Description
Software artifact provenance answers the question “where did this artifact come from?” for any artifact at any time. Provenance records are created at build time and capture the source commit, repository state, build environment details, build log references, and the resulting artifact identity. These records are immutable once written and stored in a tamper-evident system that prevents retroactive modification.
By linking the cryptographic identity of an artifact (from content addressable identities) to its source and build metadata, software artifact provenance ensures that only artifacts built from known source code through an authorised build process reach production.

Requirements
• Each artifact MUST be traceable back to a specific source code commit, build process, and build environment
• Provenance records MUST be created at build time and be immutable once written
• Provenance records MUST include: the cryptographic hash of the output artifact; the source commit reference; and sufficient build environment context to support traceability, for example the build system URL, build log reference, builder identity, build tool versions, and timestamp

Examples & Commentary
• No artifact should be deployed to production without a corresponding provenance record
• Use a dedicated provenance store or attestation service to maintain build records independently of the CI/CD system
• Implement deployment gates that verify an artifact has a valid provenance record before allowing promotion to production environments
• Periodically audit provenance records against running deployments to confirm that all production artifacts have known origins
• Distinguish between human-friendly identifiers (semantic versioning, commit references) for navigation and cryptographic hashes for security and compliance purposes
• Where human-readable identifiers are used, they should ideally be mapped to their corresponding cryptographic identities in an immutable way, such that the mapping itself cannot be altered or reassigned after the fact
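As a sketch of the build-time step, the following assembles a provenance record linking an artifact’s SHA-256 identity to its source and build context, and hashes the record’s canonical form so later edits are detectable. The field names, commit, and build URL are hypothetical, not a standard attestation format such as in-toto/SLSA:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance(artifact: bytes, commit: str, build_url: str) -> dict:
    """Create a build-time provenance record for one artifact."""
    record = {
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "source_commit": commit,
        "build_system_url": build_url,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the canonical JSON form; storing this hash in an append-only
    # store makes retroactive modification of the record detectable.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["record_sha256"] = hashlib.sha256(canonical).hexdigest()
    return record

rec = make_provenance(b"compiled binary bytes", "9f2c1ab", "https://ci.example/builds/42")
assert rec["artifact_sha256"] == hashlib.sha256(b"compiled binary bytes").hexdigest()
```

A deployment gate would then look up the artifact’s hash in the provenance store and refuse promotion when no matching record exists.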
Requirements Management
Ensures requirements are maintained in an approved repository and that correct usage of that repository is periodically attested.

Map to related risks
• Risk 1: Ungoverned or inconsistent requirements repositories
• Risk 2: Requirements progressing without minimum readiness

Description
This control establishes a single, governed source of truth for requirements and requires periodic confirmation that the repository is actively used and that requirements meet agreed readiness criteria.

Requirements (Expectations)
• An approved requirements repository is designated for the application
• Usage of the repository is periodically attested
• Requirements meet an agreed Definition of Ready before development

Examples
The control supports a “shift left” approach by embedding quality and clarity at the point requirements enter development, rather than relying on downstream inspection.

Links to external standards for controls
• ISO/IEC 27001 – information integrity and governance
• IREB / IIBA requirements management standards

Control 2: Requirements for Release – Reviewed and Agreed (XXXX-01068)
Vulnerability Scanning - SAST
Static Application Security Testing (SAST) analyses application source code, bytecode, or binaries to identify security vulnerabilities, coding flaws, and insecure patterns before software is deployed.

Description
Static Application Security Testing (SAST) examines an application’s source code or compiled artefacts without executing the program. By analysing code paths, data flows, and control flows, SAST tools detect categories of vulnerability such as injection flaws, buffer overflows, hard-coded secrets, insecure cryptographic usage, and race conditions. Because SAST operates on the code itself, it can identify issues very early in the development lifecycle, often at the point a developer opens a pull request, making remediation faster and cheaper than finding defects in later stages. In regulated financial services environments, embedding SAST into the CI/CD pipeline provides continuous assurance that code meets secure coding standards and supports audit evidence of proactive security testing.

Requirements
• SAST MUST be performed against application source code as part of the software development lifecycle
• SAST MUST be automated – manual-only code analysis is not sufficient as a primary control
• The point(s) at which SAST scans are executed MUST be defined based on development practices, risk profile, and pipeline architecture
• SAST scans MUST cover all languages and frameworks in active use across the organisation’s application portfolio
• Findings MUST be classified by severity (e.g. Critical, High, Medium, Low) using an industry-recognised standard such as CWE or CVSS
• A baseline ruleset MUST be maintained, aligned with recognised secure coding standards (e.g. OWASP Top 10, CWE Top 25, CERT Secure Coding Standards), and reviewed at least annually
• False positives MUST be triaged, documented, and suppressed through an auditable process – blanket suppression of finding categories is not permitted
• SAST results, including findings, remediation actions, and exceptions, MUST be retained in accordance with the organisation’s record retention policy

Examples & Commentary
• Pull request gating: Configure SAST to run as a required status check on pull requests. Developers receive immediate feedback on security issues in their own code before review, reducing the burden on reviewers and shortening remediation cycles
• Incremental vs full scans: Use incremental (diff-only) scans on pull requests for fast feedback, complemented by periodic full repository scans to catch issues that span multiple files or emerge from cumulative changes
• Custom rules: Supplement vendor-provided rulesets with organisation-specific rules that encode internal security requirements (e.g. prohibiting use of deprecated internal libraries, enforcing approved cryptographic primitives)
• Developer enablement: Provide IDE plugins or pre-commit hooks so developers can catch common issues locally before pushing code, shifting security further left
• Metrics & reporting: Track mean time to remediate, finding density per repository, and false positive rates to measure programme effectiveness and identify teams or codebases that need additional support

Links
• OWASP Source Code Analysis Tools
• OWASP Top 10
• CWE Top 25 Most Dangerous Software Weaknesses
• NIST SP 800-53r5 SA-11: Developer Testing and Evaluation
• CERT Secure Coding Standards
• FFIEC IT Handbook – Information Security
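Pull-request gating ultimately reduces to a policy decision over tool output. The sketch below assumes a simplified finding format (real tools emit SARIF or tool-specific JSON) and an allowlist of suppressed finding IDs maintained through the auditable exception process:

```python
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(findings: list[dict], suppressions: set[str]) -> list[dict]:
    """Return the findings that should fail the pull-request status check."""
    return [
        f for f in findings
        if f["severity"].lower() in BLOCKING_SEVERITIES
        and f["id"] not in suppressions
    ]

findings = [
    {"id": "F-001", "severity": "Critical", "rule": "sql-injection"},
    {"id": "F-002", "severity": "High", "rule": "hardcoded-secret"},
    {"id": "F-003", "severity": "Low", "rule": "unused-variable"},
]

# F-002 has an approved, documented suppression; F-001 still blocks the merge
blocking = gate(findings, suppressions={"F-002"})
assert [f["id"] for f in blocking] == ["F-001"]
```

Note that suppressions are keyed to individual finding IDs, not rule names: blanket suppression of a whole category would violate the requirement above.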
Vulnerability Scanning - DAST
Dynamic Application Security Testing (DAST) tests running applications by simulating real-world attacks against exposed interfaces to identify exploitable vulnerabilities that cannot be detected through source code analysis alone.

Description
Dynamic Application Security Testing (DAST) analyses applications in their running state by sending crafted requests to exposed endpoints and observing the responses. Unlike SAST, which examines source code, DAST tests the application as an attacker would – through its HTTP interfaces, APIs, and authentication flows – making it effective at finding runtime and configuration vulnerabilities such as injection flaws, broken authentication, security misconfigurations, and sensitive data exposure. DAST is language- and framework-agnostic because it operates against the deployed artefact rather than the code. In regulated financial services environments, DAST provides evidence that applications have been tested against real attack scenarios prior to production release and on an ongoing basis thereafter.

Requirements
• DAST MUST be performed against web applications and APIs as part of the software development lifecycle
• DAST MUST be automated – manual-only dynamic testing is not sufficient as a primary control
• The frequency and trigger points for DAST scans MUST be defined based on release cadence, application risk profile, and deployment model
• DAST tooling MUST be configured to test against the OWASP Top 10 vulnerability categories at a minimum
• Authenticated scanning MUST be performed where applicable to test functionality behind login flows, ensuring coverage of authorisation and session management vulnerabilities
• API-specific scanning MUST be performed for all REST, GraphQL, and other API endpoints, using OpenAPI specifications or equivalent API definitions where available
• DAST scans MUST be executed in environments that do not compromise production data integrity – production-equivalent staging environments are preferred
• DAST results, including findings, remediation actions, and exceptions, MUST be retained in accordance with the organisation’s record retention policy

Examples & Commentary
• Pipeline integration: Trigger DAST scans automatically after deployment to a staging environment as part of the CI/CD pipeline. Gate promotion to production on scan results
• Authenticated vs unauthenticated scans: Run both unauthenticated scans (simulating an external attacker) and authenticated scans (simulating a logged-in user) to maximise coverage of the application’s attack surface
• API contract testing: Feed OpenAPI/Swagger specifications into DAST tooling to ensure all documented endpoints are tested, including edge cases such as malformed inputs, missing authentication headers, and parameter tampering
• Crawl depth & coverage: Configure scan policies to adequately crawl modern single-page applications (SPAs) and JavaScript-heavy frontends, which may require headless browser-based scanning capabilities
• Environment considerations: DAST generates real traffic and may modify application state. Use dedicated staging environments with synthetic data, and ensure scans do not trigger denial-of-service conditions or corrupt shared test data
• Complementing SAST: DAST and SAST are complementary – SAST finds issues in code that may not be reachable at runtime, while DAST finds runtime and configuration issues invisible to static analysis. Both should be used together for defence in depth

Links
• OWASP DAST Tools
• OWASP Top 10
• OWASP Web Security Testing Guide
• OWASP API Security Top 10
• NIST SP 800-53r5 CA-8: Penetration Testing
• FFIEC IT Handbook – Information Security
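One concrete use of an API definition is a coverage check: compare the endpoints documented in the OpenAPI specification against those the scanner actually exercised. A minimal sketch, using a toy spec and a toy scan log:

```python
def untested_endpoints(spec: dict, scanned: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Endpoints documented in an OpenAPI spec but absent from the DAST scan log."""
    documented = {
        (method.upper(), path)
        for path, operations in spec["paths"].items()
        for method in operations
    }
    return documented - scanned

spec = {"paths": {
    "/accounts": {"get": {}, "post": {}},
    "/accounts/{id}": {"get": {}},
}}
scanned = {("GET", "/accounts"), ("GET", "/accounts/{id}")}

# POST /accounts was never exercised by the scan -- flag it before sign-off
assert untested_endpoints(spec, scanned) == {("POST", "/accounts")}
```

A real implementation would also normalise path parameters and strip non-operation keys from the spec, but the principle is the same: the contract, not the crawler, defines the required attack surface.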
Vulnerability Scanning - Dependencies
Dependency and vulnerability scanning identifies known vulnerabilities (CVEs) in third-party dependencies, libraries, container images, and other software components before they are deployed to production.

Description
Modern software relies heavily on open-source and third-party components, which may contain publicly disclosed vulnerabilities (CVEs) or be subject to supply chain compromise. Dependency and vulnerability scanning provides automated, continuous analysis of all software components to detect known security flaws, outdated packages, and license compliance issues. In regulated financial services environments, unpatched vulnerabilities in production systems represent a material operational and compliance risk. Dependency scanning maintains visibility into the software supply chain, enabling rapid response when new vulnerabilities are disclosed.

Requirements
• Application repositories MUST be scanned for known vulnerabilities in direct and transitive dependencies as part of the software development lifecycle
• Dependency scanning MUST be automated – manual-only dependency review is not sufficient as a primary control
• The point(s) and frequency at which dependency scans are executed MUST be defined based on development practices, risk profile, and pipeline architecture
• Container images MUST be scanned for operating system and application-level vulnerabilities as part of the build and deployment process
• Scanning tools MUST be configured to check for vulnerabilities in all relevant ecosystems used by the organisation (e.g. npm, PyPI, Maven, NuGet, Go modules, system packages)
• Vulnerability scanning results MUST be classified by severity and tracked to resolution
• A documented exception and risk-acceptance process MUST exist for vulnerabilities that cannot be immediately remediated, with appropriate approvals and time-bound waivers
• Audit logs of all scan results, remediation actions, and exceptions MUST be retained in accordance with the organisation’s record retention policy

Examples & Commentary
• CI/CD integration: Configure Software Composition Analysis (SCA) tooling to run on every pull request. Builds targeting production branches should fail if critical or high severity CVEs are detected without an approved exception
• Scheduled scanning: Even without code changes, new CVEs are disclosed daily. Recurring scans of the default branch and deployed artefacts ensure newly published vulnerabilities are detected against existing codebases
• Container scanning: Scan base images and final application images in the container registry. Alert on images with known vulnerabilities and enforce policies that prevent deployment of non-compliant images
• Dependency pinning & lock files: Ensure all projects use lock files (e.g. package-lock.json, go.sum, Pipfile.lock) to guarantee reproducible builds and prevent silent dependency changes
• Triage & prioritisation: Not all CVEs are equally exploitable. Organisations should consider reachability analysis and environmental context when prioritising remediation efforts, but must still track all findings to closure
• Complementing Component Inventory: This control works alongside Component Inventory (mi-9), which provides the SBOM. Dependency scanning uses the inventory to detect which components are affected when new vulnerabilities are disclosed

Links
• NIST SP 800-53r5 RA-5: Vulnerability Monitoring and Scanning
• OWASP Dependency-Check
• CycloneDX SBOM Standard
• SPDX SBOM Standard
• NIST Cybersecurity Supply Chain Risk Management (C-SCRM)
• FFIEC IT Handbook – Information Security
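Conceptually, dependency scanning is a join between the dependency list and an advisory feed. The sketch below matches simple dotted versions against a “fixed in” threshold; real SCA tools use full ecosystem-specific version-range semantics. The advisory entry uses the published Log4j fix version for CVE-2021-44832:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Parse a simple dotted version; real tools use ecosystem-aware semantics."""
    return tuple(int(part) for part in v.split("."))

ADVISORIES = [
    {"name": "log4j-core", "fixed_in": "2.17.1", "cve": "CVE-2021-44832"},
]

def affected_cves(dependencies: list[dict]) -> list[str]:
    """CVEs whose affected component appears below the fixed version."""
    return [
        adv["cve"]
        for dep in dependencies
        for adv in ADVISORIES
        if dep["name"] == adv["name"]
        and parse_version(dep["version"]) < parse_version(adv["fixed_in"])
    ]

deps = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "slf4j-api", "version": "1.7.36"},
]
assert affected_cves(deps) == ["CVE-2021-44832"]
```

Because new advisories arrive daily, the same join must be re-run on a schedule against unchanged codebases, not only on pull requests.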
Version Control
Software and configuration must be stored in an approved version control system.

Description
Version control is required to maintain a history of all software and configuration changes across all releases. It provides traceability of who made changes and when, establishing the provenance of software over time.
The control establishes verifiable evidence of the state of source code or configuration at the time a release was created. This is important not only for identifying who made changes, but also for understanding what functionality existed at a given point in time, enabling diagnosis of software behaviour weeks, months, or even years after it was originally built, tested, and deployed.
Version control is also integral to ensuring that the software being built and tested is the same software being deployed, by tracking the specific commit or version and providing a reference for traceability.

Requirements
For a version control system to be effective, it must provide the following properties when used for storing software and configuration:
• Immutable history – The version control system must preserve the integrity of history to prevent tampering and maintain the audit trail.
• Verified committers – The version control system must record the author of each change. Techniques such as commit signing can be employed to demonstrate this.
• Retention – History must be maintained for the lifetime of the software, or in alignment with applicable retention requirements.
• Access control – The version control system must support access control and should be configured appropriately.

Examples
Git is the most widely adopted distributed version control system and is the de facto standard for software development in regulated and non-regulated environments alike.
• Immutable history – Every commit is identified by a cryptographic SHA hash derived from its content and its parent commits, making silent tampering detectable.
• Verified committers – Git supports GPG and SSH commit signing natively. Hosting platforms such as GitHub and GitLab can be configured to require signed commits on protected branches.
• Access control – Enforced at the hosting platform level via branch protection rules, required reviewers, and role-based repository permissions.
Subversion (SVN) is a centralised version control system still found in some legacy financial services environments.
• Immutable history – SVN maintains a sequential, server-side revision history. Once committed, revisions cannot be altered without administrator access to the repository backend.
• Verified committers – SVN records the authenticated username against each commit. Commit signing is not natively supported; identity relies on server authentication (e.g. LDAP, Kerberos).
• Access control – Controlled server-side via path-based authorisation, allowing fine-grained read/write permissions per directory or branch.

Links
• About commit signature verification – GitHub
• Signing commits – GitHub
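Git’s immutable-history property follows directly from content addressing: a commit ID is the SHA-1 of a typed, length-prefixed object containing the tree hash, parents, author, committer, and message. A small sketch with a synthetic commit body (the tree hash shown is Git’s well-known empty-tree hash) shows why silent tampering is detectable:

```python
import hashlib

def git_commit_id(body: bytes) -> str:
    """Compute a commit ID the way Git does: SHA-1 over 'commit <len>\\0' + body."""
    return hashlib.sha1(b"commit %d\x00" % len(body) + body).hexdigest()

body = (
    b"tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904\n"
    b"author Alice <alice@example.com> 1700000000 +0000\n"
    b"committer Alice <alice@example.com> 1700000000 +0000\n"
    b"\n"
    b"Initial commit\n"
)

original = git_commit_id(body)
tampered = git_commit_id(body.replace(b"Initial commit", b"Add backdoor"))

# Editing the message (or tree, author, or parents) yields a different ID,
# and because parent IDs are part of each commit body, the change cascades
# through every descendant commit.
assert original != tampered
```

This is why rewriting history is always visible as a change of commit IDs; platform-side protections (e.g. blocking force pushes to protected branches) then prevent the rewritten IDs from replacing the audited ones.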
Component Inventory
Component inventory maintains an accurate, machine-readable record of what is actually shipped in each artifact, providing visibility and traceability and enabling fast response.

Description
Component inventory is the practice of identifying and cataloguing all third-party components (libraries, packages, modules) that are included in a software artifact – i.e. what is actually shipped. This includes direct and transitive dependencies. The output is typically a Software Bill of Materials (SBOM) in a standard format such as SPDX or CycloneDX.
The inventory provides visibility and traceability and enables fast response: you know what you ship (visibility), you can trace components to artifacts (traceability), and when a vulnerability is disclosed, a license concern arises, or another issue surfaces, you can query the inventory to see which artifacts are affected and respond quickly. Additional analysis (e.g. vulnerability scanning, license compliance) adds further value by enabling prevention before issues reach production. Extraction from the actual artifact (container image, binary, filesystem) rather than from declarations alone enables detection of discrepancies between what is declared and what is present.

Requirements
• The output MUST be a structured, machine-readable SBOM (e.g. SPDX or CycloneDX)
• The inventory MUST include direct and transitive dependencies, with version and provenance information where available
• The SBOM MUST be produced for each releasable artifact

Examples & Commentary
• No releasable artifact should be deployed without a corresponding component inventory (SBOM); this complements Software Artifact Provenance (mi-3), which establishes build-level provenance, by providing component-level visibility and traceability
• Automated extraction from the artifact plus a machine-readable manifest (SBOM) together provide full coverage. Extraction (e.g. an automated binary scan) can be compared to declared dependencies to identify discrepancies; the manifest records what is in each artifact so you can query it when issues arise (e.g. which artifacts contain a given component)
• When a new CVE is published (e.g. Log4Shell), query the inventory across all artifacts to identify which contain the affected component and version. Without an inventory, manual inspection or waiting for a scan would be required; with SBOMs, remediation can be prioritised and executed within hours
• Use the inventory for license compliance: an SBOM listing components with their licenses allows automated checks before deployment, and any artifact containing a prohibited license can be flagged or blocked
• Implement deployment gates that verify an artifact has a valid SBOM before allowing promotion to production; the SBOM can be stored alongside provenance records (mi-3) or in a dedicated SBOM store
• Tools exist to extract components from container images or filesystems, or to generate SBOMs from the build; the SBOM is the input for vulnerability scanning and license compliance checks
• The SBOM MAY include recommended elements (e.g. component name, version, PURL, CPE, license info, file hashes) to support faster response and downstream analysis

Links
• NTIA Minimum Elements for an SBOM
• OWASP SCVS V2: SBOM Requirements
• CycloneDX Specification
• SPDX Specification
• CISA SBOM Guide
• CRA Regulation (EU) 2024/2847 (Annex I, vulnerability handling)
• NIST SSDF SP 800-218 (PS.3.2, RV.1.2)
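The “which artifacts are affected?” query is a straightforward lookup over stored SBOMs. A minimal sketch, with illustrative artifact names and component data rather than full CycloneDX documents:

```python
# One minimal, CycloneDX-flavoured component list per releasable artifact
SBOMS = {
    "payments-api@sha256:ab12cd": {"components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "jackson-databind", "version": "2.15.2"},
    ]},
    "reporting-svc@sha256:ef34ab": {"components": [
        {"name": "slf4j-api", "version": "1.7.36"},
    ]},
}

def artifacts_containing(component: str) -> list[str]:
    """Answer 'which shipped artifacts contain this component?'"""
    return [
        artifact
        for artifact, sbom in SBOMS.items()
        if any(c["name"] == component for c in sbom["components"])
    ]

# A Log4Shell-style response: locate affected artifacts in minutes, not days
assert artifacts_containing("log4j-core") == ["payments-api@sha256:ab12cd"]
assert artifacts_containing("left-pad") == []
```

Keying the store by the artifact’s content-addressable identity (as above) ties the component-level answer back to exactly the bytes that were deployed.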
Secret Detection
Secret detection identifies hardcoded credentials, API keys, tokens, and passwords in source code, configuration files, and version control history, preventing credential exposure and unauthorised access.

Description
Secrets — such as API keys, database passwords, private keys, tokens, and service account credentials — are frequently committed to source code repositories by accident. Once a secret is pushed to a repository, it may be exposed to anyone with access to the codebase, and if the repository is public or is later compromised, the secret can be exploited by malicious actors. Even in private repositories, secrets in version control history persist indefinitely unless explicitly purged. Secret detection tooling scans source code, configuration files, environment definitions, and commit history to identify credentials before they are merged, or as soon as possible after exposure. In regulated financial services environments, exposed credentials represent a direct and material security risk, and organisations must demonstrate proactive controls to prevent and respond to credential leakage.

Requirements
- Secret detection MUST be performed against source code and configuration as part of the software development lifecycle
- Secret detection MUST be automated — manual-only review is not sufficient as a primary control
- The point(s) in the development workflow at which secret detection scans are executed MUST be defined based on development practices, risk profile, and pipeline architecture
- Pre-commit hooks or equivalent client-side checks SHOULD be provided to developers to catch secrets before they are pushed to the remote repository
- Scanning MUST cover all file types and configuration formats in the repository, including infrastructure-as-code templates, CI/CD pipeline definitions, and environment files
- Detection rulesets MUST cover, at minimum, common secret types: API keys, private keys, database connection strings, OAuth tokens, cloud provider credentials, and service account keys
- When a secret is detected, the affected credential MUST be rotated and the secret removed from the codebase
- A response process for detected secrets MUST be defined and enforced, appropriate to the organisation’s risk posture
- When a previously committed secret is discovered in version control history, the affected credential MUST be rotated immediately and the secret SHOULD be purged from history where feasible
- An allowlist/exclusion process for false positives MUST be maintained, with documented justification and periodic review
- Secret detection results MUST be retained in accordance with the organisation’s record retention policy

Examples & Commentary
- Pre-commit Prevention: Deploy pre-commit hooks (e.g., using frameworks like pre-commit with secret detection plugins) so that secrets are caught on the developer’s machine before they ever reach the repository. This is the most effective point of intervention
- CI Pipeline Scanning: Run secret detection as a required status check on every pull request. If a secret is found, fail the check and provide clear guidance to the developer on how to remediate (remove the secret, rotate the credential, use a secrets manager)
- Historical Scanning: Periodically scan the full git history of repositories, not just the latest commit. Secrets committed months ago and subsequently deleted from the working tree may still be present in history and exploitable
- Custom Patterns: Supplement default detection rules with organisation-specific patterns for internal credential formats, proprietary API key prefixes, or internal service tokens
- Secrets Management Integration: Pair this control with a secrets management solution (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) and enforce that applications retrieve secrets at runtime rather than embedding them in code or configuration
- Incident Response: Define a clear process for when a secret is found: who is notified, what the rotation timeline is, and how the exposure is assessed. Treat any secret found in a public repository as compromised and rotate it immediately

Links
- OWASP Secrets Management Cheat Sheet
- NIST SP 800-53r5 IA-5: Authenticator Management
- CWE-798: Use of Hard-coded Credentials
- CWE-312: Cleartext Storage of Sensitive Information
- FFIEC IT Handbook - Information Security
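The pattern-based scan this control describes can be sketched in a few lines. This is a minimal illustration only — the rules, the allowlist format, and the file names are assumptions, not a production ruleset; real deployments should rely on a maintained scanner (e.g., gitleaks or detect-secrets) with its full, curated rule set:

```python
import re

# Illustrative detection rules only; real scanners ship far larger,
# continuously maintained rulesets covering many credential formats.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
}

# Allowlist of reviewed false positives (fingerprint -> justification),
# mirroring the documented-exception requirement above.
ALLOWLIST = {"config/sample.env:3": "placeholder value used in docs"}

def scan_text(path: str, text: str) -> list[dict]:
    """Return secret findings for one file's contents."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in PATTERNS.items():
            if pattern.search(line):
                fingerprint = f"{path}:{lineno}"
                if fingerprint in ALLOWLIST:
                    continue  # documented, periodically reviewed exception
                findings.append({"rule": rule, "file": path, "line": lineno})
    return findings

findings = scan_text("app/settings.py",
                     'API_KEY = "abcd1234abcd1234abcd1234"\nDEBUG = True\n')
```

In a CI pipeline, a non-empty `findings` list would fail the required status check; the same function run over every revision in history gives the historical scan described above.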
Vulnerability Remediation SLAs
Vulnerability Remediation SLAs establish severity-based timelines for addressing security findings, ensuring that vulnerabilities are tracked to resolution within agreed timeframes.

Description
Identifying vulnerabilities is only effective if findings are remediated in a timely manner. Without defined remediation timelines, vulnerability backlogs grow indefinitely, and critical issues may persist in production for months. Vulnerability Remediation SLAs establish mandatory timelines for addressing findings based on their severity, create accountability for remediation, and provide a framework for exceptions when immediate remediation is not feasible. This control applies to findings regardless of how they were identified, providing a consistent remediation framework across the organisation.

Requirements
- Remediation SLAs MUST be defined for security vulnerability findings, with timelines differentiated by severity level
- Remediation SLA timelines MUST be documented and approved, reflecting the organisation’s risk appetite and regulatory obligations
- SLA compliance MUST be tracked and reported to relevant stakeholders on a regular cadence
- A documented exception and risk-acceptance process MUST exist for vulnerabilities that cannot be remediated within the defined SLA
- Remediation SLA definitions, compliance metrics, and exception records MUST be retained in accordance with the organisation’s record retention policy

Examples & Commentary
- Severity Classification: Use an industry-recognised standard such as CVSS for severity scoring, supplemented by environmental context. A critical CVSS score in an internet-facing application may warrant faster remediation than the same score in an isolated internal tool
- SLA Clock Start: Define clearly when the SLA clock begins — typically when the finding is first reported by a scanning tool, not when a developer triages it. This prevents delays from slow triage processes
- Exception Governance: Establish a lightweight but auditable exception process. For example, a developer can request a 30-day waiver for a high-severity finding with justification and compensating controls, approved by the security team. The waiver is time-bound and automatically escalates if not resolved
- Dashboard & Visibility: Maintain a vulnerability dashboard showing open findings by severity, SLA status (on-track, at-risk, overdue), and trends over time. Make this visible to engineering teams and leadership to drive accountability
- Relationship to Deployment Gating: Remediation SLAs define when a finding must be fixed. Deployment Gating (mi-12) defines whether an application with outstanding findings can be deployed. Together they provide both a timeline for remediation and a hard stop when that timeline is not met

Links
- NIST SP 800-53r5 SI-2: Flaw Remediation
- NIST SP 800-53r5 CA-5: Plan of Action and Milestones
- FIRST CVSS v4.0 Specification
- CISA Known Exploited Vulnerabilities Catalog
- FFIEC IT Handbook - Information Security
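The SLA mechanics above — a severity-dependent clock that starts when the tool first reports the finding, plus the on-track / at-risk / overdue statuses a dashboard would display — can be sketched as follows. The specific day counts and the 80% at-risk threshold are illustrative assumptions, not mandated values:

```python
from datetime import date, timedelta

# Illustrative SLA timelines by severity; actual values must reflect the
# organisation's documented risk appetite and regulatory obligations.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def sla_status(severity: str, reported: date, today: date,
               at_risk_fraction: float = 0.8) -> tuple[date, str]:
    """Return (due date, status). The clock starts when the scanning tool
    first reports the finding, not when a developer triages it."""
    due = reported + timedelta(days=SLA_DAYS[severity])
    if today > due:
        return due, "overdue"
    # Flag findings that have consumed most of their window (assumed 80%).
    at_risk_from = reported + timedelta(days=int(SLA_DAYS[severity] * at_risk_fraction))
    if today >= at_risk_from:
        return due, "at-risk"
    return due, "on-track"

due, status = sla_status("high", reported=date(2024, 1, 1), today=date(2024, 1, 10))
```

Aggregating `sla_status` over all open findings yields the severity/SLA-status breakdown the dashboard commentary above calls for.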
Deployment Gating
Deployment gating blocks promotion of software to the target environment when defined control criteria are not met.

Description
Deployment gating enforces policy-based decisions at the point of deployment to ensure that only software meeting defined control criteria is promoted to production. Gates evaluate the posture of an artefact against the organisation’s defined policies and block deployment when criteria are not met. Without deployment gates, other controls in the framework are advisory only, and non-compliant software may reach the target environment despite known issues. In regulated financial services environments, deployment gating provides auditable evidence that organisational policy was enforced at every release.

Requirements
- Deployment gating policies MUST be defined to specify which conditions block deployment, based on the organisation’s risk appetite and regulatory requirements
- Gating policies MUST be configurable per application or service to allow organisations to tailor risk thresholds (e.g., stricter policies for internet-facing applications, adjusted thresholds for internal tools)
- A documented override and emergency deployment process MUST exist for situations where deployment is critical despite unmet gate conditions. Overrides MUST require approval from an appropriate authority, include documented justification, and be time-bound
- Overrides MUST be logged and auditable, including who approved, the justification, and the conditions that were bypassed
- Gate evaluation results — including pass/fail status, which checks were evaluated, and any overrides — MUST be retained as part of the deployment record
- Gating policies MUST be reviewed and updated at least annually, or when significant changes to the threat landscape, regulatory requirements, or technology stack occur

Examples & Commentary
- Policy Examples: An organisation might define the following deployment gates: no new critical or high SAST findings; no critical CVEs in dependencies without an approved waiver; DAST scan completed within the last 14 days; no secrets detected in the codebase. The specific policies will vary by organisation and risk appetite
- Pipeline Enforcement: Implement gates as required steps in the CI/CD pipeline. For example, a deployment job queries the vulnerability management platform for the artefact’s security posture and proceeds only if all gate conditions are satisfied
- Graduated Policies: Apply different gate strictness by environment and application criticality. A development environment might allow deployment with medium findings, while production for a customer-facing application blocks on anything high or above
- Emergency Overrides: Define an emergency deployment process for genuinely urgent situations (e.g., a critical production outage fix). The override should require real-time approval from a security lead or on-call manager, be time-limited, and automatically create a follow-up ticket to address the bypassed condition
- Visibility & Feedback: When a deployment is blocked, provide clear and actionable feedback to the engineering team: which gate failed, what findings caused the failure, and what actions are needed to proceed. Poor feedback loops lead to frustration and incentivise workarounds
- Relationship to Remediation SLAs: Deployment gating works hand-in-hand with Remediation SLAs (mi-11). SLAs define when findings must be remediated; gates enforce that deployments cannot proceed when those timelines are breached. For example, a high-severity SAST finding with a 7-day SLA may not block deployment on day 1, but will block it on day 8 if unresolved

Links
- NIST SP 800-53r5 CM-3: Configuration Change Control
- NIST SP 800-53r5 CM-4: Impact Analyses
- NIST SP 800-53r5 CA-2: Control Assessments
- OWASP DevSecOps Guideline
- NIST SSDF SP 800-218
- FFIEC IT Handbook - Information Security
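The example policy above (no new critical/high SAST findings, a fresh DAST scan, no secrets) can be sketched as a per-application gate evaluation. The policy shape, posture fields, and thresholds here are assumptions for illustration; a real pipeline would query its vulnerability management platform for the artefact's posture:

```python
from dataclasses import dataclass, field

# Illustrative per-application gate policy; conditions and thresholds
# are assumptions and will vary by organisation and risk appetite.
@dataclass
class GatePolicy:
    max_new_sast_findings: dict = field(
        default_factory=lambda: {"critical": 0, "high": 0})
    max_dast_scan_age_days: int = 14
    allow_secrets: bool = False

def evaluate_gates(policy: GatePolicy, posture: dict) -> tuple[bool, list[str]]:
    """Return (allowed, failed gate names) so a blocked team gets
    actionable feedback rather than a bare failure."""
    failures = []
    for severity, limit in policy.max_new_sast_findings.items():
        if posture["new_sast_findings"].get(severity, 0) > limit:
            failures.append(f"sast:{severity}")
    if posture["dast_scan_age_days"] > policy.max_dast_scan_age_days:
        failures.append("dast:stale-scan")
    if posture["secrets_detected"] and not policy.allow_secrets:
        failures.append("secrets:detected")
    return (not failures), failures

allowed, failures = evaluate_gates(GatePolicy(), {
    "new_sast_findings": {"critical": 0, "high": 1},
    "dast_scan_age_days": 3,
    "secrets_detected": False,
})
```

Graduated policies fall out naturally: instantiate a stricter `GatePolicy` for production, customer-facing services and a looser one for development environments, and record each `evaluate_gates` result (plus any override) as part of the deployment record.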
System Inventory
Organizations must maintain a current, accurate inventory of all systems operating in production. Each inventory record must capture system ownership — including designation of a System Manager who is an active employee — along with system criticality classification and data classification. The inventory must be linked to the developer toolchain and SDLC systems, and it must be reviewed and updated on a defined cadence and upon any material change.

Description
A well-governed system inventory is foundational to an organization’s risk management, compliance, and operational resilience posture. Financial services organizations operate complex, interconnected technology estates spanning on-premises infrastructure, cloud platforms, third-party hosted services, and internally developed applications. Without a reliable and current inventory, organizations cannot consistently apply security controls, assess blast radius during incidents, fulfill regulatory obligations, or make informed decisions about system decommissioning and lifecycle transitions.
This control establishes the minimum requirements for maintaining a system inventory that is accurate, ownership-attributed, and enriched with the metadata necessary to support downstream risk and compliance processes. The inventory serves as a system of record that feeds into change management, access control, vulnerability management, business continuity planning, and regulatory reporting workflows.

Requirements

Inventory Scope
The inventory must include all systems operating in production, defined as any system that:
- Supports a business process, customer-facing service, or internal operational function;
- Stores, processes, or transmits organizational or customer data; or
- Integrates with or has a trust relationship to another in-scope system.

Inventory Attributes
Each inventory record must capture a set of attributes such as: System Name, System ID, System Description, System Manager, System Criticality Tier classification, Data Classification, and Lifecycle Status (Active / Sunset Planned / Decommissioned).

Inventory Maintenance
The inventory must be reviewed in its entirety no less than annually, with individual records updated within 30 days of any material change (e.g., new system launch, change of System Manager, re-classification, decommission). A new system must be added to the inventory prior to or concurrent with its promotion to production. Systems must not enter production without an assigned System Manager and completed classification attributes. When a System Manager separates from the organization or changes roles, an updated System Manager must be designated and reflected in the inventory within 5 business days.

System Criticality Classification
For example: Tier 1 - Business Critical, Tier 2 - High Criticality, Tier 3 - Medium Criticality, Tier 4 - Low Criticality.

Data Classification
For example: Restricted, Confidential, Internal, Public.

Linkage to Developer Toolchain and SDLC Systems
Each inventory record must be traceable to its corresponding source code repository (e.g., GitLab, GitHub), CI/CD pipeline configuration, and artifact registry, enabling a continuous chain of custody from code commit through production deployment. This linkage ensures that changes to a production system are always attributable to an inventoried, ownership-attributed entity, and that pipeline-enforced controls — such as security scanning, compliance gates, and deployment policies — can be scoped and validated against a known system baseline.

Tooling and Access
The inventory must be maintained in a centralized, access-controlled system of record. Read access should be broadly available to authorized internal stakeholders. Write access must be controlled and auditable. The inventory system must support export and reporting capabilities to facilitate governance reviews and audit requests.

Examples & Commentary

Example 1 — New System Onboarding
A development team completes build and testing of a new customer-facing loan origination portal and prepares for production deployment. Prior to the production release, the team’s technology lead is designated as System Manager and the system is registered in the inventory with a criticality of Tier 1 (Business Critical) — as it directly supports a revenue-generating customer process — and a data classification of Restricted due to the presence of PII and NPI. The inventory record is completed and approved before the deployment pipeline is authorized to promote the build to production.

Example 2 — System Manager Departure
The System Manager for a core risk calculation engine notifies HR of their resignation, with a last day in 10 days. Upon notification, the technology risk team triggers a System Manager transition workflow. A successor — a senior engineer on the same team — is designated and the inventory record is updated within 3 business days of the original System Manager’s departure, well within the 5-business-day requirement.

Example 3 — Classification Disagreement
During an annual inventory review, a data engineering team classifies their internal analytics platform as Internal for data classification, reasoning that the platform only contains aggregated metrics. Upon review, the risk team notes that the platform ingests and temporarily retains raw transaction records during ETL processing, which includes cardholder data. The classification is escalated and updated to Restricted, triggering a review of the platform’s encryption posture and access control configuration.

Links
This control maps to asset inventory and lifecycle management requirements across five frameworks applicable to financial services organizations.
- FFIEC IT Handbook: Info Security II.C.5; AIO III.B.1. Applies to US-supervised depository institutions, BHCs, and TSPs. (ithandbook.ffiec.gov)
- NIST SP 800-53 Rev 5: PM-5 (System Inventory); CM-8 / CM-8(4). Applies to US federal agencies; financial services best-practice baseline. (PM-5 · CM-8)
- SOC 2 TSC (AICPA): CC6.1 — Inventory of Information Assets. Applies to service organizations with SOC 2 audit obligations. (aicpa-cima.com)
- PCI DSS v4.0: Requirement 12.5.1 — System Component Inventory. Applies to organizations in scope for cardholder data protection. (pcisecuritystandards.org)
- EU DORA: Article 8 — Identification; Articles 28–30 (third-party register). Applies to all EU financial entities; effective January 17, 2025. (eur-lex.europa.eu)

FFIEC
The IT Handbook describes principles rather than prescriptive requirements. This control produces the evidence base — documented inventory, ownership, classification, and review records — that examiners typically request during examination cycles.

NIST: PM-5 vs CM-8
This control operates at the PM-5 level (organizational system inventory) rather than CM-8 (individual component inventory). Organizations should maintain both: this control satisfies PM-5 and anchors downstream CM-8 compliance. The System Manager attribute directly satisfies CM-8(4) — Accountability Information.

SOC 2: CC6.1, not CC6.3
The inventory obligation sits in CC6.1. CC6.3 addresses role-based access authorization and is a downstream consumer of the inventory this control produces, not a direct inventory requirement itself.

PCI DSS
Organizations subject to PCI DSS should ensure the data classification attribute explicitly flags Restricted systems that are in scope for PCI, and should maintain network diagram traceability as a supplemental artifact.

EU DORA
The criticality classification supports DORA’s identification of critical and important functions. Non-EU organizations with EU operations should assess applicability; analogous requirements exist under the UK FCA’s operational resilience rules (PS21/3 and PS24/16).
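The pre-production checks this control imposes — an assigned System Manager who is an active employee, completed criticality and data classifications, a valid lifecycle status, and toolchain linkage — can be sketched as a simple record validation. The field names and tier labels below are assumptions drawn from the examples in this control, not a prescribed schema:

```python
from dataclasses import dataclass

# Illustrative enumerations based on the example classifications above.
CRITICALITY_TIERS = {"Tier 1", "Tier 2", "Tier 3", "Tier 4"}
DATA_CLASSES = {"Restricted", "Confidential", "Internal", "Public"}
LIFECYCLE = {"Active", "Sunset Planned", "Decommissioned"}

@dataclass
class InventoryRecord:
    system_id: str
    system_name: str
    system_manager: str      # must resolve to an active employee
    criticality_tier: str
    data_classification: str
    lifecycle_status: str
    repository_url: str      # linkage to the developer toolchain

def production_ready(record: InventoryRecord, active_employees: set) -> list[str]:
    """Return the violations blocking promotion to production;
    an empty list means the minimum attributes are satisfied."""
    violations = []
    if record.system_manager not in active_employees:
        violations.append("system manager is not an active employee")
    if record.criticality_tier not in CRITICALITY_TIERS:
        violations.append("missing or invalid criticality classification")
    if record.data_classification not in DATA_CLASSES:
        violations.append("missing or invalid data classification")
    if record.lifecycle_status not in LIFECYCLE:
        violations.append("invalid lifecycle status")
    if not record.repository_url:
        violations.append("no source repository linkage")
    return violations

record = InventoryRecord("SYS-042", "Loan Origination Portal", "a.lee",
                         "Tier 1", "Restricted", "Active",
                         "https://github.com/example/loan-portal")
violations = production_ready(record, active_employees={"a.lee"})
```

Wired into a deployment pipeline, a non-empty result from `production_ready` would block promotion, mirroring Example 1 above, where the inventory record is completed and approved before the pipeline may promote the build.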