Chapter I — General Provisions
Art. 1
Subject Matter
Standard
Defines what DORA covers: ICT risk management, incident reporting, resilience testing, third-party risk, and information sharing for financial entities.
Key Points
- Establishes uniform requirements for the security of network and information systems
- Covers ICT risk management, major incident reporting, testing, third-party risk, and information sharing
- Applies to financial entities and ICT third-party service providers
Art. 2
Scope — Entities Covered
Standard
Lists all 21 types of financial entities in scope. See the "Entities" section for the full breakdown.
In Scope
- Credit institutions (banks)
- Payment institutions & e-money institutions
- Investment firms & trading venues
- Insurance & reinsurance undertakings
- Central counterparties & central securities depositories
- Crypto-asset service providers
- Trade repositories, data reporting service providers
- Management companies, AIFMs
- Crowdfunding service providers
- ICT third-party service providers (under oversight framework)
- And more...
Art. 3
Definitions
Important
The definitions DORA uses throughout the regulation. Key ones you'll need:
Critical Definitions
- Digital operational resilience: The ability to build, assure, and review the operational integrity and reliability of an entity's ICT systems
- ICT risk: Any reasonably identifiable risk related to ICT that, if materialized, may compromise the security of network and information systems
- ICT-related incident: An unplanned event or series of events compromising the security of ICT systems with an adverse impact on availability, authenticity, integrity, or confidentiality
- Major ICT-related incident: An incident with a high adverse impact on network/information systems supporting critical or important functions
- Critical or important function: A function whose disruption would materially impair financial performance, service continuity, or compliance
- ICT third-party service provider: An undertaking providing ICT services (digital and data services provided through ICT systems on an ongoing basis)
- ICT concentration risk: Exposure to individual or highly correlated critical ICT third-party providers creating dependence such that unavailability could endanger the entity
- Threat-led penetration testing (TLPT): A framework mimicking tactics, techniques, and procedures of real-life threat actors perceived as genuine cyber threats
Security Engineer Takeaway: Bookmark these definitions. "Critical or important function" is the classification you'll use constantly — it determines which requirements apply at an enhanced level. "ICT concentration risk" is the new one that will drive your cloud strategy discussions.
Art. 4
Principle of Proportionality
Important
Requirements are proportionate to the entity's size, risk profile, nature, scale, and complexity of services. This is your friend — smaller or less complex entities can implement simpler controls.
What This Means
- Implementation should match the entity's size, risk profile, and complexity
- Microenterprises get a simplified ICT risk management framework (Art. 16)
- TLPT only applies to entities designated as significant by competent authorities
- Proportionality does NOT mean "optional" — the core obligations still apply
Security Engineer Takeaway: Use proportionality strategically. If your entity is smaller, document why certain heavy-lift requirements are met via proportionate measures. If you're a large bank, don't expect proportionality to buy you much slack — regulators will expect the full programme.
Chapter II — ICT Risk Management
Art. 5
Governance and Organisation
Critical
The management body (board of directors, executive board) bears ultimate responsibility for ICT risk management. This is personal, not delegable.
Management Body Must
- Define, approve, oversee, and be accountable for the ICT risk management framework
- Define roles and responsibilities for all ICT-related functions
- Set and approve the digital operational resilience strategy (at least annually)
- Approve and review ICT business continuity & DR policies
- Approve and review ICT audit plans and internal audit activities
- Allocate adequate budget for ICT security awareness and digital operational resilience training
- Members must maintain sufficient knowledge and skills on ICT risk (with regular training)
Security Engineer Takeaway: This is your biggest lever. The board is personally accountable. When you need budget, headcount, or project priority, frame it as: "The management body is legally accountable under DORA Art. 5 and needs to demonstrate active oversight." Prepare board-level reporting that shows risk posture, open findings, and compliance gaps.
Art. 6
ICT Risk Management Framework
Critical
The core requirement: maintain a comprehensive, documented, and regularly updated ICT risk management framework.
Requirements
- Shall include strategies, policies, procedures, protocols, and tools necessary for ICT risk management
- Must be documented and reviewed at least annually (and after major incidents)
- Must be audited by ICT auditors at regular intervals
- Must incorporate a digital operational resilience strategy with methods to address ICT risk
- The strategy must include how the entity's ICT risk management framework is implemented
- Must set the risk tolerance level for ICT risk
- Must include ICT third-party risk strategy
Security Engineer Takeaway: This is essentially requiring a documented security programme. If you have an ISMS (ISO 27001), you have a head start. Key gap: DORA explicitly requires a digital operational resilience strategy — a document that covers how you'll maintain operations during ICT disruptions. This goes beyond a standard InfoSec policy; it's more aligned with operational resilience / BCP.
Art. 7
ICT Systems, Protocols and Tools
Critical
Requirements
- Use and maintain updated ICT systems, protocols, and tools that are reliable, have sufficient capacity, and are technologically resilient
- Must ensure availability, authenticity, integrity, and confidentiality of data
- Systems must be securely designed, developed, deployed, and maintained
Art. 8
Identification
Critical
Know what you have, know what can go wrong, and document it all.
Requirements
- Identify, classify, and document all ICT-supported business functions, roles, and responsibilities
- Identify all sources of ICT risk including those from ICT third-party providers
- Maintain an inventory of all information assets and ICT assets (including hardware, software, network resources)
- Identify all ICT systems that support critical or important functions
- Map interconnections and dependencies between ICT assets, systems, processes, and providers
- Perform risk assessments upon major changes to the ICT infrastructure
- Maintain network architecture documentation updated at all times
Security Engineer Takeaway: This is your CMDB on steroids. You need a complete inventory of: every server, app, database, network segment, cloud service, SaaS tool, and their interdependencies. The dependency mapping is often the hardest part — consider using automated discovery tools. Pro tip: also map which ICT provider supports which function — you'll need this for Art. 28's register.
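The dependency mapping above boils down to a graph walk over your asset register. A minimal sketch — the asset names, fields, and provider are all illustrative, not a DORA-mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    provider: str = ""                      # ICT third-party provider, if any
    depends_on: list = field(default_factory=list)

def assets_supporting(entry_points: list, inventory: dict) -> set:
    """Walk the dependency graph to find every asset a function relies on."""
    seen = set()
    stack = list(entry_points)
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        stack.extend(inventory[name].depends_on)
    return seen

# Illustrative inventory: a critical "payments" function and its dependencies
inventory = {
    "payments-app": Asset("payments-app", depends_on=["payments-db", "auth-service"]),
    "payments-db": Asset("payments-db", provider="CloudCo"),
    "auth-service": Asset("auth-service", depends_on=["payments-db"]),
}

critical_assets = assets_supporting(["payments-app"], inventory)
```

The transitive closure is the point: the payments function depends on the auth service, which in turn depends on the database, so all three (and their providers) land in scope for the critical-function controls.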
Art. 9
Protection and Prevention
Critical
Requirements
- Develop and implement ICT security policies covering: information security, network security, access control, authentication, change management, patching, encryption, physical security
- Implement IAM policies: least privilege, need-to-know, segregation of duties
- Strong authentication mechanisms (MFA where appropriate)
- ICT change management procedures (documented, tested, approved)
- Patch management policies with appropriate timelines
- Encryption policies for data at rest and in transit
- Secure software development lifecycle (SDLC)
Security Engineer Takeaway: This maps directly to your standard security controls. Review your existing policies against each bullet above. Common gaps: formal ICT change management procedures (not just ITIL ticket flows), documented encryption standards, and a fully codified SDLC with security gates.
Art. 10
Detection
Critical
Requirements
- Establish mechanisms to promptly detect anomalous activities (network performance issues, ICT-related incidents)
- Detection mechanisms must enable multi-layer control, define alert thresholds, and trigger incident response processes
- Adequate resources for monitoring and analyzing ICT threats and incidents
- For entities performing TLPT: detection capabilities should be tested as part of the TLPT scope
Security Engineer Takeaway: SIEM, EDR, NDR — you need all three layers. The regulation expects "multi-layer control" which means network + endpoint + application detection. Make sure your detection rules cover the DORA incident classification criteria so you can automatically flag potential major incidents.
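The auto-flagging idea can be sketched as a triage gate in front of your alert pipeline. Field names and the threshold are invented for illustration; your real criteria come from the Art. 18 classification rules:

```python
# Illustrative threshold: route alerts touching this many clients (or any
# system supporting a critical function) to the major-incident assessment queue
CLIENTS_THRESHOLD = 10_000

def needs_dora_assessment(alert: dict) -> bool:
    return bool(alert.get("critical_function", False)) or \
        alert.get("clients_affected", 0) >= CLIENTS_THRESHOLD

# An NDR alert on a system supporting a critical function gets flagged
flagged = needs_dora_assessment({"source": "NDR", "critical_function": True})
```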
Art. 11
Response and Recovery
Critical
Requirements
- Establish ICT business continuity policy with plans for all critical systems and functions
- Develop and implement ICT response and recovery plans
- Estimate recovery time and recovery point objectives
- Switch to backup systems with minimal disruption
- Plans must consider different scenarios including cyber attacks and infrastructure failures
- Regular testing of BCP/DR plans (at least annually)
- Crisis communication procedures for internal and external stakeholders
- Dedicated crisis management function for major incidents
Security Engineer Takeaway: RTOs and RPOs must be documented for every critical function. BCP/DR plans must include cyber attack scenarios (ransomware, data destruction, cloud outage). Test annually at minimum. If you're only doing tabletop exercises, that may not be enough — DORA expects actual switchover tests.
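One way to operationalise the "test annually at minimum" point is a staleness check over your documented RTO/RPO register. Function names, fields, and the 365-day window below are illustrative assumptions:

```python
from datetime import date, timedelta

# Illustrative register: documented RTO/RPO plus last DR switchover test
functions = {
    "payments":   {"rto_min": 60,  "rpo_min": 15, "last_dr_test": date(2025, 3, 1)},
    "settlement": {"rto_min": 240, "rpo_min": 60, "last_dr_test": date(2023, 6, 1)},
}

def stale_dr_tests(funcs: dict, today: date, max_age_days: int = 365) -> list:
    """Critical functions whose last switchover test is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return [name for name, f in funcs.items() if f["last_dr_test"] < cutoff]

overdue = stale_dr_tests(functions, today=date(2025, 6, 1))
```

Wire a check like this into your GRC tooling and the "have we tested this year" question answers itself before the auditor asks it.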
Art. 12
Backup Policies and Procedures
Critical
Requirements
- Develop and implement backup and restoration policies and procedures
- Backup scope and frequency aligned with criticality of the function
- Backups must be physically and logically separated from the source ICT system
- When restoring, backup systems must not directly connect to production until verified
- Regular testing of backup and restoration procedures
- For ICT systems supporting critical functions: backup must support recovery within the defined RTO/RPO
Security Engineer Takeaway: Air-gapped or immutable backups are implicitly required by the physical/logical separation mandate. Test your restore procedures regularly — not just "does the backup job complete" but "can we actually rebuild the system from this backup." Think ransomware resilience.
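The "can we actually rebuild from this backup" check starts with integrity verification before anything touches production. A minimal sketch, assuming you record checksums at backup time:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def restore_verified(restored: bytes, recorded_checksum: str) -> bool:
    """Only a byte-identical restore passes; anything else stays isolated
    from production, per the Art. 12 verification-before-reconnection rule."""
    return checksum(restored) == recorded_checksum

original = b"ledger snapshot 2025-01-17"   # illustrative backup payload
recorded = checksum(original)              # captured at backup time

ok = restore_verified(original, recorded)
```

A tampered or partially restored backup (think ransomware touching your backup tier) fails the comparison and never gets reconnected.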
Art. 13
Learning and Evolving
Important
Requirements
- Gather information from all ICT-related incidents and cyber threats
- Conduct post-incident reviews after major ICT disruptions
- Identify root causes and establish improvements
- Feed lessons into the ICT risk management framework updates
- Monitor effectiveness of the digital operational resilience strategy implementation
- Mandatory ICT security awareness and digital operational resilience training for all staff
- Monitor developments in ICT risk, technological developments, and threats
Security Engineer Takeaway: Post-incident reviews (PIRs) are mandatory, not optional. Build a PIR template that feeds findings into: (1) risk register updates, (2) detection rule improvements, (3) playbook updates, and (4) training programme adjustments. DORA also mandates security awareness training for all staff, including the management body.
Art. 14
Communication
Important
Requirements
- Communication policies for internal staff and external stakeholders
- At least one designated person for handling communications during incidents
- Define communication plans for both ongoing incidents and post-resolution
- Must have policies on disclosure to clients (especially when incidents affect their services or data)
Art. 15
Further Harmonisation of ICT Risk Management Tools, Methods, Processes, and Policies
Standard
Empowers the ESAs to develop Regulatory Technical Standards (RTS) further specifying the ICT risk management requirements. These RTS provide the granular technical detail beneath the principles in Articles 5–14.
Art. 16
Simplified ICT Risk Management Framework
Important
Provides a lighter-touch framework for smaller entities (small and non-interconnected investment firms, payment institutions exempted under PSD2, etc.).
Simplified Requirements
- Still need a sound and documented ICT risk management framework
- Simplified documentation and formal requirements
- Still need to monitor and review the framework annually
- Minimise impact of ICT risk through the use of sound, resilient, and updated ICT systems
- Identify critical functions and key dependencies
Chapter III — ICT-related Incident Management, Classification, and Reporting
Art. 17
ICT-related Incident Management Process
Critical
Requirements
- Define, establish, and implement an ICT-related incident management process
- Put in place early warning indicators
- Establish procedures to identify, track, log, categorise, and classify incidents
- Assign roles and responsibilities for different incident types and scenarios
- Plans for communication to staff, external stakeholders, media, and clients
- Report major incidents to senior management; inform management body of impact and response
- Establish incident response procedures to mitigate impact and ensure timely restoration
Security Engineer Takeaway: Your SIRP (Security Incident Response Plan) needs to integrate DORA requirements explicitly. Add a "DORA classification" step to your triage workflow. Every incident ticket should capture the classification criteria (clients affected, duration, data loss, etc.) so you can quickly determine if it crosses the "major incident" threshold.
Art. 18
Classification of ICT-related Incidents and Cyber Threats
Critical
Classification Criteria for Incidents
- Number of clients/counterparts affected
- Reputational impact
- Duration and service downtime
- Geographic spread across Member States
- Data losses (availability, authenticity, integrity, confidentiality)
- Criticality of services affected
- Economic impact (direct and indirect costs)
Classification Criteria for Cyber Threats
- Criticality of services at risk
- Number of clients/counterparts potentially affected
- Geographic spread of areas at risk
Security Engineer Takeaway: Build these criteria into your SIEM correlation rules. If an alert fires on a system supporting 10,000+ clients, it automatically gets flagged for DORA major incident assessment. Create a scoring matrix that maps these criteria to your severity levels.
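A scoring matrix like the one suggested above might look like this. The weights and the "major candidate" threshold are invented for illustration; the binding materiality thresholds come from the ESAs' classification RTS, not these numbers:

```python
# Illustrative weights per Art. 18 criterion
WEIGHTS = {
    "clients_affected_pct": 3,   # share of clients/counterparts affected
    "downtime_hours": 2,         # duration and service downtime
    "member_states": 2,          # geographic spread
    "data_loss": 3,              # availability/authenticity/integrity/confidentiality
    "critical_service": 3,       # criticality of services affected
}

def incident_score(incident: dict) -> int:
    score = 0
    if incident.get("clients_affected_pct", 0) >= 10:
        score += WEIGHTS["clients_affected_pct"]
    if incident.get("downtime_hours", 0) >= 2:
        score += WEIGHTS["downtime_hours"]
    if incident.get("member_states", 1) > 1:
        score += WEIGHTS["member_states"]
    if incident.get("data_loss", False):
        score += WEIGHTS["data_loss"]
    if incident.get("critical_service", False):
        score += WEIGHTS["critical_service"]
    return score

def is_major_candidate(incident: dict, threshold: int = 6) -> bool:
    """Above the threshold, the incident enters the Art. 19 reporting workflow."""
    return incident_score(incident) >= threshold

ransomware = {"clients_affected_pct": 40, "downtime_hours": 8,
              "data_loss": True, "critical_service": True}
```

Run every incident ticket through a function like this at triage and you get a defensible, consistent answer to "is this major" instead of an on-call judgment call.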
Art. 19
Reporting of Major ICT-related Incidents and Voluntary Reporting of Significant Cyber Threats
Critical
Mandatory Reporting
- Initial notification: Submit to competent authority after classifying incident as major (within 4 hours, maximum 24 hours)
- Intermediate report: Within 72 hours — updated impact, root cause status, mitigation actions
- Final report: Within 1 month — full root cause, total impact, remediation measures, lessons learned
Voluntary Reporting
- Entities may voluntarily notify significant cyber threats they consider relevant (even if no incident occurred)
Clients
- When a major incident has an impact on clients' financial interests, the entity must inform them without undue delay
- Must inform clients of measures taken to mitigate the adverse effects
Security Engineer Takeaway: Pre-build your report templates. Have three templates (initial, intermediate, final) ready with all required fields. Automate data population where possible (from SIEM, ticketing). The 4-hour initial notification window is tight — your on-call process needs to include "DORA notification assessment" in the first 30 minutes of triage. Consider a SOAR playbook for this.
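The notification clock can be precomputed the moment an incident is classified as major. A sketch using the timelines summarised above; note the intermediate and final deadlines are anchored on classification here for simplicity, while the authoritative reference points live in the reporting RTS:

```python
from datetime import datetime, timedelta

def reporting_deadlines(classified_at: datetime, detected_at: datetime) -> dict:
    """Deadlines per the summary above: initial within 4h of classification
    (at most 24h from detection), intermediate within 72h, final within a month."""
    return {
        "initial": min(classified_at + timedelta(hours=4),
                       detected_at + timedelta(hours=24)),
        "intermediate": classified_at + timedelta(hours=72),
        "final": classified_at + timedelta(days=30),
    }

detected = datetime(2025, 6, 1, 9, 0)      # illustrative timestamps
classified = datetime(2025, 6, 1, 11, 0)
deadlines = reporting_deadlines(classified, detected)
```

Surfacing these timestamps directly in the incident ticket (or a SOAR playbook) keeps the on-call engineer from doing deadline arithmetic at 3 a.m.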
Art. 20
Harmonisation of Reporting Content and Templates
Important
ESAs develop RTS specifying the content, timelines, and templates for incident reports. Standardised templates mean consistent reporting across the EU.
Art. 21
Centralisation of Reporting
Standard
ESAs and ECB to explore the feasibility of establishing a single EU Hub for major ICT-related incident reporting.
Art. 22
Supervisory Feedback
Important
Competent authorities must provide feedback and guidance to the reporting entity following their report. They may also share anonymised information about the incident with other entities to improve sector-wide resilience.
Art. 23
Operational or Security Payment-related Incidents
Important
Special provisions for credit institutions, payment institutions, and e-money institutions regarding payment-related incidents. DORA's incident reporting replaces the PSD2 incident reporting for these entities.
Chapter IV — Digital Operational Resilience Testing
Art. 24
General Requirements for Digital Operational Resilience Testing
Critical
Requirements
- Establish, maintain, and review a sound and comprehensive digital operational resilience testing programme
- Testing programme must be proportionate to the entity's size and risk profile
- Include a range of assessments, tests, methodologies, practices, and tools
- Follow a risk-based approach prioritising critical and important functions
- Testing must be undertaken by independent parties (internal or external)
- Establish procedures to prioritise, classify, and remedy findings
- Ensure all identified issues are remediated or risk-accepted with appropriate approval
Security Engineer Takeaway: "Independent" means your own red team or an external firm — not the team that built the system. All findings need to be tracked to closure. Build this into your vulnerability management workflow: DORA finding → JIRA ticket → remediation → retest → closure documentation.
Art. 25
Testing of ICT Tools and Systems
Critical
Required Tests
- Vulnerability assessments and scans
- Open-source analyses
- Network security assessments
- Gap analyses
- Physical security reviews
- Questionnaires and scanning software solutions
- Source code reviews (where feasible)
- Scenario-based tests
- Compatibility testing
- Performance testing
- End-to-end testing
- Penetration testing
Security Engineer Takeaway: This is a comprehensive test catalogue. You probably do many of these already, but document them all as part of your "DORA testing programme." The key addition many teams miss: open-source analysis (SCA tools for dependency vulnerabilities) and physical security reviews. Make sure you have evidence of each test type being performed at least annually.
Art. 26
Advanced Testing — Threat-Led Penetration Testing (TLPT)
Critical
Requirements (for designated entities)
- Carry out TLPT at least every 3 years
- Competent authority identifies which entities must perform TLPT based on systemic impact, criticality, and ICT maturity
- Must cover several or all critical or important functions
- Must be performed on live production systems
- Scope must include ICT services provided by third parties (with their involvement)
- Entity must perform a risk management assessment and obtain approvals before testing
- Testing scope, methodology, and results must be validated by the competent authority
- Results and remediation plans must be attested by the competent authority
Security Engineer Takeaway: TLPT = TIBER-EU style red team assessment. This is a major undertaking: real threat intel drives the scenarios, testing happens in production, and the regulator validates results. If you're in scope, start planning 12+ months ahead. You'll need: threat intel provider, qualified red team (external), strong blue team documentation, and a robust risk management process for the testing itself (it's production, after all).
Art. 27
Requirements for TLPT Testers
Critical
External Tester Requirements
- Highest suitability and reputability
- Technical and organisational capabilities for TLPT (specifically threat intelligence and penetration testing)
- Certified by an accreditation body or adhere to formal codes of conduct/ethical frameworks
- Provide independent assurance/audit reports on sound management of risks associated with TLPT
- Covered by professional indemnity insurance
Internal Testers
- May use internal testers but must also engage external testers for every third test
- Competent authority may restrict internal testing based on specific conditions
Chapter V — Managing ICT Third-Party Risk
Art. 28
General Principles for Third-Party Risk Management
Critical
Requirements
- Financial entities remain fully responsible for compliance even when using ICT third-party providers
- Must adopt and regularly review a strategy on ICT third-party risk
- Maintain and update a register of information on all contractual arrangements with ICT providers
- Distinguish between arrangements supporting critical/important functions and those that don't
- Report the register to competent authorities at least annually
- Inform competent authorities in a timely manner about planned new arrangements for critical/important functions
- Before entering into contracts: identify and assess all relevant risks (including ICT concentration risk)
- Conduct due diligence on prospective ICT providers
- Only contract with providers that comply with appropriate information security standards
Security Engineer Takeaway: The register of ICT providers is a major deliverable. It must be comprehensive: every cloud service, SaaS tool, managed service, data provider. Flag which ones support critical/important functions. This register gets submitted to your regulator annually, so treat it like a living document. Consider a dedicated TPRM (Third-Party Risk Management) tool.
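A minimal register sketch that distinguishes critical/important arrangements and can be dumped flat for the annual submission. The fields are an illustrative subset; the full template is defined by the ESAs' implementing standards:

```python
import csv
import io

# Illustrative register entries (provider and service names are made up)
register = [
    {"provider": "CloudCo",   "service": "IaaS hosting", "function": "payments",
     "critical": True},
    {"provider": "MailySaaS", "service": "newsletter",   "function": "marketing",
     "critical": False},
]

def critical_arrangements(entries: list) -> list:
    """Arrangements supporting critical or important functions (Art. 28)."""
    return [e for e in entries if e["critical"]]

def export_csv(entries: list) -> str:
    """Flat dump of the register for the annual regulatory submission."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["provider", "service", "function", "critical"])
    writer.writeheader()
    writer.writerows(entries)
    return buf.getvalue()
```

Even a spreadsheet works at small scale; the point is that the critical flag and the provider-to-function mapping exist in one queryable place.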
Art. 29
Preliminary Assessment of ICT Concentration Risk and Sub-outsourcing
Important
Requirements
- Assess whether entering into a contract would lead to ICT concentration risk
- Weigh benefits and costs of alternative solutions
- Assess whether sub-outsourcing conditions are complied with (chaining of providers)
- Consider risks arising from the entity and the provider being in different jurisdictions
Art. 30
Key Contractual Provisions
Critical
Mandatory Clauses for ALL ICT Contracts
- Clear and complete description of all functions and services
- Locations where data will be processed and stored (including subcontractors)
- Data protection provisions including data access, recovery, and return/destruction on termination
- Service level descriptions, including updates and revisions
- Provider must assist during ICT incidents related to the service at no additional cost
- Provider must participate in the entity's ICT security awareness programme
- Obligation for the provider to implement and test BCP measures
- Right to monitor the provider's performance on an ongoing basis (audit and access rights)
- Termination rights with adequate notice periods
- Cooperation of the provider with competent authorities
Additional Clauses for Critical/Important Functions
- Full SLA targets covering availability, reliability, and response times
- Notice periods and reporting obligations when provider developments may affect the service
- Comprehensive exit strategies and transition plans
- Provider must participate in the entity's TLPT programme
- Provider must grant unrestricted rights of inspection and audit
- Agreed-upon termination rights and minimum notice periods
Security Engineer Takeaway: This article will keep your legal team busy. Create a contract review checklist mapped to Art. 30 requirements. Every renewal is an opportunity to add missing clauses. Key items: audit rights (you need to be able to audit or inspect your cloud provider), data location transparency, and exit strategy. For cloud providers, check their shared responsibility models against these requirements.
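The contract review checklist can be automated as a simple set difference per contract. The clause keys below are paraphrases of the Art. 30 bullets, not the regulation's wording:

```python
# Paraphrased Art. 30 clause checklist (illustrative keys)
BASE_CLAUSES = {"service_description", "data_locations", "data_return_on_exit",
                "incident_assistance", "audit_rights", "termination_rights"}
CRITICAL_CLAUSES = {"full_sla_targets", "exit_strategy", "tlpt_participation"}

def missing_clauses(contract: set, supports_critical_function: bool) -> set:
    """Clauses still to negotiate, given what the contract already contains."""
    required = BASE_CLAUSES | (CRITICAL_CLAUSES if supports_critical_function else set())
    return required - contract

# Illustrative cloud contract covering only part of the checklist
cloud_contract = {"service_description", "data_locations", "audit_rights",
                  "termination_rights", "incident_assistance"}
gaps = missing_clauses(cloud_contract, supports_critical_function=True)
```

Run this across the register at every renewal and the gap list becomes the agenda for the next negotiation round.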
Art. 31
Designation of Critical ICT Third-Party Service Providers
Important
ESAs designate ICT third-party providers as "critical" based on criteria including:
- Systemic impact on the financial sector if the provider fails
- Number and type of financial entities relying on the provider
- Degree of substitutability (how easy to replace)
- Number of Member States where the provider's services are used
- Degree of dependence of the financial entities on the provider's services
Art. 32–44
Oversight Framework for Critical Third-Party Providers
Standard
Articles 32–44 establish the EU-level oversight framework for critical ICT third-party providers. Key elements:
Oversight Structure
- Lead Overseer (Art. 33): Appointed from EBA, ESMA, or EIOPA based on which financial sector uses the provider most
- Powers (Art. 35): Request information, conduct on-site/off-site investigations, issue recommendations
- General investigations (Art. 38): Launch investigations into the provider's operations
- Inspections (Art. 39): Conduct on-site inspections of the provider
- Follow-up (Art. 42): If recommendations are not followed, competent authorities can require financial entities to suspend or terminate the service
- Costs (Art. 43): Oversight costs charged to the critical providers
- Non-EU providers (Art. 31): Must establish a subsidiary in the EU to fall under oversight
Security Engineer Takeaway: You won't manage this framework directly, but you need to know: if your critical cloud provider gets a recommendation from the Lead Overseer and doesn't comply, your regulator could ask you to suspend or terminate the service. This is why exit strategies (Art. 30) matter — you need a viable plan B for every critical provider.
Chapter VI — Information-Sharing Arrangements
Art. 45
Information-Sharing Arrangements on Cyber Threat Information and Intelligence
Important
Provisions
- Financial entities may exchange cyber threat information and intelligence amongst themselves
- Including indicators of compromise (IoCs), TTPs, security alerts, and configuration tools
- Sharing must protect personal data (GDPR), business confidentiality, and competition law
- Must be carried out within trusted communities
- Sharing arrangements must define conditions for participation and handling of shared information
- Entities must notify competent authorities of their participation in sharing arrangements
Security Engineer Takeaway: Join FS-ISAC or your national financial CERT's sharing programme if you haven't already. Set up STIX/TAXII-compatible ingestion in your threat intel platform. When sharing, use TLP (Traffic Light Protocol) markings. DORA gives you the legal foundation to tell management "the regulation encourages this" when advocating for threat intel sharing programme budget.
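Before anything leaves your trusted community, check its TLP marking. A sketch using standard TLP 2.0 semantics; the indicator format is illustrative:

```python
# Standard TLP 2.0 semantics: may this marking be shared with the wider
# community? RED and AMBER stay inside tighter circles.
TLP_COMMUNITY_SHAREABLE = {
    "TLP:RED": False,      # named recipients only
    "TLP:AMBER": False,    # own organisation (and clients) only
    "TLP:GREEN": True,     # the wider community
    "TLP:CLEAR": True,     # no restriction
}

def can_share_with_community(indicator: dict) -> bool:
    # unknown or missing markings default to the most restrictive handling
    return TLP_COMMUNITY_SHAREABLE.get(indicator.get("tlp", "TLP:RED"), False)

# Illustrative IoC (203.0.113.7 is a documentation-range address)
ioc = {"type": "ipv4", "value": "203.0.113.7", "tlp": "TLP:GREEN"}
```

A gate like this in your TIP export path keeps Art. 45's confidentiality conditions enforced mechanically rather than by reviewer discipline.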
Chapter VII–IX — Competent Authorities, Delegated Acts, and Final Provisions
Art. 46–56
Competent Authorities and Cooperation
Standard
Establishes the supervisory framework: which national competent authorities enforce DORA, how they cooperate across borders, and their powers for administrative penalties and remedial measures.
Key Points
- Each Member State designates competent authorities for DORA supervision
- Competent authorities have supervisory, investigatory, and sanctioning powers
- Cooperation mechanisms between national authorities and ESAs
- Exchange of information between supervisors
- Administrative penalties and remedial measures defined by Member States (including criminal penalties where applicable)
Art. 57–64
Delegated Acts, Amendments, Review and Final Provisions
Standard
Final provisions covering delegated acts (empowering the Commission to specify technical details), amendments to existing regulations, the review clause (3-year review), and the entry into force / application date.
Key Dates
- DORA entered into force on 16 January 2023
- Applies from 17 January 2025
- Review by 17 January 2028 (Commission report on appropriateness of enhanced requirements for audit and IT services)