Another European Union regulation that impacts financial firms? Surely not again! Alas, the Digital Operational Resilience Act (DORA) came into effect on 17 January 2025. This simple guide will show you what DORA means for your firm without you having to read all the EU regulatory documents!

For technologists in spaces such as electronic trading, or in fintech startups, this should mainly be a check over what you already do; for business owners and sponsors, though, it can sound bewildering. What does it cover? Broadly speaking, you are required to treat technology and cyber risk as seriously as market, reputational and operational risk, and to actively work to prevent cyber breaches. The most recent addition is managing how you use, and are affected by, artificial intelligence (AI) technology, which ultimately revolves around the data you store, e.g. if you are using AI for trading and hedging strategies, or to generate investment strategy ideas for clients.
DORA 101
The objective of DORA is to ensure that financial firms can withstand, respond to and recover from all types of technology threats, interruptions and disruptions. To give you the big picture, DORA covers these six topics:
| Topic | Questions to ask yourself |
| --- | --- |
| Technology risk management | Have we implemented governance and risk management processes to help us identify, monitor and manage key technology risks (e.g. a legacy system that is critical to our operations)? |
| Third-party risk management | Do we know who our critical technology service providers are, and have we put in place agreements that protect our cybersecurity and business continuity? |
| Digital operational resilience testing | Have we defined our testing strategy, and engaged in-house or external testing experts who know our systems? Do they understand how we expect them to approach this challenge? How often do we run these tests? |
| Tech-related incident reporting | Is our incident management process robust and consistent across all our platforms, so we can report accurately to regulators on our progress and on what we have learned from incidents? |
| Information sharing (incl. cyber threat intelligence) | Do we participate in information-sharing organisations (e.g. FS-ISAC, which shares cyber threat intelligence for financial firms), and do we use the threat intelligence shared with us to reduce our risk? |
| Oversight of critical third-party providers | Are we satisfied with our critical technology providers' progress on operational resilience, and if not, what are we doing to mitigate this risk, and by when? |
These topics can help you take a fresh look at the key technology your business relies on. As most will be aware by now, financial firms are often technology houses in their own right, employing large numbers of software and infrastructure engineers. Are you relying too heavily on third-party providers whose lack of resilience could cause existential harm to your business?

One thing that never ceases to amaze us at Agile Mind is firms saying they release software changes only a few times per year. That pattern is over a decade old; modern firms release weekly. It can feel counter-intuitive to firms whose releases are months apart, but release frequency and operational risk move in opposite directions: the more often you release, the smaller each change is and the easier it is to back out if it has problems. Contrast that with a large-scale release, where trying to unpick issues from within a big bundle of changes is a nightmare.
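To make the "easier to back out" point concrete, here is a small, hypothetical Python sketch of a feature flag: the new code path ships behind a configuration switch, so reverting it is a one-line config change rather than unpicking a months-long release. The flag name ENABLE_NEW_PRICING and the pricing functions are invented for the example, not taken from any particular platform.

```python
import os

def new_pricing_enabled() -> bool:
    """Hypothetical feature flag read from the environment.

    In practice this would come from your configuration or feature-flag service.
    """
    return os.getenv("ENABLE_NEW_PRICING", "false").lower() == "true"

def price_with_legacy_engine(quantity: int, unit_price: float) -> float:
    # Proven code path that stays in place while the new one is trialled.
    return quantity * unit_price

def price_with_new_engine(quantity: int, unit_price: float) -> float:
    # Newly released change, shipped dark behind the flag.
    return round(quantity * unit_price, 2)

def price_order(quantity: int, unit_price: float) -> float:
    # Backing out the new release is one configuration change, not a redeploy.
    if new_pricing_enabled():
        return price_with_new_engine(quantity, unit_price)
    return price_with_legacy_engine(quantity, unit_price)

if __name__ == "__main__":
    print(price_order(100, 1.2345))
```

The point of the design is that a problem in the new path is switched off in seconds, without touching the rest of the release.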
Much of this is culture. Learning from technology incidents makes everyone better at their jobs, and it feeds into how software is built and deployed when engineers are encouraged by their leadership to assume there will be failures. Designing software on the assumption that it can go wrong makes for higher uptime and fewer defects. You don't get this by outsourcing to the cheapest location and assuming everything will be smooth: the digital technology and products your clients expect are too complex for corner-cutting, and one regulatory fine will wipe out any savings.
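As a minimal illustration of "assume it can go wrong", here is a Python sketch of calling a third-party data feed with a timeout, retries with backoff, and an explicit degraded result instead of an unhandled crash. The function name fetch_prices and the vendor URL are hypothetical; the pattern, not the names, is the point.

```python
import time
import requests  # third-party HTTP client (pip install requests); any client works

PRICE_URL = "https://example-vendor.test/prices"  # hypothetical third-party endpoint

def fetch_prices(symbol: str, retries: int = 3, timeout: float = 2.0):
    """Fetch prices from a third-party feed, assuming the call can fail.

    Retries with exponential backoff and returns None (a degraded result)
    rather than letting one flaky dependency crash the caller.
    """
    for attempt in range(retries):
        try:
            response = requests.get(PRICE_URL, params={"symbol": symbol}, timeout=timeout)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            # Back off 1s, 2s, 4s ... before trying again.
            wait = 2 ** attempt
            print(f"attempt {attempt + 1} failed ({exc}); retrying in {wait}s")
            time.sleep(wait)
    return None  # caller must handle a missing price feed gracefully

if __name__ == "__main__":
    prices = fetch_prices("EURUSD")
    if prices is None:
        print("Price feed unavailable; falling back to last known snapshot")
```

The design choice is that the caller always gets an answer it can handle, which is what keeps uptime high when a dependency misbehaves.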
So what’s the upshot?
The good news is that you can address these issues by embedding them into how you operate without having to tip everything on its head. Here are some of our suggestions:
- What problem are we solving? When building a new feature, asking this question helps you see the feature from the perspective of the person using it, get open questions clarified early, and understand more fully the purpose of the work you are doing;
- Learning from incidents: An issue that affects clients should prompt a pause for reflection and a look at whether it is a repeat of a known area of weakness. “How can we get better?” is the question to ask, not “who did this?” If nothing else sinks in, taking away this single bullet point will be worth your while!
- Why are we not testing our system resilience more often? This is key to knowing which areas are overly complex or brittle. Would you be confident that an external ethical hacker couldn't get in and access what they shouldn't? Investing in this area should be part of your business-as-usual (BAU) spend (see the sketch after this list);
- AI smarts: Keep your data safe (including country-level data-residency restrictions for different jurisdictions), make sure the people in this area are training their AI on data that is regularly refreshed, and ask what sort of questions are not being put to the AI so you understand the gaps in the algorithms. This is a complex and emerging technology!
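On the resilience-testing bullet above, a simple place to start is fault-injection tests: simulate a dependency failing and check that the system degrades gracefully rather than falling over. The sketch below uses Python's unittest and mock; get_balance and the cache fallback are hypothetical examples, not a prescribed design.

```python
import unittest
from unittest import mock

# Hypothetical service under test: returns a cached value when the live feed fails.
def get_balance(live_feed, cache):
    try:
        return live_feed()
    except TimeoutError:
        return cache.get("balance", 0.0)  # degrade gracefully instead of erroring

class ResilienceTest(unittest.TestCase):
    def test_falls_back_to_cache_when_feed_times_out(self):
        # Inject the failure: simulate the upstream dependency timing out.
        flaky_feed = mock.Mock(side_effect=TimeoutError("upstream timed out"))
        cache = {"balance": 1250.0}
        self.assertEqual(get_balance(flaky_feed, cache), 1250.0)

if __name__ == "__main__":
    unittest.main()
```

Tests like this run in your normal build, so resilience is checked on every release rather than once a year.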
Why not talk to us to find out how we can help you?