A trillion-dollar warning for bank CISOs
Finance runs on trust: signed software, verified identities, proven integrity. Quantum computing threatens those assumptions, but not in the way most commentary suggests. The real quantum risk shows up when trusted access and long-lived data move without visibility.
A recent Citi Institute report modeled what would happen if a quantum-enabled disruption hit a top-five U.S. bank’s access to a critical payment rail. Citi estimated $2 to $3.3 trillion in GDP-at-risk from a single-day disruption and the recession scenario that follows, roughly a 10 to 17 percent hit to annual GDP.
That is the board-level why. The operational why is more direct. Quantum exposure already exists anywhere sensitive data is being copied into places you don’t monitor. In financial services, it shows up in three familiar areas: human behavior, privileged access, and unstructured data.
Quantum will eventually undermine some public-key encryption and signature assumptions. The more urgent question for boards today is simpler: how much sensitive data is already leaving your environment under the cover of normal work, ready to be decrypted later under a “harvest now” model?
Why quantum readiness isn’t just a crypto upgrade
Post-quantum cryptography (PQC) matters. Standards are moving, vendors are updating libraries, and migration planning is real work. But PQC only protects the paths you know about.
Finance has a different problem: sensitive information rarely stays in the systems designed to protect it. It spreads. It gets copied. It gets transformed. And it often ends up in unstructured places where controls are weakest.
What “harvest now, decrypt later” means for banks
The key concept is harvest now, decrypt later (HNDL). Adversaries don’t need functional quantum computers today. They can intercept or steal encrypted data now, store it, and wait. When quantum capability is sufficient to break widely used public-key schemes, long-lived financial data exfiltrated years earlier can become readable.
Financial institutions are obvious targets because so much of their data retains value for a decade or more:
- Transaction histories and payment archives
- Customer identity and KYC records
- Wealth management and high-net-worth profiles
- Trade, model, and strategy data
- Loan, mortgage, and securities documentation
Citi cites a 19 to 34 percent probability that public-key encryption is broken at scale by 2034, rising to 60 to 82 percent by 2044. Even conservative timelines fall inside typical retention periods for high-value financial data.
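One way to make that timeline argument concrete is Mosca's inequality: if the years your data stays sensitive plus the years your migration takes exceed the years until a cryptographically relevant quantum computer (CRQC) arrives, data harvested today is already at risk. The numbers below are illustrative, not estimates from the Citi report.

```python
# Illustrative sketch of Mosca's inequality: exposure exists when
# (data shelf life x) + (migration time y) > (years to a CRQC z).
def quantum_exposed(shelf_life_years: float,
                    migration_years: float,
                    years_to_crqc: float) -> bool:
    """Return True if data harvested today could still be sensitive
    by the time it becomes decryptable (x + y > z)."""
    return shelf_life_years + migration_years > years_to_crqc

# Hypothetical inputs: KYC records retained 10 years, a 5-year PQC
# migration, and a CRQC assumed 12 years out.
print(quantum_exposed(10, 5, 12))   # already inside the risk window
print(quantum_exposed(2, 1, 12))    # short-lived data, outside it
```

The point of the exercise is that retention period, not quantum hardware progress, is usually the dominant term for financial data.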
The real vulnerability is behavior plus unstructured data
Most quantum conversations focus on algorithms and hardware. Boards hear about NIST standards, key encapsulation mechanisms, and timelines. That work is critical, but it only protects what those algorithms touch. In reality, the data that would be most damaging in a quantum era rarely stays confined to carefully managed systems.
Over time, sensitive financial data flows into:
- Spreadsheets and analysis workbooks
- Local exports and offline reports
- Email attachments and chat threads
- Shared drives and collaboration spaces
- Screenshots, PDFs, and presentations
- Unstructured logs in internal systems
- AI and GenAI tools used for quick analysis
Each copy can be exfiltrated quietly today by a malicious insider, a compromised contractor, or an external actor abusing legitimate access. Many of these flows look like normal work. If activity is not monitored at the behavior level, they’re rarely flagged in real time. Quantum doesn’t change how insiders move data. It changes the payoff.
For example:
- A data engineer exports a large slice of transaction history for a model experiment and leaves it in a personal cloud folder.
- A relationship manager forwards detailed client reports to a personal inbox to finish work after hours.
- An analyst uses a Shadow AI assistant to summarize internal risk reports, not realizing that the underlying content is stored outside of governed systems.
These are everyday behaviors. In a quantum context, every uncontrolled export of long-lived sensitive data is a latent breach, even if it looks harmless now.
Privileged access will feel quantum pressure first
Citi walks through what a cryptographically relevant quantum computer could do to public-key encryption and digital signatures. When authenticity signals weaken, identity becomes a less reliable proxy for intent. In financial services, that intersects directly with insider risk.
As trust degrades, security leaders have to ask different questions:
- Is this how this user typically accesses this system?
- Is this the normal volume, frequency, and destination for this kind of data?
- Does this pattern match expected work for this role, at this time, from this location?
- Does GenAI use align with policy, or does it introduce a new export path?
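Each of those questions reduces to comparing current activity against a per-user or per-role baseline. A minimal sketch of one such comparison, using export volume as the signal (the threshold and field names are illustrative, not a DTEX implementation):

```python
import statistics

def export_anomaly_score(history_mb: list[float], today_mb: float) -> float:
    """Z-score of today's export volume against this user's own history.
    A score above ~3 suggests a deviation worth reviewing."""
    if len(history_mb) < 2:
        return 0.0  # not enough history to judge
    mean = statistics.fmean(history_mb)
    stdev = statistics.stdev(history_mb)
    if stdev == 0:
        return 0.0 if today_mb == mean else float("inf")
    return (today_mb - mean) / stdev

# Hypothetical user who normally exports 40-60 MB/day suddenly moves 4 GB.
baseline = [45, 50, 55, 48, 52, 60, 41]
score = export_anomaly_score(baseline, 4000)
print(score > 3)  # flag for review
```

Real behavioral analytics combine many such signals (destination, time of day, role, peer group), but the principle is the same: the question is not "did data move," it is "is this how this person normally works."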
Quantum risk doesn't only threaten algorithms. It also exposes how much we have leaned on identity infrastructure as a proxy for intent.
Regulators are moving faster than many banks expect
On the standards side, NIST has published three PQC standards: FIPS 203 (ML-KEM, key encapsulation), FIPS 204 (ML-DSA, signatures), and FIPS 205 (SLH-DSA, signatures).
On the policy side, governments are treating harvest now, decrypt later as a current risk driver, not a future event. A 2024 U.S. federal report on PQC migration explicitly calls out these attacks and prioritizes migration for systems containing data that would still be sensitive in 2035.
In Europe, the European Commission and Member States have published a coordinated PQC implementation roadmap, with milestones that include migrating high-risk systems by 2030 and broader transition goals extending to 2035.
In Israel, the Bank of Israel issued banking-system preparedness guidance that highlights HNDL as a key risk driver.
For the financial sector specifically, the G7 Cyber Expert Group has also published a roadmap for coordinated transition planning.
Regulators are paying attention to two things. First, whether institutions are migrating to quantum-safe cryptography. Second, whether institutions understand where sensitive data lives, who has access, and how it moves. The first can be answered with architecture diagrams and roadmaps. The second requires evidence.
Evidence looks like:
- Inventory of where public-key cryptography is used across systems and third parties
- Mapping of long-lived data sets, including unstructured and user-managed locations
- Monitoring of privileged and third-party access patterns
- Controls that detect and respond to unusual behavior around sensitive data
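That evidence starts life as a structured inventory. A minimal sketch of what one record might capture, and how long-lived, widely copied assets could be surfaced first (the field names and scoring are illustrative, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class CryptoAsset:
    """One row in a hypothetical quantum-readiness inventory."""
    system: str
    public_key_algorithms: list[str]       # e.g. RSA-2048, ECDSA P-256
    data_shelf_life_years: int             # how long the data stays sensitive
    privileged_accessors: list[str]        # roles and vendors with broad reach
    unstructured_copies: list[str] = field(default_factory=list)

def migration_priority(asset: CryptoAsset) -> int:
    """Crude priority: long-lived data with many uncontrolled copies first."""
    return asset.data_shelf_life_years * (1 + len(asset.unstructured_copies))

inventory = [
    CryptoAsset("payments-archive", ["RSA-2048"], 15,
                ["dba", "payments-vendor"],
                ["shared-drive", "analyst-export"]),
    CryptoAsset("marketing-site", ["ECDSA P-256"], 1, ["webops"]),
]

# Sort so the longest-lived, most-copied assets surface first.
for a in sorted(inventory, key=migration_priority, reverse=True):
    print(a.system, migration_priority(a))
```

Even a crude score like this makes the regulator conversation concrete: it ties cryptography inventory, data longevity, and uncontrolled copies together in one artifact.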
This is where behavioral analytics and risk-adaptive data loss prevention (DLP) become part of quantum readiness, not just insider risk hygiene.
What financial security leaders can do now
Quantum risk is a multi-year journey, but the actions that reduce exposure most are available today:
- Prioritize long-lived data sets. Start with data that would still be damaging if exposed in 10 to 20 years. Combine your existing classification with behavioral lineage to see where those assets actually live.
- Map human and AI behavior around those data sets. Understand who accesses sensitive data, how often it’s exported, where it travels, and which AI tools show up in the workflow.
- Tighten monitoring of privileged and third-party access. Focus on administrators, developers, vendors, and service accounts with broad reach. Look for behavioral deviations, not just new accounts.
- Align PQC migration with exposure reality. Use behavioral and data movement telemetry to prioritize which systems, vendors, and use cases migrate first.
- Prepare regulator-ready evidence. Document not only your cryptography plans, but also how you monitor long-lived data, measure behavioral risk, and enforce risk-adaptive policies.
Why behavior and data lineage need to be the control plane
Most legacy controls see artifacts rather than intent. Traditional DLP sees files and patterns. Identity systems see credentials and groups. Network tools see protocols and ports. Those are useful signals, but they rarely explain why data is moving. In a quantum-aware world, boards and regulators will ask a different set of questions:
- Which data sets would cause the most damage if decrypted in 10 to 20 years?
- Which roles and third parties touch those data sets most frequently?
- Where do those data sets travel once they leave core systems?
- Can you distinguish normal work from staging and quiet exfiltration behavior?
- Can you intervene before a quiet harvest becomes permanent?
Answering those questions requires behavioral context plus lineage. That’s where DTEX is designed to help.
How DTEX supports quantum readiness in financial services
DTEX helps financial institutions reduce HNDL exposure by surfacing risky data movement before it turns into exfiltration, and applying risk-adaptive controls that detect, deter, or disrupt last-mile leakage. With behavioral analytics and data lineage, teams get the context to act early, protect crown jewels, and prove control when it matters most.
The bottom line
Quantum computing will force the largest cryptography upgrade in history. For financial services, the more urgent question is how much long-lived sensitive data is already exposed through everyday behavior. Reduce what can be harvested now, and you reduce what can be decrypted later.
FAQ: quantum cyber risk in finance
When will quantum computers break current encryption?
No one can name an exact date, but credible estimates and regulators treat it as a planning risk within the next 10 to 20 years. That's why migration to post-quantum cryptography is underway now.
What is post-quantum cryptography (PQC)?
PQC refers to cryptographic algorithms designed to resist attacks from both classical and quantum computers. NIST has standardized initial PQC algorithms and provides migration guidance for how to adopt them.
What does harvest now, decrypt later mean?
It's the strategy of stealing or intercepting encrypted data today, storing it, and decrypting it later once quantum makes current public-key cryptography breakable.
How does quantum risk relate to insider risk?
It increases the value of what insiders (or compromised accounts) can quietly take today. If long-lived financial data is copied into unmonitored places, quantum decryption later turns that quiet exposure into an irreversible breach.
Why do digital signatures matter?
Digital signatures underpin software trust and identity verification. If signature schemes become forgeable, malicious software can appear trusted and attackers can impersonate legitimate publishers or services.
Where should a financial institution start?
Start with inventory: where do you rely on public-key cryptography and signatures, and which long-lived data sets would be damaging if decrypted later? Then prioritize migration and add behavior-based monitoring around those assets.






