Retro Settlement Nuisance: The RSN Paradox Championed by Deloitte and Friends
Examining the RSN PoC testing distributed ledger technology for financial markets. Grand promises, outdated tech, and a puzzling vision for the future of settlement.
Before the advent of AI, a bombshell like SIFMA releasing three long documents about a new settlement and financial market infrastructure over a weekend would have been impossible to read and form an opinion on before Monday morning. It’s still daunting, but within the realm of possibility. So let’s tackle the “Regulated Settlement Network” (RSN) Proof-of-Concept (PoC), neatly summarised in three documents:
A business applicability report (approximately 55 pages)
A technical feasibility report (approximately 40 pages)
A legal viability report (almost 100 pages)
Lawyers are twice as chatty as business and product folks, it seems.
The first line you read about the RSN PoC declares that it
"sets out to explore the capability of shared ledger technology to address how the above risks and inefficiencies could be reduced through tokenization [..].”
What "above risks"? Are they risks I should be aware of but are conveniently displayed somewhere above my screen? The mystery of the vanishing context continues.
RSN seems to be the reincarnation of the Regulated Liability Network (RLN), which announced this PoC in May, initially under the RLN banner and then rebranded as RSN. By focusing on settlement, RSN shifts the conversation to operational efficiency rather than regulatory oversight of deposits or potential conflicts with central bank policy, along with other politically sensitive topics. Or at least, that’s my educated guess.
RLN, does it ring a bell? Oh yes, I wrote about this before: RLN, Really Lofty Notions. Not bad, Deloitte, but I’m onto you. So, RSN—what shall we call it? Rarely Settled Network or Retro Settlement Nuisance? Let’s go with the latter for now. If I think of a better name, I’ll simply change it without telling anyone.
The RSN Hypothesis
I wonder what it was like in those working groups—perhaps a bit like those international conferences with simultaneous translation between languages. Maybe that’s why the participants never noticed that what they thought they were analysing was, in fact, something else entirely: the meaning got lost in translation.
SIFMA is quoted as saying:
“The project is a valuable opportunity to explore the potential of distributed ledger technology (DLT).”
Or TD Bank:
“The RSN initiative proved that distributed ledger technology can provide significant efficiencies [..].”
Distributed ledger technology! Yet all the reports use a different term: shared ledger technology. Huh?
"Shared ledger" is far less established than DLT and could refer to all sorts of arrangements, such as Splitwise, an app that lets you split a restaurant bill among friends using a shared ledger—but nothing resembling a distributed ledger. A shared ledger could simply mean any collaborative, centralised digital record-keeping system. It’s not synonymous with DLT or blockchain, which introduce specific technical and architectural principles aimed at decentralisation and trustless operation.
But the authors love inventing their slightly-off terminology. Their ‘SLT’ (not DLT) provides a “common source of truth” instead of a single source of truth, “simultaneous settlement capabilities” instead of RTGS DvP, and “reliability” instead of system resiliency. They conflate automation and programmability to the point of being misleading. Automation reduces human intervention, but programmability allows complex conditional instructions ("if this, then that"). Not all automation is programmable, and not all programmable systems are automated.
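To make that distinction concrete, here is a minimal sketch in Python (my own illustration with hypothetical names, nothing taken from the PoC): the first function is plain automation, the second is programmability in the "if this, then that" sense.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Instruction:                 # hypothetical shape, not the PoC's data model
    amount: float
    value_date: date

def settle(instruction: Instruction) -> str:
    return "settled"               # stand-in for the actual booking step

# Automation: a scheduled job repeating the same step, with no conditions attached.
def run_nightly_batch(queue):
    return [settle(i) for i in queue]

# Programmability: conditional "if this, then that" logic attached to the instruction.
def settle_if_conditions_met(i: Instruction, cash: float, securities_ok: bool) -> str:
    if i.value_date > date.today():
        return "pending"                          # not yet due
    if cash < i.amount:
        return "rejected: insufficient cash"
    if not securities_ok:
        return "rejected: securities not in place"
    return settle(i)

print(settle_if_conditions_met(Instruction(1_000_000, date(2024, 1, 2)),
                               cash=500_000, securities_ok=True))
# -> rejected: insufficient cash
```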
And they do it again with immutability when they conflate it with auditability. Blockchain’s immutability ensures data integrity by making records tamper-evident, but auditability depends on how those records are interpreted and verified. A hash itself proves nothing unless it can be linked back to comprehensible and verifiable transaction data.
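A toy example of that last point (again my own, not from the report): hashing makes tampering evident, but the hash alone tells you nothing about whether the underlying record is complete, correctly interpreted, or verifiable against source systems.

```python
import hashlib
import json

record = {"cusip": "123456AB7", "amount": 1_000_000, "price": 99.5}   # hypothetical trade data

def fingerprint(data: dict) -> str:
    # Tamper-evidence: any change to the record changes the hash.
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

original_hash = fingerprint(record)
tampered = dict(record, amount=2_000_000)

assert fingerprint(tampered) != original_hash   # tampering is detectable...

# ...but the hash alone proves nothing about the business content: without the
# original record, its meaning, and a way to verify it against source systems,
# "auditability" does not follow from "immutability".
print(original_hash)
```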
This raises so many questions. Why, for one. And did anyone notice? And if not, why not? The authors then promise that if their ‘SLT’ PoC could demonstrate that these unclear requirements could be met,
“an always-on, interoperable, and programmable industrywide settlement infrastructure”
could follow. A programmable infrastructure. DTC is ‘programmable,’ I suppose, since they run software which could be updated and taught new tricks. See—they do it again. Declaring things as if they are obvious and straightforward when they are anything but.
Technology environment
What did they have? A sandbox, with GUI access only. What does that imply? A sandbox is, by definition, a controlled and isolated testing environment. This means that any "settlement" achieved in the PoC has no bearing on the operational complexities or risks involved in real-world settlement processes. Settlement in the sandbox does not translate to legal or operational settlement in production systems, where liabilities, compliance, and governance come into play. They used 'simulated, dummy data' but claim to have proven actual business acceptance.
GUI (Graphical User Interface) access typically suggests a front-end application where users manually input or view data. This is counter to the thesis of automated and highly efficient next-gen technology-enabled settlement systems. If participants interacted with a web portal without connection to bank systems, as stated in the report, this could just as easily have been a mockup or a PowerPoint presentation streamed as a web app.
My point is that such an approach is, by definition, unsuitable to support the grandeur of the claims this report wishes to make. Grandstanding without a proper foundation. To be honest, it feels almost pointless to read the rest, and I’m only on page 6.
Operating model
What we would get is a Financial Market Infrastructure (FMI) operating a shared ledger that
“consists of tokenized securities and tokenized central bank and commercial bank deposits where each institution operates its own partition.”
There's no standard concept of "partitions" as logical or functional subdivisions within a single DLT system. The term could be interpreted in different ways, none of which fully capture what they seem to describe. If they mean creating separate DLT instances for different participants, that's not a "partition" in the traditional sense but a wholly independent chain. These would have no intrinsic relationship unless interoperability layers are explicitly built (e.g., bridges or APIs). If they are trying to say that a single DLT instance could logically partition data (e.g., by tagging or isolating certain transactions), this is not native to blockchain/DLT design.
Technical design
The report mentions Digital Asset as a technology partner. DAML, which is software developed by Digital Asset, introduces the notion of something they call a "partition." In contrast to the conventional understanding of DLT, a system built with their model avoids having a global state. Instead, each participant node handles its own transactions, only interacting with others as necessary. They call this "sync domains," which serve as coordination points but don't require every participant to process every transaction (unlike a traditional blockchain). It seems that what they mean by "partition" is logically splitting participants and transactions into different nodes or sync domains. However, this isn’t a true "partition" in the DLT sense—it’s more like isolated sub-networks. This is closer to sharding or localized processing than traditional DLT. A blockchain operates on global consensus, where every node has a complete copy of the ledger. By removing the global state and enabling localized transaction processing, they’ve veered closer to a federated or hybrid system with a microservices architecture and optional shared views. Calling this "blockchain" or "DLT" is a stretch!
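Here is a rough sketch of the architectural difference as I read it (simplified and entirely hypothetical; this is not Digital Asset’s actual implementation): in the first model, every node holds the full ledger; in the second, each participant stores only the transactions it is party to, and a coordination point relays what is shared.

```python
from collections import defaultdict

# Model 1: "classic" DLT, where every node replicates the full, global ledger.
class GlobalLedgerNode:
    def __init__(self):
        self.ledger = []                  # complete copy of every transaction

    def apply(self, tx):
        self.ledger.append(tx)            # every node sees every transaction

node_a, node_b, node_c = GlobalLedgerNode(), GlobalLedgerNode(), GlobalLedgerNode()
for node in (node_a, node_b, node_c):
    node.apply({"id": 1, "asset": "BOND-A"})      # replicated to all nodes

# Model 2: per-participant stores with a coordination point (a "sync domain"-like role).
class SyncDomain:
    def __init__(self):
        self.participants = defaultdict(list)

    def submit(self, tx, parties):
        # Only the parties to a transaction receive it; there is no global state.
        for p in parties:
            self.participants[p].append(tx)

domain = SyncDomain()
domain.submit({"id": 1, "asset": "BOND-A"}, parties=["BankA", "BankB"])
domain.submit({"id": 2, "asset": "BOND-B"}, parties=["BankB", "BankC"])

assert {"id": 1, "asset": "BOND-A"} not in domain.participants["BankC"]   # BankC never sees tx 1
```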
Designing an operating model based on this technology approach comes with a cost because transactions processed this way generate multiple copies of data:
Encrypted envelopes for participants.
Decrypted contracts for private stores.
Metadata for the sync domain and sequencers.
Digital Asset, in their own documentation, estimates storage requirements as 5x or more for each transaction, depending on complexity. If every transaction needs to be stored and indexed multiple times, the storage costs could scale disproportionately. This is especially problematic for high-frequency systems like financial markets.
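Some back-of-the-envelope arithmetic on what a 5x multiplier implies, using purely illustrative numbers of my own:

```python
# Hypothetical figures, for illustration only.
transactions_per_day = 500_000        # assumed daily settlement volume
bytes_per_transaction = 2_000         # assumed average payload size
copies_per_transaction = 5            # the "5x or more" estimate from their documentation

daily_storage_gb = (
    transactions_per_day * bytes_per_transaction * copies_per_transaction / 1e9
)
yearly_storage_tb = daily_storage_gb * 250 / 1_000    # ~250 settlement days per year

print(f"{daily_storage_gb:.1f} GB per day, ~{yearly_storage_tb:.2f} TB per year")
# 5.0 GB per day, ~1.25 TB per year: modest in isolation, but it scales linearly with
# volume, payload size, and the replication factor, and excludes indexing overhead.
```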
They aim for asynchronous submissions to improve throughput, but they warn that conflict detection must remain sequential, which can be a bottleneck. The reliance on sequential conflict detection highlights a fundamental scalability issue. If high throughput depends on perfect workflow design, the system might not handle real-world complexities well.
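A minimal sketch of why a sequential conflict-detection stage caps throughput even when submissions arrive asynchronously (again my own illustration, not their actual design):

```python
import queue
import threading

submissions = queue.Queue()
spent = set()                          # naive record of already-consumed assets

def submit_async(tx_id, consumes):
    submissions.put((tx_id, consumes))        # submitters can run in parallel

def conflict_detector():
    # To catch double-spends, submissions are checked one at a time, so overall
    # throughput is capped by this single sequential stage.
    while True:
        tx_id, consumes = submissions.get()
        if consumes in spent:
            print(f"tx {tx_id}: rejected (conflict on {consumes})")
        else:
            spent.add(consumes)
            print(f"tx {tx_id}: accepted")
        submissions.task_done()

threading.Thread(target=conflict_detector, daemon=True).start()
for i in range(3):
    submit_async(i, consumes="BOND-A")        # all three try to consume the same asset
submissions.join()
```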
They borrow DLT terminology without adhering to its core principles, creating a confusing mix of ideas, and it is unclear whether the authors realize this problem or understand the subject matter sufficiently.
Technical findings
And it gets worse because, despite all these constraints, they claim the following technical findings:
“The shared ledger technology enabled synchronized balance sheets across participants, eliminating traditional delays associated with proprietary databases and batch processing [..] leveraging interoperability solutions, such as the Swift interlinking prototype and direct API integration [..]”
How can this be if they just told us participants operated in a sandbox with GUI access only? Synchronizing balance sheets would require automation and real-time API integrations with participants' systems, harmonization of internal accounting rules, profit attribution, and cost allocation principles, as well as changes to financial and accounting workflows. Simply put, a GUI-only setup cannot achieve this level of integration or synchronization—it’s just a manual interface, disconnected from the operational backbone.
These conflicting statements undermine the credibility of the report. Either they are overhyping what they’ve achieved in the sandbox (claiming capabilities far beyond what was realistically tested), or they’re glossing over the vast technical and operational complexities involved in implementing this in a real-world, regulated environment. Neither option is a good look.
Incoherent Legal Framework
I have to say, what they wrote about the legal aspects of this model is probably the worst I have ever read on such topics. There is a core contradiction forming the basis of their argument:
On one hand, RSN doesn’t "hold" assets, and the legal framework is claimed to remain unchanged.
On the other hand, the text hints at operational realities where RSN plays a central role in handling and recording asset ownership and transfers, potentially requiring regulatory exemptions or oversight.
This duality—claiming RSN changes nothing legally because the tokens have no “independent legal significance,” while simultaneously aiming for “a system that includes holding and transfers of both deposits and securities”—is both absurd and untenable. I cannot understand how this could possibly make any sense.
And how do they aim for settlement finality—a legal concept dealing with the moment and robustness of ownership transfers—when their system is not used to record ownership or transfers? How can this FMI claim legal relevance over assets it doesn’t manage? It’s frighteningly absurd, especially coming from SIFMA and a group of well-respected banks and market participants.
It’s possible that I am completely misunderstanding their proposal, but I guess that’s a risk I have to take.
Incoherent Claims: Liquidity Management
“Better liquidity management: Real-time visibility into a firm’s collateral and cash position while not relying on batch cycles can allow firms to better manage their liquidity and optimize collateral.”
Fedwire and many bank cash management systems already provide real-time or near-real-time updates on cash positions. The real-time element is inherent in these systems because they directly process and settle transactions. The problem lies not with the cash system but potentially with batch processes upstream or downstream, such as determining how a single market payment splits between underlying beneficiaries. Delays can arise when the information needed to allocate funds is not integrated into the payment itself. This is an issue that cannot be solved by introducing an FMI that lacks direct control over, or access to, the underlying assets or any insight into the purpose of market movements.
The RSN claims to provide "real-time visibility into a firm's collateral and cash position." However, this would only be true if RSN were directly integrated into banks' operational systems and could reflect transactions as they occur across all relevant systems. Without this integration, RSN would merely add another layer of abstraction, introducing potential latency rather than improving real-time visibility.
It is illogical to suggest that banks would rely on an external FMI to provide insight into their liquidity positions. Banks already maintain internal systems specifically designed to track cash, collateral, and liquidity. If these internal systems are batch-based, the solution lies in upgrading those systems, not adding an FMI that operates outside the actual flow of assets.
Incoherent Claims: Settlement of a Bond
RSN describes a few assumptions:
“Trade terms, such as security CUSIP, trade amount, and price, are agreed upon prior to being submitted to the RSN. All payment data is transferred in ISO 20022 format, including data necessary for compliance checks by each of the parties. [..] KYC, AML, and CFT checks [..] was out of scope for the PoC. Prior to a final transaction signature, any party of the transaction can reject the transaction proposal, even if previously accepted.”
If we’re talking about securities, I send one instruction—not payment and securities separately. If I have a matched trade pending settlement in DTC, and RSN rejects it, I still have the obligation in DTC. AML and sanctions checks, which are notoriously difficult and critical, are conveniently ignored as being out of scope for the PoC. What exactly is this achieving? It’s unclear to me. Is the idea to copy over DTC data into RSN so that I can look at information I already have? The RSN won’t provide a bank with more visibility than their existing systems.
The assumptions presented also raise further questions:
Trade terms, such as security CUSIP, trade amount, and price, are agreed upon prior to being submitted to the RSN FMI for on-chain matching and distribution of settlement approval requests. But if all these terms are already agreed upon, what added value is RSN providing?
Any party to the transaction can reject the transaction proposal prior to a final transaction signature, even if it was previously accepted. This introduces operational uncertainty. What happens when one party rejects the transaction in RSN while DTC processes it as settled? How is this scenario reconciled?
Ultimately, RSN appears to function as a redundant overlay without addressing the complexities or adding meaningful improvements.
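To make the reconciliation question above concrete, here is a hypothetical sketch (invented states and references, not the PoC’s actual workflow): once the two systems can reach different terminal states for the same trade, someone has to detect and resolve the break.

```python
# Hypothetical terminal states for the same matched trade in two systems.
dtc_state = {"trade_ref": "T-001", "status": "settled"}     # DTC processed the obligation
rsn_state = {"trade_ref": "T-001", "status": "rejected"}    # a party rejected it in RSN

def reconcile(dtc: dict, rsn: dict) -> str:
    if dtc["status"] == rsn["status"]:
        return "in sync"
    # The report does not say which record wins or who resolves the break;
    # that is exactly the operational uncertainty the rejection right creates.
    return f"BREAK on {dtc['trade_ref']}: DTC={dtc['status']} vs RSN={rsn['status']}"

print(reconcile(dtc_state, rsn_state))
```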
Retro-Futuristic Technology Choices
And to make this perfect, the RSN proposes to use Receive vs Payment (MT541) and Delivery vs Payment (MT543) messages and the like. So not only do they plan to continue using the outdated ISO 15022 standard—which is almost comical given that we’re talking about some form of distributed ledger technology—but going forward, a single business event would require both an MT and an MX message, plus additional digital signatures.
Sorry, I was premature—the icing on the cake is the glossary at the end of the business document (yes, I skipped about 20 pages in the business applicability report because it’s pages upon pages of absurdities). There we read this:
“Transparency: Characteristic of blockchain technology that allows all participants to view and verify transactions on the network.”
In case you forgot, the RSN hypothesis is that they will bring more transparency. But not the kind of transparency they define, because the RSN explicitly does not allow participants to view and verify all transactions. It is a completely irrational document that claims one fact and denies it the next moment without seeing any problem with this contradiction.
I will have a look at the technical and legal documents in more detail, but I guess this blog covers it all for now.