Blog · 13 Jan 2020

How to unlock the potential of post-trade data

Could data standardisation help solve wider industry issues?

Yousaf Hafeez
Head of Business Development for Financial Solutions

To understand the benefits of standardisation, you don’t need to look much further than the example set by Henry Ford.

While he may not have invented the modern assembly line, Ford pioneered the use of standardised production methods to make his eponymous cars, reducing errors and lowering costs. The end result was a car that, for the first time in history, was affordable to the average person.

The value of data

In financial services, the most valuable commodity we have at the moment is arguably data. Yet while data is often claimed to be the ‘new oil’ or the ‘new gold’, the cost of compiling, processing, analysing and making use of this data continues to be eye-wateringly high.

The amount of post-trade data that institutions and financial firms now need to produce to satisfy regulatory reporting requirements has also increased significantly, and is likely to keep growing.

So how can you make better, more efficient use of the available data feeds?

Increasing data transparency and regulatory requirements

The answer may actually lie in the original aims of the G20 post-crisis financial reforms. Many of these are well known, such as increasing transparency and preventing market abuse, but a further aim was to standardise the derivatives markets. New requirements around trade reporting, new margin rules and a raft of new regulations, including MiFID II, SFTR, CSDR and FINRA’s CAT, have since pushed standardisation onto the back burner.

This means the post-trade data necessary to achieve the required levels of transparency continues to be fragmented. The way data is handled, managed and understood can vary widely by firm, by institution or even by exchange. Different trading venues are each likely to use their own systems for managing data, and as a result the ability to compare data sets across the board and spot meaningful market trends is greatly impaired.
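To make that fragmentation concrete, here is a minimal Python sketch, with entirely hypothetical venue formats and field names, of the kind of mapping layer firms end up writing before any cross-venue comparison is possible:

```python
from dataclasses import dataclass

@dataclass
class CanonicalTrade:
    # A hypothetical common shape for a post-trade record.
    instrument_id: str
    quantity: float
    price: float
    currency: str

def from_venue_a(raw: dict) -> CanonicalTrade:
    # Venue A (made up) nests the economics under a "details" object.
    d = raw["details"]
    return CanonicalTrade(raw["isin"], d["qty"], d["px"], d["ccy"])

def from_venue_b(raw: dict) -> CanonicalTrade:
    # Venue B (made up) reports size in lots with a flat layout.
    lot_size = 100
    return CanonicalTrade(raw["instrument"], raw["lots"] * lot_size,
                          raw["price"], raw["currency"])

# Only once every venue is mapped into the same model can data sets
# be compared across the board for meaningful market trends.
trades = [
    from_venue_a({"isin": "EXAMPLE-ISIN",
                  "details": {"qty": 1_000_000, "px": 99.7, "ccy": "EUR"}}),
    from_venue_b({"instrument": "EXAMPLE-ISIN", "lots": 10_000,
                  "price": 99.8, "currency": "EUR"}),
]
```

Every firm maintaining its own version of this translation layer is pure duplicated cost; a shared standard would let the industry write it once.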

We’re stronger together

Yet change may now be on the horizon. For example, ISDA has already created the Standard Initial Margin Model (SIMM), which prompted a successful post-trade collaboration of leading financial services firms globally, built upon an existing network developed by AcadiaSoft. The banks recognised that rather than each trying to build something unilaterally, they would all benefit from building something centrally.
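ISDA publishes the actual SIMM methodology and its calibrated parameters; purely to give a flavour of what a centrally agreed model standardises, here is a deliberately simplified Python sketch of the sensitivity-based aggregation idea behind such models, using placeholder risk weights and a placeholder correlation:

```python
import math

def aggregate(sensitivities, risk_weights, correlation):
    """Simplified within-bucket aggregation of weighted sensitivities:
    K = sqrt(sum_i WS_i**2 + sum_{i != j} rho * WS_i * WS_j),
    where WS_i = risk_weight_i * sensitivity_i.
    """
    ws = [rw * s for rw, s in zip(risk_weights, sensitivities)]
    total = sum(w * w for w in ws)
    total += sum(correlation * ws[i] * ws[j]
                 for i in range(len(ws))
                 for j in range(len(ws)) if i != j)
    return math.sqrt(total)

# Placeholder inputs, not ISDA's calibrated values: two offsetting
# delta sensitivities aggregated into a single margin figure.
margin = aggregate([1_000_000.0, -400_000.0], [0.005, 0.005], 0.5)
```

The value of agreeing such a model centrally is that both counterparties compute the same number from the same inputs, removing a whole class of margin disputes.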

ISDA is also leading the way with the creation of its Common Domain Model (CDM) 2.0, which it believes will promote transparency and alignment between regulators and market participants. The CDM is an innovative move to build standard representations not only of the data that exists today, but of the lifecycle events and processes that act on it, opening the way to continuous standardisation of how that data is produced and processed.
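The CDM itself is distributed in its own formats rather than as the toy classes below, but a rough sketch can illustrate the idea of standardising events as well as data: when every firm represents “what happened to the trade” the same way, the processing logic can be shared too.

```python
from dataclasses import dataclass
from enum import Enum

class EventType(Enum):
    EXECUTION = "execution"
    PARTIAL_TERMINATION = "partial_termination"

@dataclass(frozen=True)
class TradeEvent:
    # Illustrative only: not the real CDM structure.
    trade_id: str
    event_type: EventType
    effective_date: str        # ISO 8601 date
    notional_change: float = 0.0

def apply_event(notional: float, event: TradeEvent) -> float:
    """One shared processing function instead of one per firm."""
    if event.event_type is EventType.PARTIAL_TERMINATION:
        return notional + event.notional_change   # change is negative
    return notional

# Any participant emitting and consuming the same event shape can
# share this logic: the processing half of what the CDM promises.
notional = apply_event(
    10_000_000.0,
    TradeEvent("TRADE-1", EventType.PARTIAL_TERMINATION,
               "2020-02-01", -2_500_000.0),
)
```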

A joint model of this kind is an ideal industry solution, one that could be applied to a vast number of rules, both existing and looming on the horizon. The market needs to work together to minimise the disruption created by regulatory change, which can then be handled in manageable stages rather than as one ‘big bang’.

The demand for standardised data

Demand for standardised data is also on the rise outside of these regulatory use cases. The financial industry is now embracing cutting-edge artificial intelligence (AI) and machine learning (ML) tools that help automate day-to-day business, reducing costs and improving efficiency. But to realise their full potential and achieve the desired results, these tools need significant volumes of standardised data feeds.
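As a small illustration of why these tools depend on standardised inputs, the sketch below (hypothetical records again) runs one trivial anomaly screen over trades from any number of venues; a single code path like this is only possible because every record already shares the same schema:

```python
from statistics import mean, stdev

# Hypothetical standardised records: identical keys whatever the venue.
trades = [
    {"instrument_id": "EXAMPLE-1", "venue": "A", "price": 99.7,  "quantity": 1_000_000},
    {"instrument_id": "EXAMPLE-1", "venue": "B", "price": 99.8,  "quantity": 500_000},
    {"instrument_id": "EXAMPLE-1", "venue": "C", "price": 101.9, "quantity": 750_000},
]

# A deliberately trivial screen: flag prices far from the mean. The
# model is beside the point; the uniform schema is what lets one
# pipeline serve every feed.
prices = [t["price"] for t in trades]
mu, sigma = mean(prices), stdev(prices)
outliers = [t for t in trades if abs(t["price"] - mu) > sigma]
```

Without that uniformity, each venue’s quirks would need their own code path, and the cost of feeding any real ML model quickly becomes prohibitive.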

Perhaps one of the most reliable ways for industry participants to make sure they have access to usable, standardised data is to partner with an experienced third-party infrastructure provider.

We offer an unrivalled global infrastructure which enables our clients to access many of the reporting and data services they require in the fastest and most cost-effective manner possible.

We also support and provide access to data feeds from exchanges, trading venues and clearing organisations around the world, letting our clients analyse and understand the available post-trade data and extract invaluable trading and business intelligence.

These feeds can also be used to augment other data feeds, both historical and real-time. Our recent partnership with Euromoney TRADEDATA, for example, has enabled the industry to access clean, standardised reference data through our global network, helping firms remain compliant and improving processing efficiencies across the trade lifecycle.
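In practice that augmentation is a join between the feed and a standardised reference-data set; the sketch below uses made-up records keyed on a product code, rather than any actual Euromoney TRADEDATA format:

```python
# Hypothetical standardised reference data, keyed by product code.
reference = {
    "FGBL": {"product_name": "Euro-Bund Future", "exchange": "EUREX",
             "tick_size": 0.01},
}

# Raw post-trade records that carry only the product code.
feed = [{"code": "FGBL", "price": 171.53, "qty": 25}]

# Enrichment: merge in the reference attributes each record lacks,
# leaving records with unknown codes untouched for exception handling.
enriched = [{**rec, **reference.get(rec["code"], {})} for rec in feed]
```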

Rethinking our financial lifeblood: data

As data is arguably the lifeblood of the financial markets, it’s now time to collectively rethink how it is managed, accessed and understood. To fully exploit the potential of the enormous volumes of post-trade data, the industry would do well to consider Ford and his success in revolutionising manufacturing and driving down the end cost to the consumer.

The data and its feeds are already being produced; the question now is how to produce and use them more efficiently. By making a few incremental but deliberate steps towards greater standardisation in how data is stored and understood, institutions and firms across the globe can unlock the unrealised potential of this valuable commodity and reap the rewards.
