By Peter Waters, Managing Editor, GlobalTrading.
On 3rd December 2014, GlobalTrading hosted a roundtable discussion to examine one of the key challenges for market operators: how to better capture, analyse and use data to generate revenue.
Getting the data and viewing the data are two very different challenges, and both need to be addressed before a market operator can use data both as a strategic asset for internal purposes and as a revenue-generating asset.
Our roundtable, moderated by long-time market operator expert Ned Phillips, developed around several key themes, one of which is the idea that firms need to adapt constantly to the changing nature of data itself. Market operators need to keep incorporating new data sources alongside existing types of data, which may be structured, semi-structured or unstructured. Most importantly, all of this data needs to be captured and analysed in a comparable way.
A significant area for development, in Asia and globally, is the division between regulatory data and data that can be used and sold as a marketable commodity. The room discussed the consequences of the fact that, in Asia especially, different regulatory regimes determine at what point data can be released and to whom, so market operators need to be aware of their obligations. The other side of this debate concerns data standards: what standards, if any, exist for capturing, time-stamping, formatting and storing data. These questions need to be addressed before an exchange can fully appreciate just what freedom it has to monetise its data.
Data is one of the major areas in which exchanges now realise they can generate increasing revenues. For firms looking to provide data services to exchanges, the key is to capture and analyse all of the available data and let clients decide what is useful, rather than deciding for them. One differentiating factor to consider is the divide between institutional and retail clients, and the big difference in their demands and needs. The exchanges present each offered their own view of their precise retail/institutional mix and how it had affected their decisions.
Institutional clients constantly seek to benchmark themselves against their peers and competitors, whereas retail clients define their data needs differently, with the potential for data piracy being a concern. Levels of service and price sensitivity need to be flexible enough to allow each recipient of the data to tailor their services. However, the point was made that charging for a given asset ascribes value to it, so giving away certain datasets for free might not be the best use of that data.
The progression in the way exchanges deal with data ties into the wider theme of exchanges adopting behaviour from the sell-side, a recurring theme throughout the roundtable. Exchanges are increasingly going to clients with products and looking to generate revenue more directly and actively. This trend will continue to grow as market operators realise that they need to be more proactive, both in making offerings to clients and in anticipating client needs before those clients recognise them themselves. The holistic dataset that exchanges hold, once properly analysed and visualised, can go a long way towards driving sales.
The biggest subsequent question from market operators is one of peer comparison: “What are the other regional exchanges doing?” This feeds into how market operators view themselves: what are their own core competencies and competitive advantages, and how can new datasets and visualisations maximise them? These questions are not simple, and each of the exchanges in the room had a slightly different interpretation of the best mix for its customers, the common factor being the use of data to better analyse the options available.