TCA FOR FIXED INCOME – REALLY?
Adoption of TCA for fixed income is showing signs of acceleration, but approaches and opinions on best practice differ greatly from firm to firm. Mike Googe, Global Head of Post-Trade Transaction Cost Analysis at Bloomberg, considers the key challenges and factors in implementing an efficient and effective fixed income TCA policy.
There are people who question whether it is possible to gain any kind of value or insight by conducting TCA for fixed income (FI). While there are limitations to FI TCA, it can and indeed does deliver value, and more and more firms are starting to realise that. At the annual Bloomberg TCA conference in 2013, our interactive survey showed 38% of respondents wanted to implement FI TCA and another 30% were looking at a full multi-asset implementation. In 2015 Greenwich Associates published a paper which supported this observation: their survey revealed that use of FI TCA grew from 19% of firms in 2012 to 38% in 2014.
Interestingly, the reasons put forward to explain why fixed income TCA can’t be relied upon – quality of pricing data, lack of a complete volume picture, market trading practices, the liquidity crisis – are issues which also affect the quality of analysis in equities markets where TCA is a firmly-established practice. Yes, equity markets have published market data, but given the imbalance of HFT to real money flows, fragmentation of liquidity across lit and dark venues and the disparity in quality of pricing data between the most liquid large caps and small and micro caps, accurately measuring your trading performance is still very challenging.
The challenges of conducting TCA for fixed income can be overcome. What is imperative is that firms clearly define a TCA policy that meets their objectives, taking into account the limitations of the analysis and using it in conjunction with good judgement.
Why conduct FI TCA?
The primary goal cited by market participants is generally to gain insight in order to improve their trading and investment processes, and be able to demonstrate control and resultant benefits in terms of performance. According to the Greenwich Associates survey, 64% of respondents said ‘post-trade review to analyse trade effectiveness’ was their main use of the analysis.
Repurposing TCA for compliance surveillance is also gaining traction. Having an independent analysis of performance allows firms to implement a defensible policy for the identification of potential market abuse, conflicts of interest and suspicious transaction breaches.
Last but not least, the quality of analysis, and thus the value of conducting FI TCA, is set to improve significantly with initiatives like MiFID II in Europe, which will lead to increased pre and post-trade transparency on fixed income markets.
Key challenges
By considering the following questions you can begin to address the issues surrounding pricing data quality, cost decomposition, benchmark selection and aggregating results.
What do you want to measure and why?
This might include which trade types or flows to include, which part of the order lifecycle to consider, and whether the aim is to detect outliers, trends or compliance breaches.
How do you want to measure and what effects are you trying to observe?
This covers benchmark selection, which pricing data to measure against, and which contextual data, such as momentum or volatility, to incorporate in order to isolate insights such as which dealers perform best under which conditions or how best to reduce impact.
When do you want to measure and how do you want to use the results?
Select a frequency of analysis to give you the best insight and the best output for the various types of internal and external consumers of analysis, e.g. summary vs detailed reports, charts and visual representations, exceptions or alerts or even a feed to contribute to pre-trade decision support tools.
Pricing data
In the absence of a continuous stream of published tick data it is impossible to achieve a complete picture of pricing and volume for a given bond. At some point in the future regulation may mandate publication, or even create the conditions for a central order book, but there are many challenges to overcome before anything like that can be considered.
Currently pricing data consists of contributed indicative pricing, firm pricing, quotes, evaluated pricing and, to an extent, runs and axes. The more liquid the issue, the more reliable the pricing data. Purely indicative prices can of course be skewed, but in liquid names where multiple contributors compete, the onus is on them to deliver accurate prices, and many participants view these as a good proxy for a continuous tick data set. Contributions can also come from exchanges and reporting agencies such as TRACE (the Trade Reporting and Compliance Engine). As markets become more electronic, firm prices will further improve the quality of pricing data and timestamps, as will initiatives to enhance pre-trade transparency.
As liquidity deteriorates, so does the quality of contributed data. If it falls below a given threshold, it is time to consider using evaluated prices, which are based on a variety of direct and indirect observations and are often provided with a ‘confidence score’, allowing consumers to determine to what extent they want to rely on them. In this case it is fair to say that the TCA will not be useful from a trading insight point of view; however, it can still be used for trade surveillance for compliance purposes. This introduces the idea of contingent benchmarking, i.e. selecting the benchmark based on the characteristics of the bond or indeed the order.
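By way of illustration, the sketch below routes an order to a benchmark based on the quality of its pricing data, in the spirit of contingent benchmarking. The thresholds, field names and benchmark labels are assumptions invented for this example rather than any standard:

```python
# A minimal sketch of contingent benchmark selection. All field names,
# thresholds and labels are illustrative assumptions; each firm would
# calibrate its own rules.

def select_benchmark(order: dict) -> str:
    """Pick a 'measuring stick' based on the bond's data quality."""
    confidence = order.get("evaluated_price_confidence", 0.0)  # 0.0 - 1.0
    contributors = order.get("num_price_contributors", 0)

    if contributors >= 5:
        # Competitive contributed pricing: treat as a tick-data proxy.
        return "arrival_mid"
    if confidence >= 0.7:
        # Decent evaluated price: still usable for compliance testing.
        return "evaluated_price"
    # Data too thin for trading insight; fall back to RFQ-based tests.
    return "rfq_cover"

# Example: an illiquid bond with a good evaluated-price confidence score.
print(select_benchmark({"evaluated_price_confidence": 0.75,
                        "num_price_contributors": 2}))  # -> evaluated_price
```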
Cost decomposition
One of the critical insights TCA strives to deliver is an understanding of where costs occur during the lifecycle of a trade. Being able to understand the opportunity cost between the various stages can help identify areas of best or poor practice and assist in changes of procedure as well as strategy selection or timing. Being able to answer what happened to the price between portfolio manager decision, order creation, RFQ going out, quotes back, execution time etc. helps provide these insights.
With a highly automated electronic workflow providing accurate timestamps, equities can be measured reliably. But with a high proportion of fixed income flow conducted over voice, the quality of decomposition is seriously eroded. Here the systems and workflow of the firm must be considered when deciding what insight can be gained from any decomposition. Where manual workflow dominates, benchmark selection is critical. For example, measuring an observed price when quotes are requested and comparing it to the price when a quote is accepted can provide insight on price sensitivity. When aggregated and compared, this can indicate areas where more or less caution in timing, or better dealer selection, is required.
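A minimal sketch of such a decomposition follows, assuming the firm can capture a timestamped observed mid at each stage of the order lifecycle; the stage names and prices are hypothetical:

```python
# A hedged sketch of lifecycle cost decomposition. Stage names and the
# observed mid prices are hypothetical; in practice each timestamped
# price would come from the firm's OMS/EMS audit trail.

def decompose_costs(stage_prices: list[tuple[str, float]], side: int) -> None:
    """Print the signed price move, in basis points, between each
    consecutive stage of the order lifecycle (side: +1 buy, -1 sell)."""
    for (s1, p1), (s2, p2) in zip(stage_prices, stage_prices[1:]):
        move_bps = side * (p2 - p1) / p1 * 10_000  # positive = a cost
        print(f"{s1} -> {s2}: {move_bps:+.1f} bps")

decompose_costs(
    [("pm_decision", 101.20), ("order_created", 101.24),
     ("rfq_sent", 101.30), ("executed", 101.35)],
    side=+1,  # a buy order: rising prices are an opportunity cost
)
```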
Benchmarks
The next factor to consider, once you have the underlying pricing data in place, is what ‘measuring sticks’ you want to use. Benchmarks use different approaches to detect specific effects and broadly fall into two categories: absolute and relative.
Of course making comparisons for benchmarking requires matching conventions. Prices should be measured against prices, be they clean or dirty, and yields against yields, but in all cases normalised. Conducting TCA based on spread is fraught with difficulty because the underlying benchmark, curve or interpolation method changes from firm to firm.
- Absolute benchmarks
Absolute benchmarks are calculated by comparing raw pricing data with transactional data. The most prevalent is implementation shortfall, or arrival price, which simply compares the price at a given action point in the order lifecycle with the achieved execution price. This price is taken as the observed mid-price. You can then compare the observed mid at the time the PM made the decision, the order was created, the RFQ was sent or the trade was executed against the execution price, and calculate the slippage.
This can be used to calculate the opportunity cost in a decomposition model: what was the price when the order landed on the trader’s pad compared to when they sent the RFQ out? Did the price move favourably or adversely? When you aggregate the results over a period of time, trends can be detected (even more so when put in the context of the order in hand) to determine, for example, whether the effect is peculiar to one type of bond over another.
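The sketch below illustrates that aggregation step, assuming an arrival mid and execution price are recorded per trade; the trade records and the grouping by bond type are purely illustrative:

```python
# A minimal sketch of aggregating arrival-price slippage to look for
# trends. The trade records and groupings are made up for illustration.
from collections import defaultdict

def slippage_bps(arrival_mid: float, exec_price: float, side: int) -> float:
    # Positive = execution worse than the arrival mid (a cost).
    return side * (exec_price - arrival_mid) / arrival_mid * 10_000

trades = [
    {"bond_type": "govt", "arrival": 99.50, "exec": 99.52, "side": +1},
    {"bond_type": "corp", "arrival": 101.0, "exec": 101.15, "side": +1},
    {"bond_type": "corp", "arrival": 98.40, "exec": 98.30, "side": -1},
]

by_type = defaultdict(list)
for t in trades:
    by_type[t["bond_type"]].append(
        slippage_bps(t["arrival"], t["exec"], t["side"]))

for bond_type, costs in by_type.items():
    print(f"{bond_type}: mean slippage {sum(costs) / len(costs):+.1f} bps")
```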
- Near/far touch benchmarks
A variation on this is the near/far touch benchmark, which operates in a similar way but, depending on the side of the order, uses the bid or ask price instead of the mid-point. For example, a buy order benchmarked to the far touch will use the offer price in its calculation. The benefit of this approach is that it allows the cost of crossing the spread to be factored into the performance. While it only looks at the arrival time, it shows how much of the spread was captured. A natural evolution would be to take the average spread from arrival to the final execution time, but this is a more complex operation.
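A minimal sketch of spread capture at arrival follows, under the assumption that a reliable bid/ask snapshot exists at that moment; the quotes are made up:

```python
# A sketch of spread-capture measurement at arrival, using the near/far
# touch instead of the mid. The quotes below are illustrative.

def spread_capture(bid: float, ask: float, exec_price: float, side: int) -> float:
    """Fraction of the quoted spread captured at arrival.
    1.0 = traded at the near touch, 0.0 = paid the full spread."""
    spread = ask - bid
    if side == +1:                       # buy: far touch is the offer
        return (ask - exec_price) / spread
    return (exec_price - bid) / spread   # sell: far touch is the bid

# A buy filled inside the spread: bid 99.90, ask 100.00, executed 99.97.
print(f"captured {spread_capture(99.90, 100.00, 99.97, +1):.0%} of the spread")
# -> captured 30% of the spread
```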
- Time Weighted Average Price
Volume Weighted Average Price benchmarks are challenging to deliver due to the absence of a complete volume picture. Individual platforms or dealers could provide partial data, but to avoid the pitfalls of self-fulfilling prophecies this is realistic only for very liquid bonds. Future post-trade transparency initiatives might change this, but for now you can consider the alternative of Time Weighted Average Price (TWAP). This requires aggregating prices into time intervals where each interval has an equal weighting. A TWAP can then be constructed over a full day’s trading or a given interval, for example the order interval or the interval from order arrival to the end of the trading day. Again, this is most effective at the higher end of the liquidity spectrum.
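The following sketch constructs a TWAP by bucketing observed mid prices into equal time intervals and weighting each interval equally, as described above; the tick data and bucket count are illustrative:

```python
# A minimal TWAP sketch: bucket observed mid prices into equal time
# intervals, average within each bucket, then average the buckets so
# every interval gets equal weight. Data points are hypothetical.
from statistics import mean

def twap(ticks: list[tuple[float, float]], start: float, end: float,
         n_buckets: int = 10) -> float:
    """ticks: (timestamp, mid_price); start/end bound the interval."""
    width = (end - start) / n_buckets
    buckets = [[] for _ in range(n_buckets)]
    for ts, px in ticks:
        if start <= ts < end:
            buckets[int((ts - start) // width)].append(px)
    bucket_means = [mean(b) for b in buckets if b]  # skip empty buckets
    return mean(bucket_means)

ticks = [(0, 100.0), (1, 100.1), (2, 100.3), (7, 100.2), (9, 100.4)]
print(f"TWAP: {twap(ticks, start=0, end=10, n_buckets=5):.3f}")
```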
- Point in time benchmarks
Point in time benchmarks allow measurement against discrete times not related to the lifecycle of an order, for example a prior or subsequent day’s close. These benchmarks can be used in tandem to create a momentum profile centred less on the trading process and more on the timing of trades. When aggregated by portfolio manager or fund, for example, this can reveal momentum bias and whether the trading approach can afford to be more passive or aggressive.
At a finer resolution, time offset or reversion benchmarks take snap prices at much shorter intervals, usually measured in minutes or even seconds, around a given action (e.g. arrival time or execution time). These work in a similar way to point in time benchmarks but look to detect short-term momentum or impact, which can again be aggregated by order characteristics to see which orders, bonds etc. are more sensitive than others.
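A minimal sketch of such a reversion measure, assuming a mid price can be observed at a fixed offset after execution; the offset and prices are illustrative:

```python
# A sketch of a reversion benchmark: compare the mid price a short,
# fixed offset after execution with the execution price to detect
# impact. The offset and prices are illustrative.

def reversion_bps(exec_price: float, mid_after: float, side: int) -> float:
    """Positive = the price moved back against the trade direction after
    execution, suggesting the order itself pushed the price (impact)."""
    return side * (exec_price - mid_after) / exec_price * 10_000

# A buy at 100.10; five minutes later the mid has reverted to 100.04.
print(f"reversion: {reversion_bps(100.10, 100.04, +1):+.1f} bps")  # ~+6.0
```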
- Relative benchmarks
Relative benchmarks help to calculate costs within some sort of context. There are two main approaches: using peer performance to see how your trading compares to others who have conducted similar business, and using pre-trade cost estimates which calculate a likely cost of execution given the characteristics of the order in hand.
Creating a meaningful set of peer benchmarks is a balancing act between measuring in such detail that you risk leaking information about contributors’ trading activity and aggregating results to such a high level that the benchmark becomes meaningless. The balance is to use two to three levels of aggregation to get a more relevant result, e.g. what is the peer arrival cost when trading French sovereign debt in 7-10 year maturity, or corporate debt of high order difficulty during adverse momentum?
In addition, it is imperative that the sample of peer results has thresholds for minimum acceptable observations, for example specifying a minimum number of relevant orders from a minimum number of contributing firms (excluding yourself to avoid material skew). If these criteria are not met then no result should be generated to avoid misleading observations. A peer equivalent of any of the absolute benchmarks will allow relative comparison of performance, but typically they tend not to include point in time benchmarks.
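A hedged sketch of that minimum-observation rule follows; the thresholds of ten orders and three firms are invented for illustration, not a recommendation:

```python
# A sketch of the minimum-observation rule for peer benchmarks. The
# thresholds (10 orders, 3 firms) are invented for illustration.

def peer_benchmark(observations: list[dict], own_firm: str,
                   min_orders: int = 10, min_firms: int = 3):
    """Return the peer average slippage, or None if the sample is too
    thin to publish without producing misleading or leaky results."""
    peers = [o for o in observations if o["firm"] != own_firm]  # exclude self
    firms = {o["firm"] for o in peers}
    if len(peers) < min_orders or len(firms) < min_firms:
        return None  # suppress the benchmark rather than mislead
    return sum(o["slippage_bps"] for o in peers) / len(peers)

sample = [{"firm": f, "slippage_bps": 5.0} for f in ("A", "B")]
print(peer_benchmark(sample, own_firm="C"))  # -> None: too few firms/orders
```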
Pre-trade impact estimates in fixed income are beginning to emerge. Through approaches such as cluster analysis, you can leverage scarce transaction data observations while considering a large number of relevant factors that can influence the liquidity of a particular security. In other words, big data offers promising methods to estimate impact cost and time to execute for a given volume, even for bonds with limited historical trading activity.
This sophisticated approach to benchmarking permits market impact models similar to those well proven in equities and provides a natural way to extend TCA coverage in fixed income markets. Again, to enhance transparency, results can be caveated with an uncertainty or confidence score.
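As a loose illustration of the idea (using a simple nearest-neighbour average as a stand-in for genuine cluster analysis), the sketch below estimates a bond’s likely cost from observed trades in similar bonds; the features, data and distance measure are all assumptions:

```python
# A simplified stand-in for the cluster-analysis idea: estimate a
# bond's likely cost from observed trades in similar bonds when it has
# little history of its own. Features and data are illustrative;
# production models use far richer factor sets.
import math

def similarity(a: dict, b: dict) -> float:
    """Crude distance over liquidity-relevant features (assumed scaled)."""
    return math.dist(
        (a["log_issue_size"], a["years_to_maturity"], a["rating_bucket"]),
        (b["log_issue_size"], b["years_to_maturity"], b["rating_bucket"]),
    )

def estimate_cost_bps(target: dict, history: list[dict], k: int = 3) -> float:
    """Average the observed cost of the k most similar historical trades."""
    nearest = sorted(history, key=lambda h: similarity(target, h))[:k]
    return sum(h["cost_bps"] for h in nearest) / len(nearest)

history = [
    {"log_issue_size": 21.0, "years_to_maturity": 5, "rating_bucket": 2, "cost_bps": 12.0},
    {"log_issue_size": 20.5, "years_to_maturity": 4, "rating_bucket": 2, "cost_bps": 15.0},
    {"log_issue_size": 23.0, "years_to_maturity": 9, "rating_bucket": 1, "cost_bps": 4.0},
]
target = {"log_issue_size": 20.8, "years_to_maturity": 5, "rating_bucket": 2}
print(estimate_cost_bps(target, history, k=2))  # averages the two closest
```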
- RFQ benchmark tests
Finally, Request For Quote (RFQ) trading allows another set of benchmark tests to be conducted; these have tended to fall into the category of best execution. A firm might go out to several dealers requesting quotes and will typically trade on the best quote; comparing it to the next best quote gives a very simple measure known as the cover, which captures the price improvement achieved.
There is now a growing interest in capturing all returned quotes, including rejected quotes and abandons, and comparing them to the traded price to calculate the opportunity cost when you didn’t trade with a certain dealer. Further metrics such as hit ratio (how often you trade with a given dealer within a given context) or average time to respond to a request can provide further context to the analysis.
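The sketch below computes two of these metrics, the cover and a per-dealer hit ratio, assuming quote and RFQ records with hypothetical field names:

```python
# A sketch of simple RFQ metrics: cover (traded vs next-best quote) and
# per-dealer hit ratio. Quote and RFQ records are hypothetical.

def cover_bps(quotes: list[float], traded: float, side: int) -> float:
    """Price improvement of the traded quote over the next best, in bps.
    For a buy (side=+1) lower quotes are better."""
    rest = sorted(q for q in quotes if q != traded)
    next_best = rest[0] if side == +1 else rest[-1]
    return side * (next_best - traded) / traded * 10_000

def hit_ratio(rfqs: list[dict], dealer: str) -> float:
    """Share of RFQs sent to a dealer that ended up trading with them."""
    sent = [r for r in rfqs if dealer in r["quoted_dealers"]]
    won = [r for r in sent if r["winner"] == dealer]
    return len(won) / len(sent) if sent else 0.0

# A buy traded at the best of three quotes; the cover is ~5 bps.
print(cover_bps([100.05, 100.10, 100.12], traded=100.05, side=+1))
```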
Aggregating results
Typical aggregations group results by prevailing market conditions (e.g. volatility, momentum), order characteristics (e.g. maturity, security type, country, sector, rating), entity (e.g. dealer, trader, PM or account) or time period (e.g. day, week).
One of the most important goals of TCA is to group according to the difficulty of completing an order. In equities the standard proxy is order size/average daily volume, but the absence of a full volume picture in bonds makes a similar approach unreliable. Compounding the challenge is the fact that as bonds mature their liquidity profile typically deteriorates.
An alternative methodology is to map notional value to a level of difficulty. This doesn’t, however, reflect the wide variety of liquidity profiles a firm trades: trading up to $5 million of the current 10-year US Treasury is different from trading up to $5 million of an off-the-run corporate bond, for example. Further elements need to be considered, e.g. the security type.
The confidence score attached to evaluated prices can also be used to map difficulty: bonds tend to have high confidence scores when many reliable direct observations are available, indicating higher liquidity, and vice versa. A simpler approach is to look at the size of the order relative to the issued size or available float, but this is constrained by the difficulty of assessing how much of an issue has been locked up and can no longer be considered accessible liquidity.
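A minimal sketch combining these two signals, size versus issued amount and the evaluated-price confidence score, into a difficulty bucket; the thresholds are invented for illustration:

```python
# A minimal sketch of mapping orders to a difficulty bucket. The
# thresholds and the blend of signals are invented for illustration.

def difficulty(order_size: float, issue_size: float,
               eval_confidence: float) -> str:
    """Combine size-vs-float with the evaluated-price confidence score."""
    participation = order_size / issue_size  # crude: ignores locked-up stock
    if eval_confidence >= 0.8 and participation < 0.001:
        return "easy"
    if eval_confidence >= 0.5 and participation < 0.01:
        return "medium"
    return "hard"

# $5m of a $40bn on-the-run Treasury vs $5m of a $500m off-the-run corp.
print(difficulty(5e6, 40e9, 0.95))   # -> easy
print(difficulty(5e6, 500e6, 0.40))  # -> hard
```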
Another important step towards a more accurate analysis is to ensure that attributes like flow type and semantics are captured. For example, repos should be identified so they can be grouped separately and assessed on overall performance rather than on their individual legs, which can skew some other aggregations (e.g. by side).
Capturing semantic elements is much more difficult and relies on the capabilities of the feeding system, but can provide meaningful insight. For example, an order instructed to execute as soon as possible would typically attract a less favourable cost profile than one given full discretion. If this data can be captured with syntactic consistency, it can form a very insightful aggregation.
Turning analysis into action
Typically, TCA takes the form of reports looking to answer one or several questions concerning best/worst trades, dealer performance etc. However these require someone to review the analysis and pull whatever insight they can from them. With time pressure and firms seeking to increase the tempo of their decision/action cycle, TCA increasingly focuses on determining outliers on the basis of thresholds and tolerance, pushing results to the most appropriate user and feeding them into decision support tools.
Integrating trends or aggregate results for decision support is an area of increasing interest in all asset classes. Having an arbitrary ‘show me my best 5 and worst 5 trades’ on a given day doesn’t take into account the quality threshold which all the benchmarking discussed previously delivers. Increasingly, firms are looking to set acceptable tolerances on performance for a given benchmark for a given set of order characteristics. This means you don’t have to rely on a single quality line for all your flow.
Within fixed income, dealer selection is a particular focus. Using some of the previously described benchmarks, it is possible to say, for a given order profile, which dealers are most likely to respond to your RFQ, which are likely to give you the most competitive prices, and so on. The result can be a probability-of-execution score or a ranked list of dealers to approach. If presented within the EMS, or even as a set of options when creating the ticket or RFQ, the trader can make an informed decision with insight directly at their fingertips.
Again, using groupings, a user can say ‘if I have an easy US Treasury order which slips by more than 5 bps against arrival, then tell me’. The result is that users only get alerted to the items they have specifically flagged, thereby reducing the incidence of false positives.
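A sketch of that rule-based exception flow follows, using the 5 bps tolerance from the example above; the rule structure and field names are assumptions:

```python
# A sketch of threshold-based exception alerting: flag easy US Treasury
# orders slipping more than 5 bps against arrival. The rule structure
# and field names are assumptions for illustration.

RULES = [
    {"asset": "UST", "difficulty": "easy", "max_slippage_bps": 5.0},
]

def alerts(trades: list[dict]) -> list[dict]:
    """Return only the trades breaching a user-defined tolerance."""
    flagged = []
    for t in trades:
        for r in RULES:
            if (t["asset"] == r["asset"]
                    and t["difficulty"] == r["difficulty"]
                    and t["slippage_bps"] > r["max_slippage_bps"]):
                flagged.append(t)
    return flagged

print(alerts([{"asset": "UST", "difficulty": "easy", "slippage_bps": 7.2}]))
```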
This is of particular importance for the growing community of compliance users who are moving away from random sampling to a more defensible surveillance policy which tests every trade and presents only those which require investigation.
Really? Yes!
Fixed income TCA is still emerging but is evolving and being adopted at an increasing rate. The crux is not to view TCA as a ‘one size fits all’ activity but to consider it in light of the unique attributes of your firm and the variety of flow types traded. With this in mind you can establish a TCA policy which delivers the insights that add real value to your business.
©BestExecution 2015