Ultimately, pre-positioned technological adaptation alongside hard and soft skill percolation will ensure firms are best placed.
With Carole Comerton-Forde, Professor of Finance, University of Melbourne, and Stuart Baden Powell, Electronic Trading Expert
Stuart, what are some of the trends and changes you are observing?
Fragments of positivity started to return to global equity markets toward the end of 2023 and into early 2024. The much-discussed Fed pivot (with markets pricing in several rate cuts later in the year), lower inflation and solid US consumer confidence all added impetus. That said, bonds and equities have remained broadly positively correlated, the traditional 60/40 allocation model has been shaken and diversification challenged. However, with the US Magnificent Seven continuing their momentum and rate cuts kicking in, global and Asian equities could become considerably more relevant in future portfolio construction.
In APAC equities we are seeing a phased transition to both a new market structure and a new set of trading challenges. As the move to the future state occurs, a market structure 4.0 if you will, there is often a temptation for firms to place the proverbial head in the sand and simply react once the final state is complete.[1] Each to their own, but that will be too late. There is a clear interplay between market structure change, product, analysis and platform performance. Yet success is not purely technological: both efficient technology and human skillsets are critical. Both take time to implement successfully and longer to optimise into top-tier performance. Firms on both sides should ideally pre-position ahead of the arrival of the future state.
In Australia and Japan the purchase of Chi-X by the American powerhouse Cboe has brought a major global player into Asian cash equities. We have also seen the go-live of other potentially impactful innovations such as Instinet BlockCross, the addition of conditional orders on several platforms and the exciting development of the Korean ATS, Nextrade. Sell- and buy-side execution decision making should become more complicated; however, if unpacked and navigated correctly, that material complexity provides opportunities to improve Transaction Cost Analysis (TCA) performance and, hopefully, Total Expense Ratios (TERs) for better fund outcomes.
Carole, does the academic literature offer any insights into how these changes might affect traders?
The arrival of BIDS and BlockCross has focussed attention in Asia on dark trading and conditional order types. As more venues become available, order routing and exposure decisions have become increasingly important. Institutional investors slice large parent orders into child orders to try to minimise their trading footprint and information leakage, and in turn price impact. Dark venues are an important tool in this process. Research in European markets shows that parent orders with a higher volume executed in dark venues are associated with lower execution costs.
However, the benefits of dark trading are not absolute. Dark trading venues have a lower probability of execution, so investors face a trade-off between greater access to liquidity and lower information leakage and trading costs. Evidence from the US market shows there is a pecking order in venue selection. Low-cost, low-immediacy venues such as mid-point dark pools are at the top of the pecking order, while high-cost, high-immediacy venues such as lit exchanges are at the bottom. In the Asia-Pac context ASX Centre Point is an example of a low-immediacy venue, and ASX Trade Match a high-immediacy venue. However, shocks to the VIX or macro or firm-specific news events force investors down the pecking order as the demand for immediacy exceeds the desire for lower price impact.
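To make the pecking-order intuition concrete, below is a minimal Python sketch of a router choosing between venues as the demand for immediacy rises. The venue names, the cost and fill-probability figures and the `choose_venue` helper are hypothetical illustrations, not any exchange's or broker's actual parameters or logic.

```python
# A minimal sketch of the venue "pecking order" idea, not any firm's router logic.
# Venue names, costs and fill probabilities are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    cost_bps: float   # expected execution cost (higher = more expensive)
    fill_prob: float  # probability a child order executes (higher = more immediate)

# Ordered from low-cost/low-immediacy (top of the pecking order) to high-cost/high-immediacy.
PECKING_ORDER = [
    Venue("midpoint_dark", cost_bps=1.0, fill_prob=0.30),     # e.g. a Centre Point-style venue
    Venue("conditional_block", cost_bps=2.0, fill_prob=0.15),
    Venue("lit_exchange", cost_bps=5.0, fill_prob=0.95),      # e.g. the lit central limit order book
]

def choose_venue(urgency: float) -> Venue:
    """Pick the cheapest venue whose fill probability satisfies the demand for immediacy.

    `urgency` in [0, 1] rises with VIX shocks or macro/firm-specific news,
    pushing the order down the pecking order toward high-immediacy venues.
    """
    for venue in sorted(PECKING_ORDER, key=lambda v: v.cost_bps):
        if venue.fill_prob >= urgency:
            return venue
    return PECKING_ORDER[-1]  # fall back to the most immediate venue

if __name__ == "__main__":
    print(choose_venue(urgency=0.2).name)  # calm conditions -> midpoint_dark
    print(choose_venue(urgency=0.9).name)  # news shock -> lit_exchange
```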
Recent research from the Australian market shows that execution outcomes on dark venues vary with the design of the venue. Broker dark pools, unlike exchanges, are allowed to segment order flow, either by excluding high frequency and/or proprietary traders or by allowing customers to opt out of interacting with this type of order flow. The research shows that venues that restrict the access of high frequency and proprietary traders offer better execution outcomes to investors, conditional on execution. Differences in execution outcomes are concentrated in smaller trades, which are more likely to have high frequency/proprietary counterparties. Although not considered by the research due to data limitations, it is likely that the use of Minimum Acceptable Quantities (MAQs), which can be used on both exchange and broker dark pools, will also improve execution outcomes. However, restricting counterparties and imposing minimum trade sizes reduces the volume of accessible liquidity, so investors should carefully evaluate the trade-offs when doing so.
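As a rough way to frame that evaluation, the sketch below expresses a child order's expected cost as a blend of the fill and no-fill outcomes. It is a hypothetical back-of-the-envelope calculation under assumed fill rates and costs, not a description of the research methodology.

```python
# A minimal sketch of the trade-off discussed above: restricting counterparties or
# raising the Minimum Acceptable Quantity (MAQ) may improve execution quality
# conditional on a fill, but it also lowers the probability of execution.
# All numbers are hypothetical assumptions for illustration only.

def expected_cost_bps(fill_prob: float, cost_if_filled_bps: float,
                      cost_if_unfilled_bps: float) -> float:
    """Expected cost of a child order: blend of the (better) dark fill outcome
    and the (worse) fallback outcome, e.g. crossing the spread on a lit venue."""
    return fill_prob * cost_if_filled_bps + (1.0 - fill_prob) * cost_if_unfilled_bps

# Unrestricted dark pool: more accessible liquidity, but worse quality per fill.
open_pool = expected_cost_bps(fill_prob=0.40, cost_if_filled_bps=2.5, cost_if_unfilled_bps=6.0)

# Restricted pool (no HFT/prop flow) with a large MAQ: better per-fill outcomes, fewer fills.
restricted_pool = expected_cost_bps(fill_prob=0.15, cost_if_filled_bps=1.0, cost_if_unfilled_bps=6.0)

print(f"open pool expected cost:       {open_pool:.2f} bps")   # 4.60 bps with these inputs
print(f"restricted pool expected cost: {restricted_pool:.2f} bps")  # 5.25 bps with these inputs
# With these assumed inputs the open pool is cheaper in expectation, which is why
# the trade-off has to be evaluated against each desk's own fill data.
```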
Stuart, what does the trading desk need to do to adapt to both the market structure changes and the observations from the literature?
Performance research encourages us to look even deeper. The buy-side rarely gives out the full size to a single electronic desk, so how both single tickets and reloads handle those trade-offs genuinely matters. A broker's hybrid lit and dark algo and/or Smart Order Router (SOR) could be calibrated simplistically. An algo could follow a predictable percentage-of-volume profile, such as a well-known industry standard. An SOR that slices on a routinized venue schedule could also expose the order. In these scenarios, both the single ticket and the reloads would likely experience signal risk and correlated performance degradation. A possible counter is to seek more discretion, yet an increased ticket duration may also increase opportunity costs. In either case, more sophisticated algos and SORs should provide clients with the choice to meet specific requirements at different times.
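One way to picture the signal-risk point is to contrast a routinized child-order schedule with a randomised one. The sketch below is a simplified, hypothetical illustration; `routinized_schedule`, `randomised_schedule` and their parameters are invented for this example and do not represent any broker's SOR.

```python
# A minimal sketch contrasting a routinized child-order schedule with a randomised one.
# Predictable sizes and intervals are easier for other participants to detect (signal risk);
# jittering both reduces the footprint. Parameters are illustrative assumptions only.
import random

def routinized_schedule(parent_qty: int, n_slices: int, interval_s: int):
    """Fixed-size children at fixed intervals -- easy to fingerprint."""
    child = parent_qty // n_slices
    return [(i * interval_s, child) for i in range(n_slices)]

def randomised_schedule(parent_qty: int, n_slices: int, interval_s: int, jitter: float = 0.4):
    """Randomise both child size and timing around the same averages."""
    remaining, t, slices = parent_qty, 0.0, []
    avg_child = parent_qty / n_slices
    for i in range(n_slices):
        qty = remaining if i == n_slices - 1 else int(avg_child * random.uniform(1 - jitter, 1 + jitter))
        qty = min(qty, remaining)
        slices.append((round(t, 1), qty))
        remaining -= qty
        t += interval_s * random.uniform(1 - jitter, 1 + jitter)
    return slices

if __name__ == "__main__":
    print(routinized_schedule(100_000, n_slices=5, interval_s=60))
    print(randomised_schedule(100_000, n_slices=5, interval_s=60))
```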
Data-driven optimisation can provide such choice. The electronic desk's unique ability to run detailed analysis across multiple venues and, potentially, order types can often produce objective results that warrant technological builds and subsequent releases into production. However, uncovering such outcomes is not a sustainable certainty. Sometimes statistical relationships decouple, or historical quantitative research fails to maintain real-world relevance. As such, established investigative research processes should not be static but rather dynamic, improving with the times. Ultimately analytics, automation, product and technology will all increase in importance, yet they are not the only requirements.
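A stylised example of that dynamic re-validation might look like the snippet below, which re-measures a previously documented venue edge on recent fills and flags it if the relationship appears to have decoupled. The data, tolerance and slippage measure are all assumptions made for illustration.

```python
# A minimal sketch of ongoing re-validation: a historical venue "edge" is
# re-measured on recent fills and flagged if it has decayed.
# Data, thresholds and the slippage measure are hypothetical assumptions.
from statistics import mean

def edge_still_holds(hist_slippage_bps, recent_slippage_bps, tolerance_bps=0.5):
    """Return True if the recent average slippage is still within `tolerance_bps`
    of the historical average that originally justified the routing change."""
    return mean(recent_slippage_bps) <= mean(hist_slippage_bps) + tolerance_bps

# Historical research suggested this venue saved ~1 bp; recent fills say otherwise.
historical = [0.8, 1.1, 0.9, 1.0, 1.2]   # slippage vs arrival, in bps
recent = [1.9, 2.2, 1.7, 2.4, 2.0]

if not edge_still_holds(historical, recent):
    print("Relationship has decoupled -- revisit the calibration before the next release.")
```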
This brings us to human skills, both individual and team.
In the future state, both a broader and a deeper range of skills, coupled with improved context awareness in the statistical sense, should prove useful in meeting the requirements of increasingly demanding conditions and clients. How these skills are developed, organised and deployed by individuals in specific seats, and then by whole teams, will play an important role in the service levels each firm is able to bring to the table for each specific client. For example, any logical cross-pollination of quantitative analytical skills towards omni-skilled client-facing staff often relies on strong internal technology operations and robust, reliable data. Yet is simply passing a data output, e.g. a score against a benchmark, to a client really improving the next iteration of client performance? It may, but more likely it will not. This is where context comes into play. The ability both to analyse larger data sets at the deep micro level and to understand, assess and discuss the broader context of that data will be essential to the buy-side/sell-side partnership required in the complex and hyper-competitive future state.[2]
Carole, can academia offer any support on the analytic front?
Yes, it can. The skills and analytics of individuals on the desk can be complemented and supported by rigorous academic research. While individual buy- and sell-side firms can undertake their own research and even conduct A/B tests of different types of trading strategies, few firms conduct sufficient volumes of trading for these tests to yield statistically significant results. Traders can harness the capacity of academic research by being willing to provide anonymised data to academic researchers. A coordinated approach involving multiple players sharing data minimises the risk of any individual firm revealing proprietary insights. A useful example of this type of approach can be seen in Beason and Wahal's (2021) paper The Anatomy of Trading Algorithms. Examining four trading algorithms used by 961 institutions to send 169 million orders to the market, they offer new insights into how choices around the use of passive vs aggressive orders, resting times for orders and display choices affect individual child-order price impacts and parent-level costs. For example, they were able to quantify the cost of increasing the share of a parent order that was passive, and the benefits of using day orders vs immediate-or-cancel orders. Such insights are useful inputs into proprietary algorithms. There are many areas that could be usefully explored with this type of approach. For example, how should traders trade off increasing the size of MAQ orders to reduce information leakage against the reduced probability of execution?
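To illustrate the statistical-power argument for pooling anonymised data, the sketch below runs the same simulated 2 bp algo improvement through a Welch t-test at a single-firm sample size and at a pooled sample size. The figures are simulated assumptions, not results from the cited research.

```python
# A minimal sketch of the statistical-power point above: a single firm's parent-order
# sample is often too small for an A/B comparison of two algos to reach significance,
# while a pooled (anonymised) sample across firms can. Simulated data, illustrative only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

def simulate_costs(n_orders: int, mean_bps: float, sd_bps: float = 25.0):
    """Simulated parent-order implementation shortfall, in bps."""
    return rng.normal(mean_bps, sd_bps, n_orders)

# Algo B is "truly" 2 bps cheaper in this simulation.
single_firm_a = simulate_costs(150, mean_bps=12.0)
single_firm_b = simulate_costs(150, mean_bps=10.0)
pooled_a = simulate_costs(20_000, mean_bps=12.0)
pooled_b = simulate_costs(20_000, mean_bps=10.0)

print("single firm p-value:", ttest_ind(single_firm_a, single_firm_b, equal_var=False).pvalue)
print("pooled sample p-value:", ttest_ind(pooled_a, pooled_b, equal_var=False).pvalue)
# The same 2 bp difference is typically indistinguishable from noise at n=150 per algo,
# but highly significant at n=20,000 -- the case for coordinated data sharing.
```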
Academic research can also be a valuable tool in helping to inform regulatory and policy decisions. For example, researchers have rapidly produced empirical papers to better inform the Securities and Exchange Commission's proposals to reform retail trading in the US.[3] Collaborations between academia and buy- and sell-side firms can help to ensure better designed markets, leading to better outcomes for investors.
Stuart, are data analytics enough or are there other things that trading desks need to do to be successful in the current or future market context?
It can be risky to assume that data, analytical skills, predictive power and technological prowess in isolation are all that is required to succeed in one hundred percent of the conditions of the future state. Whilst it is true that, without the above analytics, product performance will be severely challenged, softer skills such as execution consulting and a substantial range of interpersonal skills will remain of broad importance. As this exciting new phase of APAC Market Structure unfolds and equity volumes improve, ultimately, pre-positioned technological adaptation alongside hard and soft skill percolation will ensure firms are best placed for increased complexity and competition, improving TCA and TER outcomes.
[1] If we use Chi-X as a loose proxy for change, we can see the period before Chi-X's arrival in Asia (pre-2011) as Market Structure 1.0. The period after Chi-X went live, 2011-2023, could be broadly defined as Market Structure 2.0, which also includes, but is not limited to, the growth of platforms such as Liquidnet, the then ITG POSIT and SBI Japannext. 3.0, from 2023 onwards, can be characterised by new Cboe technology go-lives, early-stage growth of Instinet BlockCross and Cboe BIDS, further expansion of conditional order types, broadly increased demand for dark trades and electronic blocks, and enhanced small- and mid-cap algorithmic execution quality. As this nascent state evolves, we move toward 4.0.
[2] For any sporting aficionados, think of unique multi- or omni-skilled performers in team sports such as Shohei Ohtani in baseball, Mia Hamm or Marta in football, or MS Dhoni in cricket. Aside from being individually highly productive, multi-tooled team members provide substantial benefits to broader team output. A team-wide example is the way rugby union teams have altered over time by blurring the demarcation of skills and roles between the backs and forwards, through the cross-pollination of handling skills, mobility and speed, for example.
[3] See for example Hendershott, Khan and Riordan (2023), Dyhrberg, Shkilko and Werner (2022) and Ernst, Spatt and Sun (2023).