Corwin Yu, Director of Trading at PhaseCapital, sits down with FIXGlobal to discuss his trading architecture, the proliferation of Complex Event Processing (CEP) and why he would rather his brokers just not call.
FIXGlobal: What instruments does your system cover?
Corwin Yu: At the moment, we trade the S&P 500, and we have expanded that to include the Russell 2000, not as individual names but as an index. We also trade the E-Mini futures on both the Russell 2000 and the S&P 500. We have investigated doing the same type of trading with Treasuries, using the TIPS indices, the TIPS ETFs, and a few similar futures. We are not looking at expanding the equity side except to consider adding ETFs, indices, or futures on indices.
FG: Anything you would not add to your list?
CY: We gravitate toward liquid instruments with substantial historical market data, because we do not enter into a particular trading strategy unless there is enough market data for sufficient back-testing. Equities were a great fit because they have history behind them and great market data technology; likewise futures, where market data coverage has recently expanded. Options are a possibility, but the other asset class that is liquid yet not a good fit is commodities. We shy away from emerging markets that are not completely electronic and do not have good market data. While we have not made moves into the emerging markets, we know that some other systematic traders have found opportunities there.
FG: How much of your architecture is original and how often do you review it for upgrades?
CY: In terms of hardware, we maintain a two-year end-of-life cycle: whatever we have that is two years old, we retire to the back-test pool and purchase new hardware. We are just past the four-year mark right now, so we have been through two hardware migrations. This process is usually a wake-up call as to how much technology has changed. When we bought our first servers, they were expensive four-core machines with a maximum memory of 64 GB. We just bought another system that can handle 256 GB through six-core processors. We are researching a one-year end-of-life cycle, because two years was a big leap in terms of technology and we could have leveraged some of it a year ago.
In terms of technology, the software and architecture around the platform are completely different. When we first started, we built prototypes that were back-testable and could migrate directly into trading strategies. In a lot of our strategies, however, we were surprised at how different market data was when we moved from a historical environment to a live environment. The architecture has changed immensely in the last few years, especially as we switch asset classes and add exchanges. When we first started, most of the strategies and architectures between counterparties were blue-sky implementations, and the system we first put into production and what we have now are completely different.
FG: How have regulatory changes affected you?
CY: When you do a ten-year back-test, you do not factor in a lot of the compliance changes from year to year. An easy example is the Reg SHO short-sale compliance rules, which have evolved in the last few years. When we first started, we knew about Reg SHO and factored it in, but obviously that changed again in the last year. The strategies themselves changed a little, but Reg SHO mostly affected our models in terms of factoring in locates, the liquidity of locates from broker-dealers, sourcing stock loans, the rates for the pricing model, transaction costs, and so on. Obviously you cannot factor all of that into the back-test or it would take years to run.
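As a toy illustration of the short-sale frictions Yu describes, the sketch below folds an assumed borrow (stock-loan) rate and transaction cost into a short position’s gross return. The function and every rate in it are hypothetical, not PhaseCapital’s pricing model.

```python
# Toy sketch of short-sale frictions: an assumed stock-loan (borrow) rate
# and an assumed round-trip transaction cost reduce a short's gross return.
# All names, rates, and parameters are illustrative assumptions.

def net_short_return(gross_return: float,
                     days_held: int,
                     borrow_rate_annual: float = 0.02,    # assumed stock-loan fee
                     txn_cost_bps: float = 5.0) -> float:  # assumed round-trip cost
    borrow_cost = borrow_rate_annual * days_held / 252.0   # pro-rated over trading days
    txn_cost = txn_cost_bps / 10_000.0
    return gross_return - borrow_cost - txn_cost

# A 1.5% gross short alpha held for ten days, net of the assumed frictions:
print(net_short_return(0.015, days_held=10))  # ~0.0137
```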
In terms of other compliance changes, the only changes to our futures strategies concerned margin requirements, leverage, and the loss or exchange of credit. All our models run at one- or two-times leverage to be as conservative as possible, and we try to build our new models assuming relatively low leverage.
FG: Has your latency been affected by adding new compliance or risk layers?
CY: We use Lime Brokerage for their risk layers, which are surprisingly fast for us. We have not made many changes in the risk layer other than changing the models to incorporate fat-finger checks, buying-power checks, and short-sale checks. Although it slows us down slightly, we have also established a redundant check on our side.
We found that the broker-level checks worked for compliance, but they were not detailed enough for a strategy-by-strategy risk check. As an agency broker-dealer, they are doing their best to cover themselves and their clients, but because their client base is broad, the compliance check is going to be equally broad. We insert our own risk checks, not to circumvent Lime, but as an additional layer. Investors want as many checks as possible. In this environment, it has always been ‘trust, but verify’, and if you do not or cannot really verify, you need to put your own check in.
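A minimal sketch of what such a strategy-level layer might look like, sitting in front of the broker’s own gateway; the class, limits, and order fields below are hypothetical, not Lime’s API or PhaseCapital’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str      # "buy", "sell", or "sell_short"
    qty: int
    price: float

class StrategyRiskLayer:
    """Redundant pre-trade checks run per strategy, before the broker's own."""

    def __init__(self, max_order_notional: float, buying_power: float,
                 locates: dict):
        self.max_order_notional = max_order_notional  # fat-finger ceiling
        self.buying_power = buying_power              # remaining capital
        self.locates = locates                        # located shares per symbol

    def check(self, order: Order) -> bool:
        notional = order.qty * order.price
        # Fat-finger check: reject any order above a hard notional ceiling.
        if notional > self.max_order_notional:
            return False
        # Buying-power check: reject buys that exceed remaining capital.
        if order.side == "buy" and notional > self.buying_power:
            return False
        # Short-sale check: reject shorts without a sufficient locate.
        if order.side == "sell_short" and self.locates.get(order.symbol, 0) < order.qty:
            return False
        return True

# Usage: every child order passes through this layer before reaching the broker.
risk = StrategyRiskLayer(max_order_notional=1_000_000,
                         buying_power=5_000_000,
                         locates={"IWM": 10_000})
assert risk.check(Order("IWM", "sell_short", 5_000, 80.0))
```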
FG: What is important to you in an executing broker?
CY: We are really focused on the quality of the technology and the infrastructure, but we are not that concerned about customer service. We tell all our brokers the same thing: ‘if I never call you, that means you are doing a great job.’ I am not going to call someone to make a trade, so there is no point in picking up the phone. Even if something goes wrong, it is already too late to intervene. Depending on the issue, we will either stop trading and reduce risk while we assess, or trade through the problem and then go through a large endeavor to clean it all up. That is why we emphasize technology that prevents a situation like that. We look for strong, redundant infrastructure and actual co-location with real cross-connected DMA.
A solid testing platform is also helpful, and good access to market data is an important requirement for us. A broker’s latency does not even have to be top of the line, but it must at least be within our specs. It is a cold war now; not everyone can be the fastest. For us, second or third fastest is acceptable as long as there is a strong infrastructure backing trading strategies we believe in. For instance, Lime’s co-location is actual co-location in the NJ2 data center, using the Savvis and direct Mahwah feeds. Many broker-dealers claim to have DMA, but actually offer a smart order router that mimics DMA. We are looking for a robust model and a certain pedigree of users on that infrastructure.
FG: Will CEP be widely adopted in the next few years?
CY: I think so, but it is harder to differentiate CEP from the algorithms. CEP is more of a technological train of thought that evolved from taking fundamental aspects of data handling into a platform. The idea of CEP has been around for a long time: you have a trigger for an event, it happens in a stream, and it happens quickly. Although people have been doing this for many years, the advent of new information models thrust CEP into the limelight. While I am not sure every firm will buy a CEP platform, they are more likely to change their own platforms to mimic CEP. CEP will not become a commodity; it will be a model for creating applications and designing interfaces. Traders who want quick time-to-market and have the economics and appetite for migration will probably buy it, but a shop that can adjust its own development will probably opt to change its existing infrastructure to more closely resemble CEP.
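The trigger-on-a-stream pattern Yu describes can be shown in a few lines. The sketch below is purely illustrative: a stand-in price feed, a rolling window, and an arbitrary deviation threshold take the place of a real CEP engine and a live market data stream.

```python
from collections import deque

def price_stream():
    # Stand-in for a live market data feed.
    yield from [100.0, 100.2, 99.8, 101.5, 101.6, 99.1]

def on_trigger(px: float, avg: float) -> None:
    print(f"trigger: price {px:.2f} deviates from rolling mean {avg:.2f}")

window = deque(maxlen=5)            # rolling state over the event stream
for px in price_stream():
    window.append(px)
    avg = sum(window) / len(window)
    if abs(px - avg) > 1.0:         # the event condition (the "trigger")
        on_trigger(px, avg)         # handler fires inline, as the event arrives
```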