
Building a Back-Testing Platform for Algorithmic Trading

In this series, I’m going to outline, in general terms, how to build a back-testing platform for the creation, tweaking, and subsequent execution of algorithms used in electronic trading.

Part One – The Data

I recently made some comments on Vertica’s blog regarding what I considered to be a fairly bold claim: they said that Vertica was the only real column store. But even if it is, so what? In my comments, I alluded to my belief that we fit problems to the solutions we already have; we try to fix stuff using what’s in our toolbox without having to run to Home Depot.

The real test is when the rubber hits the road: how do you actually solve a problem in a new way that’s motivating? And by motivating, I mean the solution addresses the issues, enables new capabilities, and is economically attractive.

So rather than tell you that DarkStar and our approach to processing both real-time and historical data (there’s a difference?) is the Real Enchilada, I thought I would illustrate a real-world use case.

Let’s say you want to store a bunch of market data.  And I mean a bunch.  You want to store every piece of market data for the whole US Equities market.

And you’d like to have this data so that you can run analytics on it. Or maybe even back-test strategies for buying and selling stocks. So let’s assume that you’ve got some Java code lying around to do that.

For our example, we are interested in seeing whether or not volume-weighted average price (VWAP) strategies actually work. We will pretend that we are buying a lot of stock, and the theory we want to test is whether buying that stock during the day, when it’s trading below its volume-weighted average price, gives us a better average price than just going with the flow (often referred to as volume participation).
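
To make the test concrete, here’s a minimal sketch of the decision we’ll be back-testing. This is illustrative Java with hypothetical names, not code from DarkStar: keep a running VWAP from the tape, and only buy when the last trade prints below it.

    // Minimal VWAP tracker -- hypothetical names, for illustration only.
    public class VwapTracker {
        private double cumulativePriceVolume = 0.0;
        private long cumulativeVolume = 0L;

        // Feed every trade print for the symbol into the tracker.
        public void onTrade(double price, long size) {
            cumulativePriceVolume += price * size;
            cumulativeVolume += size;
        }

        public double vwap() {
            return cumulativeVolume == 0 ? Double.NaN : cumulativePriceVolume / cumulativeVolume;
        }

        // The theory under test: buy only when the market trades below VWAP.
        public boolean shouldBuy(double lastPrice) {
            return cumulativeVolume > 0 && lastPrice < vwap();
        }
    }

Back-testing then amounts to replaying a day of trades through onTrade and comparing the average fill price of this rule against straight volume participation.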

We are all familiar with how relational databases work, and anyone who’s been in capital markets for a while knows how futile it would be to use something like Oracle for this, due to cost, hardware requirements, and just how difficult it is to get the data into the database in the first place.

Oh that’s right, I forgot to tell you, we are going to have to load this data first.

I am not going to go into the relative benefits of a column store here either; you can check out many other websites for that.

Instead, let’s look at some issues. First, I would rather load the data directly into the database as it happens. Staging the data separately is costly and error-prone. In addition, what happens when you decide to load that data and encounter a problem that can’t be fixed in time for the next market day? What if you actually run out of space or compute to get caught up? Well, then you can’t back-test the next day to further refine your algorithms. Algos should be tweaked every day. New algos need to be developed to remain competitive. So here, a database error costs real money.

So I need a fast data store.

As I am loading the data, what happens if one of my disks goes boom? Or one of my machines goes boom? Well, now I have a problem. If I fail over to another data center, how do I reconcile? What a nightmare!

So I need a data store that we can take a sledgehammer to and it will keep running.

Hey, if I have this big historical data store, I still need to query it while it is being updated. Ideally, I would like to also be running analysis and back-testing during the day. Scheduling jobs to run at night is so very ’90s.

So my data store has to facilitate both interactive query and batch analysis.

But wait, doing all of this means that I am going to have to figure out how to use the same code for back-testing that I use to generate orders during market hours. It’s either that or maintain some visual, script-based, or otherwise separate harness for my Java or C++ code. Yet another nightmare.

So, I would like to run the same code against my historical data store that I also use to generate orders during the day.
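
One way to get there, sketched below with hypothetical interfaces (this isn’t necessarily how DarkStar structures it), is to hide the data source behind a single abstraction, so the strategy can’t tell whether ticks are coming off the wire or out of the historical store:

    // Hypothetical abstraction -- the strategy only ever sees these.
    interface TickListener {
        void onTick(String symbol, double price, long size, long epochMillis);
    }

    interface TickSource {
        // One implementation wraps the live feed; another replays the
        // historical store in timestamp order for back-testing.
        void subscribe(String symbol, TickListener listener);
    }

The strategy registers a TickListener once and never changes; only the TickSource implementation differs between market hours and back-testing.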

There’s a bunch of other stuff too: management, instrumentation, removing old data that I don’t need for back-testing; all the stuff we associate with normal day-to-day big data operations. We need to know what’s going on during the day so that we can be proactive. There’s gold in that data!

And one last thing, it would be really cool if most of this technology wasn’t proprietary.  I mean let’s face it, firms that talk more about their investors on their websites than their clients can’t possibly have my best interests at heart.

This is a tall list.  Let’s knock it down, one by one.

Here is a diagram for your consideration.

The diagram isn’t very technical, and that’s on purpose: I’m outlining an algorithm, or methodology, that may or may not solve our problem.

In the diagram, I’ve depicted the database as a cluster of machines.  Instead of using one big machine backed by a SAN, I’m going to use a number of machines.  Each of those machines is going to connect to the Market Data source and get data.

As we receive the data, we’re going to take a peek at it and determine where in the cluster that data needs to live, and while we’re doing that, we’re going to write it to disk. A background process will make sure that the data ends up on the node we want it on. More on why that’s so incredibly important in Part Two – Analyzing the Data in this series.
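
That “peek” can be as simple as hashing the symbol so that every message for a given stock has the same home node. Here’s a rough sketch (hypothetical; a production cluster would more likely use consistent hashing so nodes can join and leave without reshuffling everything):

    // Route each message to a home node by hashing its symbol.
    final class SymbolRouter {
        private final int nodeCount;

        SymbolRouter(int nodeCount) { this.nodeCount = nodeCount; }

        int homeNode(String symbol) {
            // floorMod keeps the result non-negative even when hashCode() is negative.
            return Math.floorMod(symbol.hashCode(), nodeCount);
        }
    }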

Also, I’m going to ask the cluster to replicate everything we’re writing to it; we’re going to end up writing the data a total of two times in this example. I might usually suggest three, but we’ve got two data centers running the same solution, so I’ll actually have four copies of the data.

Why write the data to multiple nodes in the cluster?  First, if a node goes down, I still want to be able to write data.  If the node that goes down is the primary node, I’m going to remember that, and when that node comes back up, I’m going to write all that data to it as part of its “Welcome Back to the Cluster Party!”  And second, if I’m reading data from the cluster (remember, we’ve got algos running and users querying this data), I want my data.  If a node goes down, users don’t really care; they just want their data.  By replicating the data across multiple nodes, I achieve high availability without having to fail over to another instance or data center.
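
Picking where the copies live can be just as mechanical as routing. A sketch of ring-style placement (hypothetical, not DarkStar’s actual placement logic; replication factor is two in our example):

    // Given a primary node, place the remaining copies on the next
    // nodes around the ring.
    final class ReplicaPlacement {
        private final int nodeCount;
        private final int replicationFactor;

        ReplicaPlacement(int nodeCount, int replicationFactor) {
            this.nodeCount = nodeCount;
            this.replicationFactor = replicationFactor;
        }

        // Returns the node indices that should hold a copy of this data.
        int[] nodesFor(int primary) {
            int[] owners = new int[replicationFactor];
            for (int i = 0; i < replicationFactor; i++) {
                owners[i] = (primary + i) % nodeCount;  // walk the ring
            }
            return owners;
        }
    }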

OK, we’ve got the sledgehammer test handled, which is cool, but everything I’ve described above sounds like it’s going to take a lot of time and that the system is going to be very slow.

Not true.  Each node in the cluster above is subscribing to market data.  So if one machine can ingest X messages per second, then a cluster of 10 machines should be able to ingest 10 * X messages per second, right?  Let’s see what that means in a real-world example:

On May 20, 2010, there were about 1.1 billion BBO messages published via SuperFeed (NYSE’s market data platform); those quotes represent the best bid, bid size, offer, and offer size for each stock at any given time.  In terms of messages per second, that’s about 50,000.  In terms of bandwidth, that’s about 4,500K per second.  Hmmm, chunky!

These are intimidating numbers.  But if we divide the problem up a bit and use 10 nodes in a cluster, each node only needs to ingest about 450K per second, or roughly 5,000 messages per second.  All of a sudden, we’re dealing with something quite reasonable.
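
Here’s that arithmetic spelled out, assuming a 6.5-hour trading session (the session length is my assumption):

    // Back-of-the-envelope throughput math from the figures above.
    public class ThroughputMath {
        public static void main(String[] args) {
            long messagesPerDay = 1_100_000_000L;    // ~1.1 billion BBO messages
            long sessionSeconds = 6L * 3600 + 1800;  // 6.5 hours = 23,400 seconds
            long bytesPerSecond = 4_500L * 1024;     // ~4,500K per second
            int nodes = 10;

            long msgsPerSecond = messagesPerDay / sessionSeconds;  // ~47,000
            System.out.printf("Cluster:  %,d msgs/s, %,d KB/s%n",
                    msgsPerSecond, bytesPerSecond / 1024);
            System.out.printf("Per node: %,d msgs/s, %,d KB/s%n",
                    msgsPerSecond / nodes, bytesPerSecond / 1024 / nodes);
        }
    }

That works out to roughly 47,000 messages and 4,500K per second for the cluster, and about 4,700 messages and 450K per second per node.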

So, now we’ve got a cluster that can load the entire market in real time, and it’s redundant.  What about analyzing the data?  That’s in Part Two – Analyzing the Data, which I’ll post next week.

More Stories By Colin Clark

Colin Clark is the CTO for Cloud Event Processing, Inc. and is widely regarded as a thought leader and pioneer in both Complex Event Processing and its application within Capital Markets.

Follow Colin on Twitter at http://twitter.com/EventCloudPro to learn more about cloud-based event processing using map/reduce, complex event processing, and event-driven pattern matching agents. You can also send topic suggestions or questions to [email protected]
