
Cassandra’s Data Model

As we prepare to implement our Market Data repository to facilitate algo development and back-testing, you should have downloaded Cassandra and installed it by now.  What, you haven’t?  Well, click here, get it done and then come back for some fun.  To get things up and running once you’ve downloaded Cassandra, click here for some guidance (this assumes you’re running Linux but should point you in the right direction if you’re running Windoze).
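One more piece of setup before we dive in: column families (we'll get to those in a minute) live inside a keyspace, so once the server is running, start cassandra-cli and create a keyspace to work in. Here's a minimal sketch; the keyspace name MarketData is just an illustration, and 9160 is the default client port:

connect localhost/9160;
create keyspace MarketData;
use MarketData;

The defaults are fine for a single-node sandbox; if your version of the cli insists on explicit options, give it a placement strategy and a replication factor.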

CONFUSION

Most of the explanations I've read about Cassandra's data model start by extolling the virtues of NoSQL and the evils of relational databases. And somewhere in that mythic struggle, which summons images straight out of Tolkien's Middle-earth, the point gets lost. And that point is?

IT’S ALL ACTUALLY QUITE EASY

Cassandra thinks about data the way we think about data. Most of us think about data in rows and columns. So does Cassandra. But it also drops some extra stuff we don't need while adding some stuff that we do, and that can be a little disconcerting at first. To make things easier, let's first set a goal for our exercise. We'd like to get a day's worth of market data, by symbol, in ascending time order. We might also like to get the data for a slice of time within that day. Something like, "give me all the BBOs (best bid and offer quotes) for American Airlines for May 20th, 2010," or, "show me the BBOs for American Airlines for May 20th, 2010 between 1 and 2 pm." Let's jump right in.

LET’S GET OUR DATA

As we subscribe to our favorite market data feed, we receive something like:

  • Symbol,
  • Bid,
  • Offer,
  • Bid Size,
  • Offer Size,
  • Time Stamp, and
  • Seq # (most quote vendors provide a Sequence # because multiple quotes can occur for any given Time Stamp)

We're going to call this a column family, which is Cassandra's analog of a table. You can see why the name fits: the columns that belong to the symbol AA make up a family of related information. I'd like to store this data by symbol so I can retrieve it later. Using the Cassandra client (cassandra-cli, found in the bin directory where you installed Cassandra), let's create the bbo column family. It looks like this:

create column family bbo with comparator = 'UTF8Type'
and column_metadata = [
{column_name: symbol, validation_class: UTF8Type},
{column_name: bb, validation_class: UTF8Type},
{column_name: bo, validation_class: UTF8Type},
{column_name: bbSize, validation_class: UTF8Type},
{column_name: boSize, validation_class: UTF8Type},
{column_name: timeStamp, validation_class: LongType},
{column_name: seqNum, validation_class: LongType}
];

And now that we’ve created the schema, let’s insert some quotes.

set bbo['AA']['symbol']='AA';
set bbo['AA']['bb']='123.34';
set bbo['AA']['bo']='123.84';
set bbo['AA']['bbSize']='100';
set bbo['AA']['boSize']='200';
set bbo['AA']['timeStamp']=1234;
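By the way, the schema above also declared a seqNum column, which these inserts skip; if your feed supplies one, it goes in exactly the same way (the value here is made up):

set bbo['AA']['seqNum']=1;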

What happens when you execute a list bbo command now? Easy enough. So what happens when the next quote arrives? We insert it like this:

set bbo['AA']['symbol']='AA';
set bbo['AA']['bb']='125.34';
set bbo['AA']['bo']='125.84';
set bbo['AA']['bbSize']='100';
set bbo['AA']['boSize']='200';
set bbo['AA']['timeStamp']=1235;

And then to see our data, enter this command (again):

list bbo;

When we use the 'list bbo' command, we only see the data last inserted for that row key. What happened to the previous quote? It was overwritten by the new one. So if we wanted to save each quote, we could combine the timestamp with the column name; then we'd be inserting unique columns each time and we'd be fine (a quick sketch of that follows). But there's a different way to do this.
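For the curious, here's roughly what that timestamp-in-the-column-name workaround would look like (a sketch only; the timestamp:field naming is just a convention picked for illustration, and nothing later depends on it):

set bbo['AA']['1234:bb']='123.34';
set bbo['AA']['1234:bo']='123.84';
set bbo['AA']['1235:bb']='125.34';
set bbo['AA']['1235:bo']='125.84';

Because the column family's comparator is UTF8Type, those columns sort lexicographically by name, which works out to time order as long as the timestamps are fixed-width. It works, but the Super Column Family we're about to build is tidier.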

BIG DEAL, I DON’T SEE ANYTHING DIFFERENT HERE

And you don’t, because we haven’t started introducing the special sauce yet.  Well, we kind of did.  In the schema definitions above, you’ll notice we didn’t say that much about what we could or couldn’t insert into a row.  We just started adding columns dynamically.  So, each row, which is identified by a key, can have different columns in it and even a different number of columns.

WELL, THAT’S NOT GOING TO WORK

So, how do we keep track of all the quotes for our symbol? First, a little clarification: the column family above is bbo, and what we've inserted is a row identified by the key 'AA' plus some associated name/value pairs. Think of it as a map of maps. Now we need somewhere to put the bits that change for a given symbol over time. How? We create a Super Column Family, of course. A Super Column Family contains Super Columns, and a Super Column is kind of like another row of data: using our example above, each Super Column we insert holds the bb, bo, bbSize, and so on for one quote. The data above got inserted using 'AA' as our row key, and now we need to pick a key for the Super Column that contains the quote data. Let's pick the Seq # as our Super Column key. Our row key is still the symbol, but I've prepended the date to it, so all the data for a day's worth of AA lands in the same row. This is called a compound, or aggregate, key. It looks like this:

create column family sbbo with column_type = 'Super' and comparator = 'BytesType'
and column_metadata = [
{column_name: bb, validation_class: UTF8Type},
{column_name: bo, validation_class: UTF8Type},
{column_name: bbSize, validation_class: UTF8Type},
{column_name: boSize, validation_class: UTF8Type},
{column_name: timeStamp, validation_class: LongType}
];
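One tweak worth considering, though nothing below depends on it: since the Super Column names will be sequence numbers, a LongType comparator would keep them sorted numerically, and a subcomparator declares how the field names inside each quote sort. A sketch of that variant, shown as a hypothetical separate family (sbbo2) so it doesn't collide with the one we just created:

create column family sbbo2 with column_type = 'Super'
and comparator = 'LongType'
and subcomparator = 'UTF8Type';

The cli's built-in help lists the options your version supports.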

And the insert statements look like this (we're using the Seq # as the Super Column key; it's the value right after the row key, '20100124:AA', below):

set sbbo['20100124:AA'][1234]['bb']='100.00';
set sbbo['20100124:AA'][1234]['bo']='101.00';
set sbbo['20100124:AA'][1235]['bb']='101.00';
set sbbo['20100124:AA'][1235]['bo']='102.00';
set sbbo['20100125:AA'][1234]['bb']='100.00';
set sbbo['20100125:AA'][1234]['bo']='101.00';
set sbbo['20100125:AA'][1235]['bb']='101.00';
set sbbo['20100125:AA'][1235]['bo']='102.00';

Now let’s see what’s in the column family:

list sbbo;

So now it looks like we’re able to store a set of quotes for a symbol for any given day.  Bingo.

All we've really done here is add another map: the row key (date plus symbol) maps to a set of Super Columns keyed by Seq #, and each Super Column is itself a map of quote fields to values. Put another way, we've found a concise, easily addressable way to represent the potentially sparse fact tables that come out of large data analysis (OLAP) projects. Told you it wasn't that hard.

GIVE ME MY DATA

So, now that we’ve inserted a couple of rows of data, let’s see how to get our data.  From above, we want to:

  1. Get all the data for a day’s worth of a symbol, and
  2. Get all the data for a slice of time during a day for a symbol

Assuming you’ve entered the statements above to insert the data, we can retrieve an entire day’s worth of AA with this simple statement:

get sbbo['20100124:AA'];
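And because everything is just nested maps, the same bracket notation drills down further; a couple of sketches, using the Seq # 1234 we inserted earlier:

get sbbo['20100124:AA'][1234];
get sbbo['20100124:AA'][1234]['bb'];

The first returns a single quote (one Super Column); the second returns just its best bid.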

Now that we’ve gone over some of Cassandra’s basics, we’ll get a little more into it in upcoming posts.  That’s where we’ll cover the goal in #2.

THANKS FOR READING



More Stories By Colin Clark

Colin Clark is the CTO for Cloud Event Processing, Inc. and is widely regarded as a thought leader and pioneer in both Complex Event Processing and its application within Capital Markets.

Follow Colin on Twitter at http://twitter.com/EventCloudPro to learn more about cloud-based event processing using map/reduce, complex event processing, and event-driven pattern matching agents. You can also send topic suggestions or questions to [email protected]
