
Twitter in the Data Center: A model for data consumption

Twitter has been ridiculously successful at embedding itself into the lives of hundreds of millions. Part of its success is that the service lends itself to a variety of use cases depending on its users' consumption models. These use cases are actually a social manifestation of common data sharing practices. And the same models that helped Twitter raise close to $2B in its IPO are relevant across the infrastructure that makes up all data centers.

Twitter is essentially a message bus. Individual users choose to publish when they want to, and can subscribe to content that is important to them for some reason. Content can be consumed in whatever way makes sense for subscribers – in realtime, at set intervals, when it is directed to them specifically, or whenever something particularly interesting is going on. The magic in Twitter is in relaying information, not in dictating a specific consumption model or requisite set of actions as a result of any of that content.
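To make the analogy concrete, here is a minimal sketch of that publish/subscribe pattern. The class, topic names, and handlers are illustrative assumptions, not anything Twitter or a particular data center product actually exposes.

```python
# A minimal publish/subscribe sketch. Class names, topics, and handlers
# are illustrative assumptions, not a real Twitter or data center API.
from collections import defaultdict
from typing import Callable, DefaultDict, List

Handler = Callable[[str, dict], None]


class MessageBus:
    """Relays updates from publishers to subscribers; it neither produces
    nor consumes content, it only passes it along."""

    def __init__(self) -> None:
        self._followers: DefaultDict[str, List[Handler]] = defaultdict(list)

    def follow(self, topic: str, handler: Handler) -> None:
        # Subscribing is just registering interest in a topic.
        self._followers[topic].append(handler)

    def publish(self, topic: str, update: dict) -> None:
        # The bus relays the update; each follower decides how to consume it.
        for handler in self._followers[topic]:
            handler(topic, update)


# Example: a server deployment announces itself; interested elements react.
bus = MessageBus()
bus.follow("server.deployed", lambda t, u: print(f"storage: allocate volume for {u['app']}"))
bus.follow("server.deployed", lambda t, u: print(f"network: configure VLAN for {u['app']}"))
bus.publish("server.deployed", {"app": "web-frontend", "host": "srv-42"})
```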

The same consumption models exist in the data center.

  • The update: When something important happens, it sometimes makes sense to let everyone know. When a new application instance is deployed, the process likely starts with the server. The act of setting up the server generates information that might be of interest to other elements within the data center. The specific application might require some allocation of storage or some network configuration (VLAN, ACL, QoS, whatever). When a general update is sent out, followers can take appropriate action to ensure a more automated and orchestrated response.
  • The follow: Not all constituents are interesting to everyone. It might not matter to the load balancers what the application performance monitoring tools are doing. Rather than clutter their data timeline, they follow only those elements that are producing content that is relevant to them. This simplifies data consumption and reduces overhead on the subscriber side.
  • The list: It could be that there are lots of interesting sources to follow, but the sum of all of the updates is overwhelming. In this case, updates can be grouped into relevant streams, each of which is consumed differently. It might be sufficient to simply monitor some updates while other updates require careful consideration and subsequent action. For instance, it might be interesting for servers to monitor changes in network state but not necessarily meaningful to act on all changes. Additionally, some streams might require more constant attention with tighter windows around activity, while others can be periodically parsed for general updates.
  • Intermittent monitoring: Some entities might only parse relevant updates periodically. It is not important to stay up-to-date in realtime, and it might not even be important to pay careful attention to every update. They want to consume content asynchronously and in batches. Analytics tools, for example, might be able to poll periodically and report overall health without needing to consume a realtime feed.
  • Trendspotting: Individual updates are interesting, but when multiple sources all report the same thing, it becomes newsworthy. An error message, for example, might indicate a random issue. But a flood of error messages from multiple data center entities might indicate a more serious problem that requires attention (perhaps a DDoS attack). A rough sketch of this model, alongside intermittent monitoring, appears after this list.
  • Message threading: The threading function within Twitter is simply a way of sorting messages to provide context and preserve temporal order around an exchange. This is very similar to reviewing changes or state information during common troubleshooting tasks.
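As a rough illustration of two of these models (intermittent monitoring and trendspotting), the sketch below batches updates for periodic polling and flags a flood of similar reports. The thresholds, window sizes, and message fields are hypothetical.

```python
# Illustrative sketches of two consumption models described above.
# Thresholds, window sizes, and message fields are hypothetical.
import time
from collections import deque


class BatchMonitor:
    """Intermittent monitoring: consume updates asynchronously, in batches,
    rather than as a realtime feed."""

    def __init__(self) -> None:
        self._pending: list = []

    def enqueue(self, update: dict) -> None:
        self._pending.append(update)

    def poll(self) -> list:
        # Periodic poll: hand back everything accumulated since the last poll.
        batch, self._pending = self._pending, []
        return batch


class TrendSpotter:
    """Trendspotting: a single error is noise, but a flood of similar reports
    from distinct sources within a short window is newsworthy."""

    def __init__(self, threshold: int = 10, window_seconds: float = 60.0) -> None:
        self.threshold = threshold
        self.window_seconds = window_seconds
        self._events: deque = deque()  # (timestamp, source) pairs

    def observe(self, source: str) -> bool:
        now = time.time()
        self._events.append((now, source))
        # Age out events that fall outside the window.
        while self._events and now - self._events[0][0] > self.window_seconds:
            self._events.popleft()
        distinct_sources = {src for _, src in self._events}
        return len(self._events) >= self.threshold and len(distinct_sources) > 1
```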

Central to all of these consumption models is the data. In Twitter's case, the 140-character update is the data. The users determine what that data is, with whom that data is shared, and ultimately how that data is consumed. Twitter neither produces the updates nor consumes them. Its sole function is to relay those updates to the appropriate subscribers and to make the data accessible to those doing searches.

When this is working well, Twitter's message bus is a powerful enabler of human orchestration. Twitter's role in the Arab Spring uprisings has been well-documented. Entire movements have been coordinated across the globe using Twitter as a means to broadcast organizing thoughts. In most of these cases, the origin of the information was not even directly connected to its recipients. Merely publishing information was enough to spur action.

When our industry talks about orchestration in the data center, it need not be that different. Orchestration doesn't require a tight linkage between all elements within the data center ecosystem. Orchestration only requires that data be made available as and when it is needed. The rules for data consumption ought not be uniformly applied. Individual elements will consume information in different ways depending on what their needs are. 

This is all to say that delegating application workloads to resources across the data center does not rely on the existence of a tightly integrated system. Integration and orchestration serve different needs. Integration is about performance – controlling both sides of an interface allows for the fine-grained optimization required to eke out every last bit of available performance. Orchestration is about seamless handoff between resources.
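One way to picture the difference, using purely hypothetical function and API names: integration means calling the other side's interface directly and tuning both ends, while orchestration means announcing an event and letting whoever cares react.

```python
# A rough contrast between integration and orchestration.
# All APIs and parameters here are hypothetical.

def deploy_with_integration(app: str, storage_api, network_api) -> None:
    # Integration: the caller knows both sides of the interface and tunes
    # them directly, which is tightly coupled but allows fine-grained control.
    storage_api.allocate(app, size_gb=50, tier="fast")
    network_api.configure_vlan(app, vlan_id=120, qos="gold")


def deploy_with_orchestration(app: str, bus) -> None:
    # Orchestration: only announce that something happened; whichever
    # elements follow this topic react in their own way, on their own time.
    bus.publish("server.deployed", {"app": app})
```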

The SDN movement broadly can be applied to both performance and workflow automation. Different use cases demand one, the other, or both. But architects and administrators will be best served by explicitly determining whether their objective is integration or orchestration. The differences go well beyond semantics. The architectural implications are profound.

[Today's fun fact: Canadian researchers have found that Einstein's brain was 15% wider than normal. And you thought it was the hair.]
 

The post Twitter in the Data Center: A model for data consumption appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills, having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
