Twitter in the Data Center: A model for data consumption

Twitter has been ridiculously successful at embedding itself into the lives of hundreds of millions. Part of its success is that the service lends itself to a variety of use cases depending on its users' consumption models. These use cases are actually a social manifestation of common data sharing practices. And the same models that helped Twitter raise close to $2B in its IPO are relevant across the infrastructure that makes up all data centers.

Twitter is essentially a message bus. Individual users choose to publish when they want to, and can subscribe to content that is important to them for some reason. Content can be consumed in whatever way makes sense for the subscriber – in realtime, at set intervals, when it is directed to them specifically, or whenever something particularly interesting is going on. The magic in Twitter is in relaying information, not in dictating a specific consumption model or requisite set of actions as a result of any of that content.
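
To make the analogy concrete, the sketch below shows a minimal publish/subscribe relay in Python. The MessageBus class, the topic names, and the followers are all invented for illustration; they are not any particular product's API.

    from collections import defaultdict
    from typing import Callable, Dict, List

    class MessageBus:
        # Illustrative relay: it neither produces nor consumes updates;
        # it only hands each update to whoever follows its topic.

        def __init__(self) -> None:
            self._followers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

        def follow(self, topic: str, handler: Callable[[dict], None]) -> None:
            # Register interest in a topic; how updates are consumed is up to the handler.
            self._followers[topic].append(handler)

        def publish(self, topic: str, update: dict) -> None:
            # Relay the update to every follower of the topic.
            for handler in self._followers[topic]:
                handler(update)

    bus = MessageBus()
    # Two followers of the same topic, each free to consume updates in its own way.
    bus.follow("network.state", lambda u: print(f"monitor noted: {u}"))
    bus.follow("network.state", lambda u: print(f"load balancer reacting to: {u}"))
    bus.publish("network.state", {"event": "link-up", "port": "eth3"})

The relay dictates nothing about consumption, which is exactly the property the rest of this post leans on.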

The same consumption models exist in the data center.

  • The update: When something important happens, it sometimes makes sense to let everyone know. When a new application instance is deployed, it likely starts with the server. The act of setting up the server generates information that might be of interest to other elements within the data center. The specific application might require some allocation of storage or some network configuration (VLAN, ACL, QoS, whatever). When the server sends out a general update, followers can take appropriate action to ensure a more automated and orchestrated response.
  • The follow: Not all constituents are interesting to everyone. It might not matter to the load balancers what the application performance monitoring tools are doing. Rather than clutter their data timeline, they follow only those elements that are producing content that is relevant to them. This simplifies data consumption and reduces overhead on the subscriber side.
  • The list: It could be that there are lots of interesting sources to follow, but the sum of all of the updates is overwhelming. In this case, updates can be grouped into relevant streams, each of which is consumed differently. It might be sufficient to simply monitor some updates while other updates require careful consideration and subsequent action. For instance, it might be interesting for servers to monitor changes in network state but not necessarily meaningful to act on all changes. Additionally, some streams might require more constant attention with tighter windows around activity, while others can be periodically parsed for general updates.
  • Intermittent monitoring: Some entities might only parse relevant updates periodically. It is not important to stay up-to-date in realtime, and it might not even be important to pay careful attention to every update. They want to consume content asynchronously and in batches. Analytics tools, for example, might be able to poll periodically and report overall health without needing to consume a realtime feed.
  • Trendspotting: Individual updates are interesting, but when multiple sources all report the same thing, it becomes newsworthy. An error message, for example, might indicate a random issue. But a flood of error messages from multiple data center entities might indicate a more serious issue that requires attention (perhaps a DDoS attack). The sketch after this list illustrates this model alongside intermittent monitoring.
  • Message threading: The threading function within Twitter is simply a sort that helps provide context and preserve temporal order around an exchange. This is very similar to reviewing changes or state information during common troubleshooting tasks.
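
A rough sketch of two of these models follows, assuming updates arrive as simple dictionaries; the class names, the "severity" field, and the window and threshold values are invented for illustration. The batch poller buffers updates and drains them on its own schedule, while the trend spotter ignores everything except errors and only raises a flag when many arrive within a short window.

    import time
    from collections import deque
    from typing import Deque, List

    class BatchPoller:
        # Intermittent monitoring: buffer updates and consume them in batches.
        def __init__(self) -> None:
            self._inbox: Deque[dict] = deque()

        def receive(self, update: dict) -> None:
            self._inbox.append(update)      # no realtime action, just buffer

        def poll(self) -> List[dict]:
            batch = list(self._inbox)       # drain everything accumulated so far
            self._inbox.clear()
            return batch                    # analyze asynchronously, in bulk

    class TrendSpotter:
        # Trendspotting: one error is noise; many errors in a short window is news.
        def __init__(self, window_s: float = 60.0, threshold: int = 20) -> None:
            self._window_s = window_s
            self._threshold = threshold
            self._error_times: Deque[float] = deque()

        def receive(self, update: dict) -> None:
            if update.get("severity") != "error":
                return                      # follow only what is relevant
            now = time.time()
            self._error_times.append(now)
            while self._error_times and now - self._error_times[0] > self._window_s:
                self._error_times.popleft() # slide the window forward
            if len(self._error_times) >= self._threshold:
                print("error flood detected; investigate (perhaps a DDoS)")

Either consumer could be attached to a relay like the one sketched earlier; the point is that the relay does not dictate how each one consumes the stream.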

The thing of central importance in all of these consumption models is the data. In Twitter's case, the 140-character update is the data. The users determine what that data is, with whom it is shared, and ultimately how it is consumed. Twitter neither produces the updates nor consumes them. Its sole function is to relay those updates to the appropriate subscribers and to make them available to those doing searches.

When this is working well, Twitter's message bus is a powerful enabler of human orchestration. Twitter's role in the Arab Spring uprisings has been well-documented. Entire movements have been coordinated across the globe using Twitter as a means to broadcast organizing thoughts. In most of these cases, the origin of the information was not even directly connected to its recipients. Merely publishing information was enough to spur action.

When our industry talks about orchestration in the data center, it need not be that different. Orchestration doesn't require a tight linkage between all elements within the data center ecosystem. Orchestration only requires that data be made available as and when it is needed. The rules for data consumption need not be uniformly applied; individual elements will consume information in different ways depending on their needs.

This is all to say that delegating application workloads to resources across the data center does not rely on the existence of a tightly integrated system. Integration and orchestration serve different needs. Integration is about performance: controlling both sides of an interface allows for the fine-grained optimization required to eke out every last bit of available performance. Orchestration is about seamless handoff between resources.
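
As a rough illustration of the distinction (every class, method, and topic name below is hypothetical), integration looks like one side driving the other's interface directly, while orchestration looks like announcing an event and letting whoever cares pick it up.

    class Switch:
        # Stand-in for a specific device with a known, vendor-specific API.
        def create_vlan(self, vlan_id: int) -> None:
            print(f"switch: VLAN {vlan_id} created")

        def apply_qos(self, app: str, profile: str) -> None:
            print(f"switch: QoS profile '{profile}' applied for {app}")

    class Bus:
        # Stand-in for a message bus that merely relays events.
        def __init__(self) -> None:
            self.handlers = []

        def publish(self, topic: str, event: dict) -> None:
            for handler in self.handlers:
                handler(topic, event)

    # Integration: the caller knows the device and drives its interface directly.
    # Tight coupling is what enables fine-grained optimization.
    def deploy_with_integration(app: str, switch: Switch) -> None:
        switch.create_vlan(120)
        switch.apply_qos(app, profile="low-latency")

    # Orchestration: the caller only announces what happened; subscribed
    # elements decide for themselves whether and how to respond.
    def deploy_with_orchestration(app: str, bus: Bus) -> None:
        bus.publish("app.deployed", {"app": app})

    bus = Bus()
    bus.handlers.append(lambda topic, e: print(f"network reacting to {topic}: {e}"))
    deploy_with_integration("web-frontend", Switch())
    deploy_with_orchestration("web-frontend", bus)

Neither approach is wrong; they simply optimize for different things.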

The SDN movement can broadly be applied to both performance and workflow automation. Different use cases demand one, the other, or both. But architects and administrators will be best served by explicitly determining whether their objective is integration or orchestration. The differences go well beyond semantics; the architectural implications are profound.

[Today's fun fact: Canadian researchers have found that Einstein's brain was 15% wider than normal. And you thought it was the hair.]
 
