By Jeremy Geelan
June 7, 2013 06:00 AM EDT
Big Data applications such as Hadoop and Hive are becoming widely adopted and mainstream. Growing numbers of users will select the cloud, whether private or public, as an efficient and scalable deployment vehicle for these large-scale distributed applications.
Hadoop implementations can involve the deployment of dozens to thousands of application nodes, a scale that quickly becomes time-consuming to manage. Embracing Big Data applications is one thing, but users will struggle to manage them given the lack of tools designed for such complex applications in the cloud. New solutions are required that enable far simpler setup, configuration and provisioning of complex Hadoop and Hive deployments at large scale, and that manage them over their continuing lifecycle.
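To make the scale problem concrete, here is a minimal sketch of what scripted worker provisioning can look like, using Apache Libcloud as a generic cloud API. This is not the tooling discussed in the session; the provider, credentials, image and size identifiers, and node count below are illustrative placeholders.

```python
# Hypothetical sketch: booting a batch of Hadoop worker nodes via Apache Libcloud.
# Provider, credentials, and image/size IDs are placeholders, not values from the article.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

NODE_COUNT = 40  # "dozens to thousands" of application nodes in real deployments


def provision_workers(count=NODE_COUNT):
    cls = get_driver(Provider.EC2)                        # any supported cloud would do
    driver = cls('ACCESS_KEY', 'SECRET_KEY', region='us-east-1')

    # Look up a prebuilt worker image and an instance size (placeholder IDs).
    image = [i for i in driver.list_images() if i.id == 'ami-hadoop-worker'][0]
    size = [s for s in driver.list_sizes() if s.id == 'm1.large'][0]

    nodes = []
    for n in range(count):
        # Each call boots one worker node; doing this by hand, node by node,
        # is the time sink described above.
        nodes.append(driver.create_node(name='hadoop-worker-%03d' % n,
                                        image=image, size=size))
    return nodes
```

Even with a script like this, configuration, interconnection, and ongoing lifecycle changes remain per-server chores, which is where holistic, model-driven tooling comes in.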
In his session next week at 12th Cloud Expo | Cloud Expo New York [June 10-13, 2013], Paul Speciale will demonstrate how to speed up Hadoop deployments and discuss the issues involved in managing large distributed workloads in public and private clouds. There is an increasing need for solutions that manage these types of distributed workloads holistically rather than as piecemeal servers. Through new model-driven technologies, we can not only provision multi-tier and distributed applications in a few simple clicks, but also enable ongoing lifecycle management of these applications as they scale and evolve over time.
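The model-driven idea can be illustrated with a short, purely hypothetical sketch: the whole distributed application is described once as a model, and both initial provisioning and later lifecycle actions (such as scaling a tier) are derived from that model rather than handled server by server. The tier names, counts, and the launch/terminate callables are assumptions for illustration, not the session's actual technology.

```python
# Hypothetical application model: tiers, counts, and images are illustrative only.
APP_MODEL = {
    'name': 'hadoop-analytics',
    'tiers': {
        'namenode': {'count': 1,  'size': 'large',  'image': 'hadoop-master'},
        'datanode': {'count': 20, 'size': 'large',  'image': 'hadoop-worker'},
        'hive':     {'count': 2,  'size': 'medium', 'image': 'hive-server'},
    },
}


def provision(model, launch):
    """Provision every tier from the model; `launch` is any function that boots one node."""
    for tier, spec in model['tiers'].items():
        for i in range(spec['count']):
            launch('%s-%s-%02d' % (model['name'], tier, i), spec['image'], spec['size'])


def scale(model, tier, new_count, launch, terminate):
    """Lifecycle management: grow or shrink one tier by editing the model, not servers."""
    spec = model['tiers'][tier]
    current = spec['count']
    if new_count > current:
        for i in range(current, new_count):
            launch('%s-%s-%02d' % (model['name'], tier, i), spec['image'], spec['size'])
    else:
        for i in range(new_count, current):
            terminate('%s-%s-%02d' % (model['name'], tier, i))
    spec['count'] = new_count
```

The design point is that every change flows through the model, so the same description that provisioned the application also drives how it scales and evolves over its lifecycle.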
Paul Speciale is Chief Marketing Officer at Appcara, where he leads Marketing and Alliances for AppStack. He has been fortunate to be part of several early cloud computing companies: he was VP of Product Management at Q-layer, one of the first cloud orchestration companies (acquired by Sun Microsystems in 2009); led the launch of the Savvis VPDC service at Savvis; and worked at Amplidata, a leader in cloud storage solutions. He has over 20 years of experience with a number of startups and Fortune 500 companies in storage, data management, and cloud computing.