Big Data Isn’t a Thing; Big Data is a State of Mind

“Big Data is dead.” “Big Data is passé.”

“We no longer need Big Data; we need Machine Learning now.”

As we end 2017 and look forward to big (data) things in 2018, the most important lesson of 2017 – in fact, maybe the most important lesson going forward – is that Big Data is NOT a thing. Big Data isn’t about the volume, variety or velocity of data any more than car racing is about the gasoline. Big Data is a state of mind. Big Data is about becoming more effective at leveraging data and analytics to power your business models (see Figure 1).

Figure 1: Becoming More Effective at Leveraging Big Data to Power your Business

 

Big Data is a State of Mind

Big Data is about improving an organization’s ability to leverage data and analytics to power its business models; to optimize key business and operational use cases; to reduce security and compliance risk; to uncover new revenue opportunities; and to create more compelling, differentiated customer engagements. The technical components – building blocks – of a “big data state of mind” include:

  • Data: Ability to collect and aggregate detailed data from a wide variety of data sources including structured (tables, relational databases), semi-structured (log files, XML, JSON) and unstructured data sources (text, video, audio, images); see the sketch after this list for a simple example.
  • Analytics: Ability to leverage advanced analytics (data science, deep learning, machine learning, artificial intelligence) to uncover customer, product, service, operational, and market insights.
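As a rough illustration of the “Data” building block, here is a minimal Python sketch (assuming a pandas environment; the file names, column names and join key are hypothetical examples, not anything from this post) that combines a structured table with a semi-structured JSON log feed into one analysis-ready dataset.

```python
# Minimal sketch: aggregating a structured and a semi-structured data source.
# File names, column names and the join key are hypothetical examples.
import json
import pandas as pd

# Structured source: a relational-style table exported as CSV
# (e.g. columns: customer_id, order_total).
orders = pd.read_csv("orders.csv")

# Semi-structured source: newline-delimited JSON log events
# (e.g. fields: customer_id, page, timestamp).
with open("web_events.jsonl") as f:
    events = pd.DataFrame([json.loads(line) for line in f])

# Aggregate the log events per customer, then join them to the structured table.
clicks = events.groupby("customer_id").size().rename("click_count").reset_index()
combined = orders.merge(clicks, on="customer_id", how="left").fillna({"click_count": 0})

print(combined.head())
```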

These are important technology building blocks, but by themselves, they provide NO business or financial value. These are necessary but not sufficient capabilities for driving the most important aspect of Big Data – Data Monetization!

Big Data is About Data Monetization

Big Data is about exploiting the unique characteristics of data and analytics as digital assets to create new sources of economic value for the organization. Most assets exhibit a one-to-one transactional relationship. For example, the quantifiable value of a dollar as an asset is finite – it can only be used to buy one item or service at a time. The same is true of human assets, as a person can only do one job at a time. But the value of data as an asset is not constrained by those transactional limitations. In fact, data is an unusual asset in that it exhibits an Economic Multiplier Effect: it never depletes or wears out and can be used simultaneously across multiple use cases at near-zero marginal cost. This makes data a powerful asset in which to invest (see Figure 2).
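To make the multiplier concrete, here is a toy Python calculation (all dollar figures are invented for illustration, not taken from the post) showing how reusing the same data set across additional use cases at near-zero marginal cost compounds its return, unlike an asset that can only be applied to one use at a time.

```python
# Toy illustration of the Economic Multiplier Effect described above.
# All dollar figures are made up; only the shape of the math matters.
acquisition_cost = 100_000          # one-time cost to capture and curate the data set
marginal_cost_per_use_case = 2_000  # near-zero incremental cost of reusing that data
value_per_use_case = 50_000         # value each analytic use case returns

for n_use_cases in (1, 3, 5, 10):
    total_cost = acquisition_cost + marginal_cost_per_use_case * n_use_cases
    total_value = value_per_use_case * n_use_cases
    roi = (total_value - total_cost) / total_cost
    print(f"{n_use_cases:>2} use cases: value ${total_value:,}  cost ${total_cost:,}  ROI {roi:.0%}")
```

With these made-up numbers the ROI swings from roughly -50% at a single use case to over 300% at ten, which is the essence of the argument above: the same data asset keeps paying back each time it is reused.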

Figure 2: Economic Multiplier Effect

 

Understanding the economic characteristics of data and analytics as digital assets is the first step in monetizing your data via predictive, prescriptive and preventative analytics.

See the blog series “Determining Economic Predicted Value of Data (EPvD)” for more insight into how organizations can exploit the unique economic characteristics of data and analytics as digital assets.

Big Data is a Business Discipline

Leading organizations that embrace digital transformation see data and analytics as a business discipline, not just another IT task. And tomorrow’s business leaders must become experts at leveraging data and analytics to power their business models. The most valuable companies today (from a market cap perspective) are those organizations that are mastering the use of Big Data (with artificial intelligence, machine learning, deep learning) to derive and drive new sources of value (see Figure 3).

Figure 3: Most Valuable Companies in the World

 

At the University of San Francisco, I teach the “Big Data MBA” course, where I educate tomorrow’s business leaders on how to embrace data and analytics as the next modern business discipline. A Master of Business Administration (MBA) provides theoretical and practical training in important business disciplines such as accounting, finance, operations management and marketing. We want to treat analytics as a similar business discipline.

Data Science is the Data Monetization Engine

Data Science is used to identify the variables and metrics that might be better predictors of business and operational performance, and to quantify cause-and-effect in order to predict likely actions and outcomes; prescribe corrective actions or recommendations; prevent costly outcomes; and continuously learn and adapt as the environment changes.
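As a minimal sketch of what “identifying the variables and metrics that might be better predictors” can look like in code (assuming scikit-learn and synthetic data; this is one common screening approach, not the author’s specific methodology), a tree-based model can be fit to the candidate metrics and its feature importances used to rank them:

```python
# Minimal sketch: screening candidate variables for predictive power.
# Uses synthetic data; in practice the features would be the brainstormed
# business metrics and the target an outcome such as churn or unplanned downtime.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2_000, n_features=8, n_informative=3, random_state=0)
features = [f"candidate_metric_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank the candidate metrics by how much each one contributes to the predictions.
importance = pd.Series(model.feature_importances_, index=features).sort_values(ascending=False)
print(importance)
```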

To do that, data scientists need to learn a wide variety of statistical, data mining, deep learning, machine learning, and artificial intelligence techniques and tools (see Figure 4).

Figure 4: Examples of Advanced Analytics

 

Data monetization requires close collaboration with business stakeholders who own the important responsibility of setting the business and analytics strategy. These stakeholders also unambiguously define the hypotheses to be tested, and articulate how the resulting analytic outcomes will be operationalized and monetized. The key to enlisting business leadership is to turn them into “Citizens of Data Science” and to teach them to “Think Like a Data Scientist.”

This includes:

  • Identifying, validating and prioritizing use cases, beginning with an end in mind (see the prioritization sketch after this list).
  • Developing personas for each key business stakeholder and constituent to understand their responsibilities, key decisions, and impediments to success.
  • Brainstorming variables and metrics that might be better predictors of performance.
  • Creating actionable, prescriptive analytic insights and recommendations that drive measurably better operational and business decisions.
  • Articulating how the analytic outcomes will be operationalized or put into action.
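One lightweight way to support the first step, prioritizing use cases with an end in mind, is a value-versus-feasibility scoring exercise. The sketch below is purely illustrative (the use cases, 1-10 scores, and scoring scheme are hypothetical, not the “Think Like a Data Scientist” methodology itself) and simply ranks candidate use cases by a combined score:

```python
# Toy sketch: ranking candidate use cases by business value and implementation
# feasibility. The use cases and 1-10 scores are hypothetical placeholders.
use_cases = {
    "Reduce customer churn":  {"value": 9, "feasibility": 7},
    "Predictive maintenance": {"value": 8, "feasibility": 5},
    "Fraud detection":        {"value": 7, "feasibility": 8},
    "Campaign targeting":     {"value": 6, "feasibility": 9},
}

# Rank by a simple combined score; a real workshop would score these with the
# business stakeholders and weight the dimensions to fit the organization.
ranked = sorted(use_cases.items(),
                key=lambda kv: kv[1]["value"] * kv[1]["feasibility"],
                reverse=True)

for name, scores in ranked:
    print(f"{name:<25} value={scores['value']}  feasibility={scores['feasibility']}")
```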

Check out the infographic “Think Like A Data Scientist” for more information. It also includes a workbook that guides the “thinking like a data scientist” process.

A Big Data State of Mind

One of my favorite articles (So, What Is Machine Learning Anyways?) does a great job of summarizing the important relationship between Big Data and Machine Learning:

  • Big Data started when the Internet created a treasure trove of website and search data. Today that data has been augmented by social media, mobile, wearables, IoT, and even microphones and cameras that are constantly collecting information.
  • With so much data readily available, machine learning provides a method to organize that data into meaningful patterns. Machine learning sorts through those troves of data to discern patterns and predict new ones.
  • Machine learning plays a key role in the development of artificial intelligence. Artificial intelligence refers to a machine’s ability to perform intelligent tasks, whereas machine learning refers to the automated process by which machines weed out meaningful patterns in data. Without machine learning, artificial intelligence wouldn’t be possible.

Though there are many critical building blocks associated with Big Data, leading organizations are quickly realizing that Big Data isn’t a thing.

Big Data is a mindset about transforming business leadership to become more effective at leveraging data and analytics to power the organization’s business models (see Figure 5).

Figure 5: Leveraging Data and Analytics to Create an Intelligent Enterprise

 

So, how effective is your organization at leveraging #BigData and #MachineLearning to power your business models and create an intelligent organization?


More Stories By William Schmarzo

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting strategy and defining the Big Data service offerings for Hitachi Vantara as CTO, IoT and Analytics.

Previously, as a CTO within Dell EMC’s 2,000+ person consulting organization, he worked with organizations to identify where and how to start their big data journeys. He’s written white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power an organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow where he teaches the “Big Data MBA” course. Bill also just completed a research paper on “Determining The Economic Value of Data”. Onalytica recently ranked Bill as the #4 Big Data influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.
