
AI Meets AI: The Key to Actually Implementing AI

With AI use case scenarios becoming more complex, the actual implementation of AI has also grown more challenging. However, a new way of delivering AI may be on the horizon.

An unprecedented amount of progress was made with AI and machine learning in 2017, as numerous companies deployed these technologies in real-world applications. This trend is projected to continue in the near future, with analysts such as Gartner predicting that AI technologies will appear in virtually every new software product by 2020. From healthcare diagnosis to predictive maintenance for machines to conversational chatbots, there is no question that AI is quickly becoming a fundamental requirement for modern businesses.

However, despite the market buzz, many companies are still stumped by the prospect of deriving actual business value from the use of AI. In fact, the introduction of AI to actual products and solutions remains one of the leading sticking points for businesses, with many left asking, “How do I actually implement an AI solution?”

The Growing Complexity of AI Application

As AI goes from a “nice to have” to a “need to have,” it’s also evolving in terms of complexity. Companies need more than just simple, standardized AI services that do image or text recognition—they need complex predictive scenarios that are highly specific to their operations and customized for their business needs.

For example, take a scenario that uses time series data to generate business insights, such as predictive maintenance for the Industrial Internet of Things (IIoT) or customer churn analysis for a customer experience organization. These scenarios can’t be supported by simply calling a generic service with a few specific parameters and getting a result. Getting accurate and actionable results in these predictive scenarios requires a lot of data science work, with data being used over time to iteratively train the models and improve the accuracy and quality of the output. Additionally, businesses are being challenged to engineer new features, run and test many different models and determine the right mix of models to provide the most accurate result—and that’s just to determine what needs to be implemented in a production environment.
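The iterative workflow described above can be sketched in a few lines. This is a minimal, illustrative example, not any vendor's API: three toy forecasters are evaluated walk-forward on a time series, and the one with the lowest mean absolute error would be the candidate for production.

```python
# Minimal sketch of iteratively training and comparing candidate models on
# time series data. The models and data are illustrative stand-ins for real
# data science work (feature engineering, model selection, tuning).

def naive(history):
    """Predict the last observed value."""
    return history[-1]

def moving_average(history, window=3):
    """Predict the mean of the last `window` observations."""
    tail = history[-window:]
    return sum(tail) / len(tail)

def drift(history):
    """Extrapolate the average step between first and last observation."""
    if len(history) < 2:
        return history[-1]
    step = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + step

def walk_forward_mae(model, series, start=5):
    """Predict one step ahead at each point in time; average the errors."""
    errors = []
    for t in range(start, len(series)):
        errors.append(abs(model(series[:t]) - series[t]))
    return sum(errors) / len(errors)

series = [10.0, 10.5, 11.2, 11.8, 12.5, 13.1, 13.9, 14.4, 15.2, 15.8]
candidates = {"naive": naive, "moving_average": moving_average, "drift": drift}
scores = {name: walk_forward_mae(m, series) for name, m in candidates.items()}
best = min(scores, key=scores.get)
print("best model:", best)
```

In a real deployment each "model" would be a trained estimator retrained as new data arrives, but the shape of the loop (train on history, predict, measure, compare) is the same.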

Moreover, businesses need to realize that AI is no longer the exclusive domain of data scientists and the engineers that help prepare the data. The situation is not unlike how digital transformation has branched out from being an IT-driven initiative to a company-wide effort. Organizations must move beyond a siloed AI approach that divides the analytics team and the app development team. App developers need to become more knowledgeable about the data science lifecycle and app designers need to think about how predictive insights can drive the application experience.

To be successful, organizations must identify an approach that enables them to easily put models into production in a language that is appropriate for runtime—without rewriting the analytical model. Organizations need to not only optimize their initial models but also feed data and events back to the production model so that it can be continuously improved upon.
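One way to picture "into production without rewriting the model" plus the feedback loop is to export the trained model in a language-neutral form that the runtime loads directly. The sketch below uses JSON and a trivial threshold model purely for illustration; in practice formats such as PMML or ONNX play this role.

```python
import json

# Illustrative sketch: train a trivial model, export it in a portable form,
# score events at runtime, and feed events back for retraining. The model
# and names here are hypothetical, not a specific product's API.

def train(observations):
    """'Train' a threshold model: flag values above mean + 2 * stdev."""
    n = len(observations)
    mean = sum(observations) / n
    var = sum((x - mean) ** 2 for x in observations) / n
    return {"type": "threshold", "cutoff": mean + 2 * var ** 0.5}

def export_model(model):
    """Serialize the model so the runtime need not rewrite it."""
    return json.dumps(model)

def score(serialized_model, value):
    """Runtime side: load the exported model and score a new event."""
    return value > json.loads(serialized_model)["cutoff"]

history = [10.0, 10.4, 9.8, 10.1, 10.3, 9.9]
deployed = export_model(train(history))

feedback = []
for event in [10.2, 10.0, 14.5]:
    if score(deployed, event):
        print("anomaly:", event)
    feedback.append(event)  # production events flow back to training

# Continuous improvement: retrain on history plus production feedback.
deployed = export_model(train(history + feedback))
```

The key property is that the analytical model and the production runtime share one exported artifact, so improving the model never requires reimplementing it in the runtime's language.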

This may seem like a big, complicated process, but it is the key to the actual implementation of AI—the AI of AI, if you will. Organizations that cannot manage it will find AI out of reach.

The New World of AI

So how can organizations effectively implement AI in a way that enables them to address complex predictive scenarios with limited data science resources? And how do organizations achieve success without retraining their entire development team?

The truth of the matter is that it can’t be done by simply creating a narrowly defined, one-size-fits-all approach that will get you results with only a few parameters. It requires a more complex implementation to be insightful, actionable and valuable to the business.

Take, for example, an IIoT predictive maintenance application that analyzes three months of time series data from sensors on hundreds or thousands of machines and returns the results automatically. This isn’t a simple predictive result set that is returned, but a complete set of detected anomalies that occurred over that time, with prioritized results to eliminate the alert storms that previously made it impossible to operationalize the results. These prioritized results are served up via a work order on a mobile app to the appropriate regional field service personnel, who are then able to perform the necessary maintenance to maximize machine performance. It’s a complex process where the machine learning is automated and feature engineering is done in an unsupervised fashion. The provided results analyze individual sensor data, machine-level data and machine population data and are packaged up in a format that enables the business to take action.
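The detect-then-prioritize pattern described above can be sketched simply. The rolling z-score detector and sensor data below are illustrative stand-ins for the automated, unsupervised machine learning the article describes; the point is the shape of the pipeline: detect per sensor, then rank across the fleet so field service sees a short list instead of an alert storm.

```python
# Illustrative per-sensor anomaly detection with fleet-wide prioritization.

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag points whose z-score vs. the trailing window exceeds threshold."""
    anomalies = []
    for t in range(window, len(readings)):
        tail = readings[t - window:t]
        mean = sum(tail) / window
        std = (sum((x - mean) ** 2 for x in tail) / window) ** 0.5 or 1e-9
        z = abs(readings[t] - mean) / std
        if z > threshold:
            anomalies.append({"t": t, "value": readings[t], "severity": z})
    return anomalies

def prioritize(per_sensor, top_k=2):
    """Merge anomalies across sensors; keep only the most severe few."""
    merged = [dict(a, sensor=s) for s, found in per_sensor.items() for a in found]
    return sorted(merged, key=lambda a: a["severity"], reverse=True)[:top_k]

fleet = {
    "pump-07":    [10, 10, 11, 10, 10, 10, 35, 10, 10, 10],
    "turbine-02": [50, 51, 50, 49, 50, 50, 50, 58, 50, 50],
    "fan-13":     [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
}
work_orders = prioritize({s: detect_anomalies(r) for s, r in fleet.items()})
for order in work_orders:
    print(order["sensor"], "at t =", order["t"])
```

A production system would replace the z-score with learned models over machine-level and population-level data, but the prioritized work-order output is the same idea.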

Welcome to the new world of AI implementation. While it’s a very new concept, the best market definition of this process is currently “anomaly detection.” But not all solutions take the same approach, and not all solutions deliver predictions that lead to better business outcomes.

What you are about to see is a fundamental shift in how machine learning capabilities are delivered—and we aren’t just talking deployment in the cloud versus on-premises. We are talking about a shift from delivering data science tools that make data scientists more effective to delivering data science results that eliminate the need for those tools in the first place. In this brave new world, data scientists would be able to spend their time analyzing and improving the results, instead of wasting their time on non-mission-critical tasks.

The only requirement is that the data be provided in a time series format. From there, you simply upload the data to the cloud (though on-premises options will exist too) and the automated AI does the rest, with accurate results returned within days.
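What "a time series format" typically means in practice is a long, narrow table of timestamped readings. The three-column schema below (timestamp, sensor_id, value) is a common convention, not a documented requirement of any specific service; an actual upload endpoint would be vendor-specific.

```python
import csv
import io

# Illustrative sketch: shape raw readings into a time series CSV payload
# of the kind such a service might ingest. Schema names are hypothetical.

readings = [
    ("2018-06-01T00:00:00Z", "pump-07", 10.2),
    ("2018-06-01T00:05:00Z", "pump-07", 10.4),
    ("2018-06-01T00:00:00Z", "turbine-02", 50.1),
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["timestamp", "sensor_id", "value"])  # header row
for row in readings:
    writer.writerow(row)

payload = buffer.getvalue()  # this CSV text is what would be uploaded
print(payload.splitlines()[0])
```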

Soon you will be able to move from dreams of AI to actual implementation!


More Stories By Progress Blog

Progress offers the leading platform for developing and deploying mission-critical, cognitive-first business applications powered by machine learning and predictive analytics.
