The challenges of MLOps: AI at the heart of application production

Anticipating risks requires appropriate methodologies, tools, and governance. Here we look at the technical side of artificial intelligence (AI) and how it works in production. Any data/AI project aims to achieve high data quality, coupled with maximum automation of the associated processes.


The exponential growth of data has created new jobs in companies, new technologies, and data labs: spaces for experimenting and innovating with data. But the transition from the data lab to production deployment of these analyses is difficult, because their development does not share the same requirements as traditional software development. As a result, a growing share of AI-based products are rejected before reaching production, or end up unmaintained and obsolete.

MLOps aims to increase the number of AI-based products in production while improving their quality, through better collaboration between business analysts, data scientists, ML engineers, and operations, and by reducing the time between releases. It applies to the different areas of AI: computer vision, natural language processing, machine learning.

The main components of MLOps are:

- an AI marketplace to share reusable models, datasets, and ready-to-use solutions;
- supervision of "analytics" to manage performance and ensure observability;
- orchestration of "analytics" to manage pipelines from testing to deployment across environments;
- the data science workbench (AGL), the AI studio used to develop the models;
- the model repository to hold learning strategies and related metadata;
- the structuring of data and the creation of predictor variables.
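To make one of these components concrete, a model repository can be sketched as a store that maps each model name to its successive versions and their metadata. This is a minimal in-memory illustration; the class and field names are hypothetical, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    version: int
    artifact_path: str
    metadata: dict  # e.g. training data snapshot, hyperparameters, metrics

class ModelRepository:
    """Toy in-memory model repository: keeps every version of a model with its metadata."""
    def __init__(self):
        self._models = {}

    def register(self, name, artifact_path, metadata):
        versions = self._models.setdefault(name, [])
        entry = ModelEntry(len(versions) + 1, artifact_path, metadata)
        versions.append(entry)
        return entry.version

    def latest(self, name):
        return self._models[name][-1]

repo = ModelRepository()
repo.register("churn", "models/churn/v1.pkl", {"auc": 0.81, "training_data": "2023-01"})
repo.register("churn", "models/churn/v2.pkl", {"auc": 0.84, "training_data": "2023-06"})
print(repo.latest("churn").version, repo.latest("churn").metadata["auc"])  # → 2 0.84
```

Keeping every version rather than overwriting is what lets a team roll back a degraded model and audit which learning strategy produced each release.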
Methodology: Among applied methodologies, CRISP-DM is one of the most widely used for data exploration and model building. However, it does not take into account the deployment needs specific to a company's standards, nor does it meet all the needs of the different actors involved in developing AI-based applications. Indeed, operations teams need stability, ML engineers want to evolve their models, data scientists need more data to create more value for users, and finally, the business wants an AI-based product that is stable and monetizable.

These goals of data quality and process automation are achievable by setting up an iteration loop that validates every step from start to finish: from the choice of the dataset and the automated ingestion of the data defined in the governance, to the design and industrialization of use cases and models. The transition from one iteration to the next is confirmed by the successful execution of automated unit and functional tests. Implementing these tests avoids the tunnel effect of testing only at the end of the project and surfaces critical design problems as early as possible. Continuous performance monitoring can automatically detect degradation and re-trigger model training as many times as needed to meet the requirements of the use case.
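The monitor-and-retrain loop described above can be sketched in a few lines. Everything here is a placeholder under stated assumptions: `evaluate` stands in for the automated test suite, `retrain` for a full pipeline run, and the threshold is an assumed use-case requirement:

```python
ACCURACY_THRESHOLD = 0.90  # assumed requirement of the use case
MAX_RETRAINS = 5           # safety cap so a bad pipeline cannot loop forever

def evaluate(model):
    # Stand-in for the automated functional test suite scoring the model on fresh data.
    return model["accuracy"]

def retrain(model):
    # Stand-in for a full training pipeline run producing a new candidate model;
    # the fixed +0.03 improvement is a placeholder for real training.
    return {"accuracy": model["accuracy"] + 0.03}

model = {"accuracy": 0.82}
retrains = 0
while evaluate(model) < ACCURACY_THRESHOLD and retrains < MAX_RETRAINS:
    model = retrain(model)
    retrains += 1
print(retrains, round(model["accuracy"], 2))  # → 3 0.91
```

The cap on retrains matters in practice: if repeated retraining never reaches the threshold, the loop should stop and alert a human rather than burn compute indefinitely.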

During the build phase, AI-based applications should be treated as a software engineering problem that adds a new learning step to the traditional dev/test/deploy/run steps. Good software engineering practices then apply to three artifacts: the code, the data, and the model.
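One way to picture versioning all three artifacts together is a release manifest that pins a content hash for each. This is a minimal sketch, assuming toy stand-ins for the real code, data, and model artifacts:

```python
import hashlib
import json

def fingerprint(artifact) -> str:
    # Content hash: any change to the artifact yields a different identifier.
    payload = json.dumps(artifact, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

# A release pins the exact version of each of the three artifacts.
release = {
    "code": fingerprint({"train.py": "def train(): ..."}),
    "data": fingerprint({"rows": 1000, "schema": ["age", "income"]}),
    "model": fingerprint({"weights": [0.1, 0.2], "hyperparams": {"lr": 0.01}}),
}

# If the data changes (one extra row here), its fingerprint no longer matches
# the manifest, so the drift is detectable before deployment.
changed = fingerprint({"rows": 1001, "schema": ["age", "income"]})
print(changed != release["data"])  # → True
```

Tools such as git handle the code artifact natively; the point of the manifest is that data and model versions deserve the same traceability.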

To apply this software engineering process, Data Scientists (DS) and ML Engineers (MLE) must adopt and adapt best practices from the Agile and DevOps movements, namely: proceed by regular, rapid releases, and apply strict version control. Deployment, integration, and testing must be automated so that the process is repeatable. Finally, developing operating indicators, logs, and alerts must be the responsibility of the DS and MLE.

This implies that the Data Scientist-ML Engineer tandem is responsible for the quality of the code produced from the exploration phases onward, which requires a change in practices:

- systematically use a versioning tool such as git, with meaningful commit messages;
- make changes in small increments;
- treat supervision as a non-functional requirement;
- include automated tests and data sets in the deliveries; these tests must cover not only the code but also the model and the data processing.
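The last point, tests that cover the model and the data processing as well as the code, can be sketched with plain assertions. The fixtures and the one-line model below are hypothetical placeholders for the artifacts a real delivery would ship:

```python
def validate_schema(rows, required_columns):
    # Data test: every record exposes the expected fields.
    return all(required_columns <= set(row) for row in rows)

def accuracy(predict, labelled):
    # Model test helper: fraction of correct predictions on a reference set.
    return sum(predict(x) == y for x, y in labelled) / len(labelled)

# Hypothetical fixtures that would ship with the delivery.
rows = [{"age": 34, "income": 52_000}, {"age": 51, "income": 70_000}]
labelled = [(rows[0], 0), (rows[1], 1)]
predict = lambda x: int(x["income"] > 60_000)

assert validate_schema(rows, {"age", "income"})   # data processing test
assert accuracy(predict, labelled) == 1.0         # model performance test
```

In a real project these checks would run in the CI pipeline alongside the unit tests, so a schema change or a performance regression blocks the release just like a failing code test.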
Technologies: There are many tools and solutions on the market. Cloud providers have attractive offerings that promise rapid deployment, data toolkit products are backed by significant investment, and both the open-source world and hyperscaler solutions are evolving very quickly.

Governance: Developing an MLOps culture bridges the gap between teams and drives the industrialization of AI-based applications. This is achieved by sharing a common vision of AI concepts, by defining a strategy supported by all stakeholders, and by defining the means to be implemented within data/AI projects. The MLOps culture is then fostered by communities that promote a culture of sharing. Finally, treating AI-based applications as products facilitates their dissemination; this involves setting up product teams responsible for their design, development, and team training.

MLOps is a buzzword. As with many other methods, its successful implementation requires a cultural change within organizations. To industrialize AI in production, companies must develop the internal skills that let them adapt to multiple use cases.