IBM Orchestration Pipelines
With the Pipelines editor, you can orchestrate an end-to-end flow of assets, from creation through deployment, on a graphical canvas. Assemble and configure a pipeline to create, train, deploy, and update machine learning models and Python scripts.
For information on supported regions for Orchestration Pipelines, see Regional limitations for Cloud Pak for Data as a Service.
Design a pipeline by dragging nodes onto the canvas, specifying objects and parameters, then running and monitoring the pipeline.
Automating the path to production
Putting a model into production is a multi-step process. Data must be loaded and processed, and models must be trained and evaluated before they are deployed and tested. AI experiments, data analyses, and machine learning models all require ongoing observation, evaluation, and updating over time to avoid bias or drift.
The following graphic shows one example of a model lifecycle that you can automate; it is one of many possible flows that you can create.
You can automate the pipeline to:
- load and process data securely from a wide range of internal sources and external connections.
- get the results that you want by building, running, evaluating, and deploying models or running scripts in a cohesive way.
- simplify running the paths of your flow by creating branches and collecting results with clear visualizations (see the sketch after this list).
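For orientation, the following Python sketch models these stages as plain functions so you can see the shape of a flow that the canvas automates. The function names, sample data, and accuracy threshold are hypothetical illustrations; this is not the Orchestration Pipelines API, which you work with through the graphical editor.

```python
# Conceptual sketch only: plain Python standing in for the stages that a
# pipeline on the canvas would orchestrate as nodes. The function names,
# data, and threshold are hypothetical, not the Orchestration Pipelines API.

def load_data():
    # Stand-in for a node that loads data from a connection or data asset.
    return [(0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1)]

def train_model(rows):
    # Stand-in for a training node (for example, an AutoAI experiment).
    # Here, "training" just picks a decision threshold from the data.
    positives = [x for x, y in rows if y == 1]
    return {"threshold": min(positives)}

def evaluate_model(model, rows):
    # Stand-in for an evaluation node that scores the trained model.
    correct = sum(1 for x, y in rows if (x >= model["threshold"]) == bool(y))
    return correct / len(rows)

def deploy_model(model):
    # Stand-in for a deployment node that promotes the model.
    print(f"Deploying model with threshold {model['threshold']}")

def notify_failure(accuracy):
    # Stand-in for an alternate branch, such as sending a notification.
    print(f"Accuracy {accuracy:.2f} is below target; model not deployed")

if __name__ == "__main__":
    data = load_data()
    model = train_model(data)
    accuracy = evaluate_model(model, data)
    # Branching: the pipeline follows one path or the other based on results.
    if accuracy >= 0.9:
        deploy_model(model)
    else:
        notify_failure(accuracy)
```

In an actual pipeline, each of these functions corresponds to a node on the canvas, and the branch corresponds to a condition that routes the flow to one path or another.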
Pipelines can run experiments and jobs including, but not limited to:
- AutoAI experiments
- Jupyter Notebook jobs
- Data Refinery jobs
- SPSS Modeler jobs
To shorten the time from conception to production, you can assemble the pipeline and rapidly update and test modifications. The Pipelines canvas provides tools to visualize the pipeline, customize it at run time with pipeline parameter variables, and then run it as a trial job or on a schedule.
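As a rough illustration of run-time customization, the following sketch assumes a script step that reads user-defined parameters from environment variables. The parameter names and the environment-variable mechanism are hypothetical examples for this sketch, not the documented Pipelines interface.

```python
# Hypothetical illustration of run-time pipeline parameters.
# MODEL_NAME and ACCURACY_TARGET are made-up parameter names, and reading
# them from environment variables is an assumption for this sketch.
import os

# Defaults let the same script run unchanged as a trial job; a scheduled
# run could override the values with different parameter settings.
model_name = os.environ.get("MODEL_NAME", "sample-classifier")
accuracy_target = float(os.environ.get("ACCURACY_TARGET", "0.9"))

print(f"Running step for {model_name} with accuracy target {accuracy_target}")
```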
Use the Pipelines editing tools for more cohesive collaboration between a data scientist and a ModelOps engineer. A data scientist can create and train a model. A ModelOps engineer can then automate the process of training, deploying, and evaluating the model after it is published to a production environment.
Use cases and tutorials
You can include IBM Orchestration Pipelines in your data fabric solution to manage and automate your data and AI lifecycle. For more information on how a data fabric can support your machine learning goals and operations in practical ways, see Use cases.
- Data science and MLOps use case describes how to manage data, create a pipeline for building and deploying models, and evaluate models for fairness and performance.
- Data science and MLOps tutorial: Orchestrate an AI pipeline with data integration
- Data science and MLOps tutorial: Orchestrate an AI pipeline with model monitoring
Additional resources
For more information, see this blog post about automating the AI lifecycle with a pipeline flow.
Learn more
- Add a pipeline to your project and get to know the canvas tools.
- Run the built-in sample pipeline for a hands-on introduction to a pipeline flow.
- Create a pipeline to build an end-to-end flow for your own scenario.
- Run and save pipelines to manage your pipelines as assets.