EXO: End-to-End Local AI Cluster Deployment Training Course
EXO is an open-source framework that links Apple Silicon devices into a distributed AI cluster, allowing local inference of cutting-edge models that exceed the capacity of a single device.
This instructor-led, live training (available online or onsite) targets system administrators and DevOps engineers looking to deploy, configure, and manage EXO clusters for private LLM inference across multiple Apple Silicon or Linux nodes.
Upon completion of this training, participants will be able to:
- Install and set up EXO on macOS and Linux nodes.
- Activate automatic device discovery and construct multi-node clusters.
- Enable and verify RDMA over Thunderbolt 5 for ultra-low-latency inter-device communication.
- Deploy frontier models (such as DeepSeek, Qwen, and Llama) across clustered devices.
- Monitor cluster health and troubleshoot common deployment challenges.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practical sessions.
- Hands-on implementation in a live-lab environment.
Customization Options
- To request tailored training, please contact us to arrange details.
Course Outline
Introduction to EXO and Local AI Clustering
- Overview of the EXO framework and the exo-explore ecosystem.
- Comparing centralized cloud inference versus distributed local inference.
- Architecture: libp2p device discovery, MLX backend, dashboard, and API layers.
- Hardware requirements: Apple Silicon (M3 Ultra, M4 Pro/Max), Thunderbolt 5, shared storage.
Installing EXO on macOS
- Setting up Xcode, the Metal Toolchain, and macOS prerequisites.
- Installing uv, Node.js, and the Rust nightly toolchain.
- Installing the pinned macmon fork for Apple Silicon monitoring.
- Cloning the repository and building the dashboard with npm.
- Running EXO from source and verifying the localhost:52415 dashboard.
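The macOS install steps above can be condensed into a shell session. This is a minimal sketch assuming the public exo-explore/exo repository and its default layout; the build targets, subdirectory names, and launch command are assumptions that may differ between releases, so treat it as orientation rather than a definitive script.

```shell
# Minimal from-source macOS install sketch (assumes Xcode and the Metal
# Toolchain are already installed; paths and commands are illustrative).

# Toolchain prerequisites: uv, Rust nightly, Node.js.
curl -LsSf https://astral.sh/uv/install.sh | sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain nightly
brew install node

# Clone the repository (URL from the exo-explore ecosystem).
git clone https://github.com/exo-explore/exo.git
cd exo

# Build the dashboard with npm ("dashboard" subdirectory is an assumption).
(cd dashboard && npm install && npm run build)

# Run from source (launch command is an assumption; check the repo README),
# then verify the dashboard responds on its default port.
cargo +nightly run --release &
curl -sf http://localhost:52415 >/dev/null && echo "dashboard up"
```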
Installing EXO on Linux
- Installing dependencies via apt or Homebrew on Linux.
- Configuring uv, Node.js 18+, and the Rust nightly toolchain.
- Building the dashboard and running EXO in CPU-only mode.
- Directory layout: XDG Base Directory paths for config, data, cache, and logs.
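The XDG directory layout above can be resolved explicitly. A small sketch, assuming EXO uses an `exo` subdirectory under each XDG base path (the subdirectory name is an assumption for illustration; the fallback defaults are the standard XDG ones):

```shell
# Resolve the XDG Base Directory paths a tool like EXO would use; the
# "exo" application subdirectory is an assumption for illustration.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/exo"
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/exo"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/exo"
state_dir="${XDG_STATE_HOME:-$HOME/.local/state}/exo"   # logs conventionally live under state
printf 'config: %s\ndata:   %s\ncache:  %s\nstate:  %s\n' \
  "$config_dir" "$data_dir" "$cache_dir" "$state_dir"
```

Knowing these paths up front matters for maintenance tasks later in the course, such as migrating model caches between nodes.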
Automatic Device Discovery and Cluster Formation
- Understanding libp2p-based auto-discovery across local networks.
- Configuring custom namespaces using EXO_LIBP2P_NAMESPACE for cluster isolation.
- Verifying node membership in the dashboard cluster view.
- Handling discovery failures and network segmentation issues.
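Namespace-based cluster isolation can be sketched as follows; `EXO_LIBP2P_NAMESPACE` is the variable named in the outline, while the namespace values and the `exo` launch command are illustrative assumptions:

```shell
# Run two isolated clusters on the same LAN by giving each a distinct
# libp2p namespace: nodes only auto-discover peers in the same namespace.

# On every node of cluster A:
export EXO_LIBP2P_NAMESPACE="team-a-lab"
exo &

# On every node of cluster B (these nodes will not see cluster A):
# export EXO_LIBP2P_NAMESPACE="team-b-lab"
# exo
```

If nodes fail to appear in the dashboard cluster view, a namespace mismatch between nodes is one of the first things to check, alongside network segmentation.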
Enabling RDMA over Thunderbolt 5
- Understanding RDMA architecture and the claimed 99 percent latency reduction.
- Enabling RDMA in macOS Recovery mode with rdma_ctl.
- Cable requirements and port topology constraints on Mac Studio.
- Ensuring macOS versions match across all cluster nodes.
- Troubleshooting RDMA discovery and DHCP configuration.
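The RDMA enablement step is performed from Recovery mode. A sketch based on the outline's `rdma_ctl` reference; confirm the exact invocation against Apple's and EXO's documentation for your macOS build before running it:

```shell
# 1. Boot into macOS Recovery and open Terminal (Utilities > Terminal).
# 2. Enable RDMA (command name taken from the course outline):
rdma_ctl enable
# 3. Reboot normally, then confirm every cluster node runs the same macOS
#    version, since mismatched versions break RDMA discovery:
sw_vers -productVersion
```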
Deploying Frontier Models
- Using the dashboard to load and shard DeepSeek v3.1, Qwen3-235B, and Llama family models.
- Previewing instance placements via the /instance/previews API endpoint.
- Creating model instances using pipeline or tensor-parallel sharding.
- Configuring custom model cards from the HuggingFace hub.
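Instance placements can be inspected over HTTP before committing to a sharding layout. A sketch assuming the dashboard's default port 52415 and the `/instance/previews` endpoint named above; the response schema is not documented here, so the JSON is pretty-printed rather than parsed:

```shell
# Preview how a model instance would be placed across the cluster before
# creating it (endpoint path from the outline; port from the dashboard).
curl -s "http://localhost:52415/instance/previews" | python3 -m json.tool
```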
Monitoring and Troubleshooting
- Reading EXO logs and understanding distributed tracing.
- Interpreting cluster health in the dashboard cluster view.
- Diagnosing worker node failures and reconnection behavior.
- Using EXO_TRACING_ENABLED for performance bottleneck analysis.
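Tracing can be toggled per run for bottleneck analysis. A sketch: `EXO_TRACING_ENABLED` comes from the outline, while the value `1`, the launch command, and the log-scanning pattern are assumptions.

```shell
# Enable distributed tracing for one run, leaving normal runs unaffected,
# and capture the output for later inspection.
EXO_TRACING_ENABLED=1 exo 2>&1 | tee exo-trace.log

# Afterwards, scan the captured log for slow spans (pattern is a guess at
# typical tracing output; adjust to the actual log format).
grep -i -E 'span|latency|ms' exo-trace.log | tail -n 20
```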
Cluster Maintenance and Updates
- Updating EXO binaries and performing dashboard rebuild procedures.
- Migrating model caches and managing pre-downloaded models over NFS.
- Gracefully removing nodes and rebalancing workloads.
Requirements
- A solid understanding of networking fundamentals (IP, subnetting, firewalls).
- Experience with command-line administration on macOS or Linux.
- Familiarity with Python package management (pip/uv) and Node.js tooling.
Audience
- System administrators.
- DevOps engineers.
- AI infrastructure architects responsible for on-premise LLM deployment.
Open Training Courses require 5+ participants.
Related Courses
Advanced LangGraph: Optimization, Debugging, and Monitoring Complex Graphs
35 Hours
LangGraph serves as a framework designed for constructing stateful, multi-agent LLM applications through composable graphs, featuring persistent state and precise control over execution flows.
This instructor-led live training, available online or onsite, targets advanced AI platform engineers, AI-focused DevOps professionals, and ML architects who aim to optimize, debug, monitor, and manage production-grade LangGraph systems.
Upon completing this training, participants will be capable of:
- Designing and optimizing complex LangGraph topologies to enhance speed, reduce costs, and improve scalability.
- Ensuring reliability through retries, timeouts, idempotency, and checkpoint-based recovery mechanisms.
- Debugging and tracing graph executions, inspecting states, and systematically reproducing production issues.
- Instrumenting graphs with logs, metrics, and traces, deploying them to production, and monitoring SLAs and costs.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and hands-on practice.
- Hands-on implementation in a live-lab environment.
Customization Options
- To request customized training for this course, please contact us to arrange.
Building Coding Agents with Devstral: From Agent Design to Tooling
14 Hours
Devstral is an open-source framework designed for building and running coding agents that can interact with codebases, developer tools, and APIs to enhance engineering productivity.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level ML engineers, developer-tooling teams, and SREs who wish to design, implement, and optimize coding agents using Devstral.
By the end of this training, participants will be able to:
- Set up and configure Devstral for coding agent development.
- Design agentic workflows for codebase exploration and modification.
- Integrate coding agents with developer tools and APIs.
- Implement best practices for secure and efficient agent deployment.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Open-Source Model Ops: Self-Hosting, Fine-Tuning and Governance with Devstral & Mistral Models
14 Hours
Devstral and Mistral are open-source AI technologies designed for flexible deployment, fine-tuning, and scalable integration.
This instructor-led, live training (available online or onsite) targets intermediate to advanced ML engineers, platform teams, and research engineers who wish to self-host, fine-tune, and govern Mistral and Devstral models in production environments.
By the end of this training, participants will be able to:
- Set up and configure self-hosted environments for Mistral and Devstral models.
- Apply fine-tuning techniques to achieve domain-specific performance.
- Implement versioning, monitoring, and lifecycle governance.
- Ensure security, compliance, and responsible usage of open-source models.
Course Format
- Interactive lectures and discussions.
- Hands-on exercises focused on self-hosting and fine-tuning.
- Live-lab implementation of governance and monitoring pipelines.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Fiji: Image Processing for Biotechnology and Toxicology
14 Hours
This instructor-led, live training in Greece (online or onsite) is designed for beginner to intermediate-level researchers and laboratory professionals who wish to process and analyze images of histological tissues, blood cells, algae, and other biological samples.
Upon completion of this training, participants will be capable of:
- Navigating the Fiji interface and applying ImageJ’s core functionalities.
- Preprocessing and enhancing scientific images to improve analysis quality.
- Performing quantitative image analysis, such as cell counting and area measurement.
- Automating repetitive tasks through the use of macros and plugins.
- Tailoring workflows to meet specific image analysis requirements in biological research.
LangGraph Applications in Finance
35 Hours
LangGraph serves as a framework for constructing stateful, multi-actor LLM applications through composable graphs, enabling persistent state management and precise execution control.
This instructor-led training, available either online or onsite, targets intermediate to advanced professionals seeking to design, implement, and manage LangGraph-based financial solutions with appropriate governance, observability, and compliance standards.
Upon completion of this training, participants will be capable of:
- Designing finance-specific LangGraph workflows that align with regulatory and audit requirements.
- Integrating financial data standards and ontologies into graph states and tooling.
- Implementing reliability, safety measures, and human-in-the-loop controls for critical processes.
- Deploying, monitoring, and optimizing LangGraph systems for performance, cost efficiency, and service level agreements (SLAs).
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practical application.
- Hands-on implementation within a live laboratory environment.
Customization Options
- To request a tailored version of this course, please contact us to make arrangements.
LangGraph Foundations: Graph-Based LLM Prompting and Chaining
14 Hours
LangGraph serves as a framework designed for constructing graph-structured Large Language Model (LLM) applications that facilitate planning, branching, tool integration, memory management, and controlled execution.
This instructor-led live training, available either online or onsite, is tailored for beginner-level developers, prompt engineers, and data practitioners eager to design and implement reliable, multi-step LLM workflows using LangGraph.
Upon completion of this training, participants will be able to:
- Articulate core LangGraph concepts—such as nodes, edges, and state—and determine appropriate use cases.
- Develop prompt chains that support branching, tool invocation, and memory retention.
- Incorporate retrieval mechanisms and external APIs into graph-based workflows.
- Test, debug, and evaluate LangGraph applications to ensure reliability and safety.
Course Format
- Interactive lectures paired with facilitated discussions.
- Guided laboratory sessions and code walkthroughs conducted within a sandbox environment.
- Scenario-based exercises focused on design, testing, and evaluation.
Customization Options
- To request tailored training for this course, please contact us to make arrangements.
LangGraph in Healthcare: Workflow Orchestration for Regulated Environments
35 Hours
LangGraph facilitates stateful, multi-actor workflows driven by LLMs, offering precise control over execution paths and state persistence. In the healthcare sector, these capabilities are essential for ensuring compliance, enhancing interoperability, and developing decision-support systems that integrate seamlessly with medical workflows.
This instructor-led live training, available both online and onsite, targets intermediate to advanced-level professionals aiming to design, implement, and manage LangGraph-based healthcare solutions while navigating regulatory, ethical, and operational challenges.
Upon completion of this training, participants will be able to:
- Design healthcare-specific LangGraph workflows with a focus on compliance and auditability.
- Integrate LangGraph applications with medical ontologies and standards such as FHIR, SNOMED CT, and ICD.
- Apply best practices for reliability, traceability, and explainability in sensitive environments.
- Deploy, monitor, and validate LangGraph applications in healthcare production settings.
Format of the Course
- Interactive lecture and discussion.
- Hands-on exercises with real-world case studies.
- Implementation practice in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
LangGraph for Legal Applications
35 Hours
LangGraph serves as a framework designed to construct stateful, multi-agent LLM applications through composable graphs that maintain persistent state and offer precise control over execution processes.
This instructor-led live training, available either online or on-site, targets intermediate to advanced professionals aiming to design, implement, and manage LangGraph-based legal solutions equipped with robust compliance, traceability, and governance controls.
Upon completion of this training, participants will be capable of:
- Designing legal-specific LangGraph workflows that ensure auditability and compliance.
- Integrating legal ontologies and document standards into graph state and processing mechanisms.
- Implementing guardrails, human-in-the-loop approvals, and traceable decision pathways.
- Deploying, monitoring, and maintaining LangGraph services in production environments with effective observability and cost management.
Format of the Course
- Interactive lectures and discussions.
- Extensive exercises and practical application.
- Hands-on implementation within a live laboratory environment.
Course Customization Options
- To request a customized training session for this course, please contact us to make arrangements.
Building Dynamic Workflows with LangGraph and LLM Agents
14 Hours
LangGraph serves as a framework designed for constructing graph-based LLM workflows that enable branching, tool utilization, memory management, and controlled execution.
This instructor-led, live training session, available both online and onsite, targets intermediate-level engineers and product teams aiming to merge LangGraph’s graph logic with LLM agent loops to create dynamic, context-aware applications such as customer support assistants, decision trees, and information retrieval systems.
Upon completion of this training, participants will be capable of:
- Designing graph-based workflows that effectively coordinate LLM agents, tools, and memory.
- Implementing conditional routing, retries, and fallback mechanisms to ensure robust execution.
- Integrating retrieval processes, APIs, and structured outputs into agent loops.
- Evaluating, monitoring, and securing agent behavior to enhance reliability and safety.
Course Format
- Interactive lectures accompanied by facilitated discussions.
- Guided laboratory exercises and code walkthroughs within a sandbox environment.
- Scenario-based design exercises and peer reviews.
Customization Options for the Course
- To request customized training for this course, please contact us to arrange.
LangGraph for Marketing Automation
14 Hours
LangGraph is a graph-based orchestration framework designed to facilitate conditional, multi-step workflows involving LLMs and tools, making it ideal for automating and personalizing content pipelines.
This instructor-led live training (available online or onsite) is tailored for intermediate-level marketers, content strategists, and automation developers aiming to implement dynamic, branching email campaigns and content generation pipelines using LangGraph.
Upon completion of this training, participants will be capable of:
- Designing graph-structured content and email workflows that incorporate conditional logic.
- Integrating LLMs, APIs, and data sources to enable automated personalization.
- Managing state, memory, and context across multi-step campaigns.
- Evaluating, monitoring, and optimizing workflow performance and delivery outcomes.
Course Format
- Interactive lectures and group discussions.
- Hands-on labs focused on implementing email workflows and content pipelines.
- Scenario-based exercises covering personalization, segmentation, and branching logic.
Course Customization Options
- For requests regarding customized training for this course, please contact us to arrange details.
Le Chat Enterprise: Private ChatOps, Integrations & Admin Controls
14 Hours
Le Chat Enterprise offers a private ChatOps solution that delivers secure, customizable, and governed conversational AI capabilities for organizations, including support for RBAC, SSO, connectors, and enterprise app integrations.
This instructor-led, live training (online or onsite) targets intermediate-level product managers, IT leads, solution engineers, and security/compliance teams who want to deploy, configure, and govern Le Chat Enterprise in enterprise environments.
By the end of this training, participants will be able to:
- Set up and configure Le Chat Enterprise for secure deployments.
- Enable RBAC, SSO, and compliance-driven controls.
- Integrate Le Chat with enterprise applications and data stores.
- Design and implement governance and admin playbooks for ChatOps.
Format of the Course
- Interactive lecture and discussion.
- Many exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Cost-Effective LLM Architectures: Mistral at Scale (Performance / Cost Engineering)
14 Hours
Mistral is a family of high-performance large language models, specifically engineered for cost-effective, large-scale production deployment.
This instructor-led live training, available online or onsite, is designed for advanced infrastructure engineers, cloud architects, and MLOps leaders who aim to design, deploy, and optimize Mistral-based architectures to achieve maximum throughput while minimizing costs.
Upon completing this training, participants will be capable of:
- Implementing scalable deployment patterns for Mistral Medium 3.
- Applying batching, quantization, and efficient serving strategies.
- Optimizing inference costs without compromising performance.
- Designing production-ready serving topologies for enterprise workloads.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practice sessions.
- Hands-on implementation within a live laboratory environment.
Customization Options
- To request customized training for this course, please contact us to make arrangements.
Productizing Conversational Assistants with Mistral Connectors & Integrations
14 Hours
Mistral AI is an open AI platform that enables teams to build and integrate conversational assistants into enterprise and customer-facing workflows.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level product managers, full-stack developers, and integration engineers who wish to design, integrate, and productize conversational assistants using Mistral connectors and integrations.
By the end of this training, participants will be able to:
- Integrate Mistral conversational models with enterprise and SaaS connectors.
- Implement retrieval-augmented generation (RAG) for grounded responses.
- Design UX patterns for internal and external chat assistants.
- Deploy assistants into product workflows for real-world use cases.
Format of the Course
- Interactive lecture and discussion.
- Hands-on integration exercises.
- Live-lab development of conversational assistants.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Enterprise-Grade Deployments with Mistral Medium 3
14 Hours
Mistral Medium 3 is a high-performance, multimodal large language model engineered for robust, production-ready deployment within enterprise settings.
This instructor-led training session, available either online or onsite, targets intermediate to advanced AI/ML engineers, platform architects, and MLOps specialists looking to deploy, optimize, and secure Mistral Medium 3 for business applications.
Upon completion of this training, participants will be equipped to:
- Deploy Mistral Medium 3 via API or by self-hosting.
- Enhance inference performance while managing costs.
- Develop multimodal applications using Mistral Medium 3.
- Apply security and compliance standards suitable for enterprise environments.
Course Format
- Interactive lectures and discussions.
- Extensive practical exercises.
- Hands-on implementation within a live lab environment.
Customization Options
- For customized training solutions, please reach out to us to arrange details.
Mistral for Responsible AI: Privacy, Data Residency & Enterprise Controls
14 Hours
Mistral AI offers an open and enterprise-ready AI platform equipped with features designed to facilitate secure, compliant, and responsible AI deployment.
This instructor-led training, available both online and onsite, is tailored for intermediate-level compliance leads, security architects, and legal/operations stakeholders who aim to implement responsible AI practices using Mistral by leveraging privacy, data residency, and enterprise control mechanisms.
Upon completion of this training, participants will be able to:
- Implement privacy-preserving techniques within Mistral deployments.
- Apply data residency strategies to satisfy regulatory requirements.
- Establish enterprise-grade controls, including RBAC, SSO, and audit logs.
- Evaluate vendor and deployment options to ensure compliance alignment.
Format of the Course
- Interactive lecture and discussion.
- Compliance-focused case studies and exercises.
- Hands-on implementation of enterprise AI controls.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.