diff --git a/examples/clusters/crusoe/README.md b/examples/clusters/crusoe/README.md
index 17b1b5f01..fb6f4b1a7 100644
--- a/examples/clusters/crusoe/README.md
+++ b/examples/clusters/crusoe/README.md
@@ -6,12 +6,12 @@ title: Distributed workload orchestration on Crusoe with dstack
 
 Crusoe offers two ways to use clusters with fast interconnect:
 
-* [Kubernetes](#kubernetes) – Lets you interact with clusters through the Kubernetes API and includes support for NVIDIA GPU operators and related tools.
-* [Virtual Machines (VMs)](#vms) – Gives you direct access to clusters in the form of virtual machines.
+* [Crusoe Managed Kubernetes](#kubernetes) – Lets you interact with clusters through the Kubernetes API and includes support for NVIDIA and AMD GPU operators and related tools.
+* [Virtual Machines (VMs)](#vms) – Gives you direct access to clusters in the form of virtual machines with NVIDIA and AMD GPUs.
 
 Both options use the same underlying networking infrastructure. This example walks you through how to set up Crusoe clusters to use with `dstack`.
 
-## Kubernetes
+## Crusoe Managed Kubernetes { #kubernetes }
 
 !!! info "Prerequisites"
     1. Go to `Networking` → `Firewall Rules`, click `Create Firewall Rule`, and allow ingress traffic on port `30022`. This port will be used by the `dstack` server to access the jump host.
@@ -21,7 +21,7 @@ Both options use the same underlying networking infrastructure. This example wal
 
 ### Configure the backend
 
-Follow the standard instructions for setting up a [Kubernetes](https://dstack.ai/docs/concepts/backends/#kubernetes) backend:
+Follow the standard instructions for setting up a [`kubernetes`](https://dstack.ai/docs/concepts/backends/#kubernetes) backend:
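
For context, the `kubernetes` backend referenced above is configured in the `dstack` server config file, typically `~/.dstack/server/config.yml`. The snippet below is only a hedged sketch of what that configuration might look like for this setup: the `proxy_jump` field names and the placeholder node IP are assumptions, and the linked backend docs remain the source of truth for the exact schema.

```yaml
projects:
- name: main
  backends:
  - type: kubernetes
    kubeconfig:
      filename: ~/.kube/config    # kubeconfig for the Crusoe Managed Kubernetes cluster
    proxy_jump:                   # jump-host settings; field names may vary by dstack version
      hostname: <node-public-ip>  # placeholder: public IP of a reachable cluster node
      port: 30022                 # the firewall port opened in the prerequisites above
```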