From 2af9cd30b1b155a0d01b8140b8b1926aa1bb518e Mon Sep 17 00:00:00 2001
From: Eduardo Patrocinio
Date: Tue, 9 Dec 2025 10:45:21 -0500
Subject: [PATCH] Fix deprecated tutorial links in DDP tutorial

Replace deprecated dist_tuto.html links with current recommended resources:

- Link to PyTorch Distributed Overview for process group setup
- Link to official torch.distributed docs instead of old tutorial

Fixes #3526
---
 intermediate_source/ddp_tutorial.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/intermediate_source/ddp_tutorial.rst b/intermediate_source/ddp_tutorial.rst
index c63321ad14c..97fd99f700e 100644
--- a/intermediate_source/ddp_tutorial.rst
+++ b/intermediate_source/ddp_tutorial.rst
@@ -20,7 +20,7 @@ multiple machines, making it perfect for large-scale deep learning applications. To use DDP,
 you'll need to spawn multiple processes and create a single instance of DDP per process.
 
 But how does it work? DDP uses collective communications from the
-`torch.distributed <https://pytorch.org/tutorials/intermediate/dist_tuto.html>`__
+`torch.distributed <https://pytorch.org/docs/stable/distributed.html>`__
 package to synchronize gradients and buffers across all processes. This means that each process
 will have its own copy of the model, but they'll all work together to train the model
 as if it were on a single machine.
@@ -71,7 +71,7 @@ Basic Use Case
 
 To create a DDP module, you must first set up process groups properly. More details can
 be found in
-`Writing Distributed Applications with PyTorch <https://pytorch.org/tutorials/intermediate/dist_tuto.html>`__.
+`PyTorch Distributed Overview <../beginner/dist_overview.html>`__.
 
 .. code:: python
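
Note for reviewers unfamiliar with this part of the tutorial: the edited paragraphs describe
setting up a process group before wrapping a model in DDP. Below is a minimal sketch of that
pattern, assuming a CPU-only "gloo" backend, an arbitrary rendezvous port, and a toy
``nn.Linear`` model; these are illustrative choices, not content taken from the patched file.

.. code:: python

    import os

    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP


    def setup(rank, world_size):
        # Rendezvous info for the default process group; the port is arbitrary.
        os.environ["MASTER_ADDR"] = "localhost"
        os.environ["MASTER_PORT"] = "29500"
        # "gloo" keeps the sketch runnable on CPU-only machines.
        dist.init_process_group("gloo", rank=rank, world_size=world_size)


    def demo_basic(rank, world_size):
        setup(rank, world_size)
        # Each process holds its own model replica; DDP synchronizes gradients
        # across replicas via torch.distributed collectives during backward().
        model = nn.Linear(10, 5)
        ddp_model = DDP(model)

        optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
        loss = ddp_model(torch.randn(20, 10)).sum()
        loss.backward()
        optimizer.step()

        dist.destroy_process_group()


    if __name__ == "__main__":
        world_size = 2
        mp.spawn(demo_basic, args=(world_size,), nprocs=world_size, join=True)

The linked PyTorch Distributed Overview and torch.distributed documentation cover backend
choice (for example, NCCL for multi-GPU training) and the other initialization methods.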