torch-pipeline-parallelism - Letta Skills
Description
This skill provides guidance for implementing PyTorch pipeline parallelism for distributed training of large language models. Use it when implementing pipeline-parallel training loops, partitioning transformer models across GPUs, or working with AFAB (All-Forward-All-Backward) scheduling patterns. The skill covers model partitioning, inter-rank communication, gradient flow management, and common pitfalls in distributed training implementations.
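To make the AFAB pattern concrete, here is a minimal single-process sketch of the scheduling and gradient flow it describes: two "stages" each hold a scalar weight, microbatch activations flow forward through a list standing in for the inter-rank send/recv channel, and gradients flow back in reverse microbatch order. All names here are illustrative assumptions, not part of the skill's or PyTorch's API; a real implementation would use torch.distributed point-to-point communication between ranks.

```python
# Hypothetical sketch of one AFAB (All-Forward-All-Backward) pipeline step.
# Stage 0 computes y = w1 * x; stage 1 computes z = w2 * y; the loss is the
# sum of stage-1 outputs over all microbatches.

def afab_step(w1, w2, microbatches):
    acts = []                  # stage-0 -> stage-1 "channel" (stands in for send/recv)
    saved_x, saved_y = [], []  # activations stashed for the backward phase
    # --- All-forward phase: every microbatch runs forward before any backward.
    for x in microbatches:
        saved_x.append(x)
        acts.append(w1 * x)    # stage-0 forward, activation sent downstream
    outs = []
    for y in acts:
        saved_y.append(y)
        outs.append(w2 * y)    # stage-1 forward
    loss = sum(outs)
    # --- All-backward phase: reverse microbatch order, accumulate gradients.
    g_w1 = g_w2 = 0.0
    for m in reversed(range(len(microbatches))):
        dz = 1.0               # dloss/dz for a plain sum loss
        g_w2 += dz * saved_y[m]   # stage-1 weight gradient
        dy = dz * w2              # gradient "sent back" to stage 0
        g_w1 += dy * saved_x[m]   # stage-0 weight gradient, accumulated
    return loss, g_w1, g_w2

loss, g_w1, g_w2 = afab_step(2.0, 3.0, [1.0, 2.0])
# loss = 3*2*(1+2) = 18; dL/dw1 = 3*(1+2) = 9; dL/dw2 = 2*(1+2) = 6
```

The key pitfall the sketch illustrates: each stage must keep its per-microbatch activations alive until the matching backward arrives, which is why AFAB's memory footprint grows with the number of in-flight microbatches.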
Type
Skill
Ecosystem
Cross-platform
Trust Score
86%