
Quick Start Guide

This guide will help you quickly create your first ML workspace and run a sample project using Trailer.dev.

Prerequisites

  1. Install Trailer.dev

    bash
    # Pull the standalone container
    docker pull ghcr.io/trailer-dev/trailer:latest
  2. Start Trailer.dev

    bash
    # Run in standalone mode
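    # --gpus all requires the NVIDIA Container Toolkit to be installed on the host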
    docker run -p 8090:8090 --gpus all ghcr.io/trailer-dev/trailer:latest
  3. Access the Web Interface

    • Open your browser and navigate to http://localhost:8090
    • Log in with your credentials
    • If you prefer to verify the container is up without a browser, see the reachability check sketched after this list
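
If you want to confirm the container is up before moving on, you can poke the web interface from a short script. The sketch below is a minimal check, assuming the interface answers ordinary HTTP requests at http://localhost:8090 once the container has finished starting; the file name check_server.py is just illustrative, and you should adjust the URL if you mapped a different port.

    python
    # check_server.py - quick reachability check for the Trailer.dev web interface
    import urllib.error
    import urllib.request

    URL = "http://localhost:8090"

    try:
        with urllib.request.urlopen(URL, timeout=5) as response:
            print(f"Trailer.dev responded at {URL} (HTTP {response.status})")
    except urllib.error.HTTPError as err:
        # The server answered, just not with a 2xx status (e.g. a login page)
        print(f"Trailer.dev is reachable at {URL} (HTTP {err.code})")
    except urllib.error.URLError as err:
        print(f"Could not reach {URL}: {err.reason}. Is the container running?")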

Create Your First Workspace

  1. Create a New Workspace

    • Click "New Workspace" in the web interface
    • Choose a name for your workspace
    • Select "Python ML" as the template
  2. Configure Python Environment

    • Install the packages your project needs inside the workspace (the sample below uses torch and transformers)
  3. Set Up GPU Support

    • Confirm the workspace can see the GPUs passed to the container with --gpus all; a combined environment and GPU check is sketched after this list
  4. Run a Sample Project (a GPU inference extension of this sample is sketched after this list)

    python
    # sample.py
    import torch
    from transformers import AutoModel, AutoTokenizer
    
    # Load model and tokenizer
    model = AutoModel.from_pretrained("bert-base-uncased")
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    
    # Test GPU availability
    print(f"GPU available: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        print(f"GPU device: {torch.cuda.get_device_name(0)}")
  5. Save and Share Workspace
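
For steps 2 and 3, the exact configuration screens depend on your Trailer.dev setup, but the end state is easy to verify from inside the workspace. The sketch below is a minimal check, assuming the "Python ML" template provides (or lets you install) torch and transformers; the file name check_env.py is just illustrative. It reports the interpreter and library versions and whether the GPUs passed to the container with --gpus all are visible.

    python
    # check_env.py - verify the workspace's Python environment and GPU support
    import sys

    import torch
    import transformers

    print(f"Python:       {sys.version.split()[0]}")
    print(f"torch:        {torch.__version__} (built for CUDA {torch.version.cuda})")
    print(f"transformers: {transformers.__version__}")

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
    else:
        print("No GPU visible - check the --gpus flag and the workspace's GPU settings.")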

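As a follow-up to the sample in step 4, the sketch below moves the same bert-base-uncased model onto the GPU when one is available and runs a single forward pass, so you can confirm that inference actually executes on the device rather than only that CUDA is detected. The file name sample_gpu.py and the example sentence are just illustrative.

    python
    # sample_gpu.py - run one forward pass of the step 4 model on the GPU if present
    import torch
    from transformers import AutoModel, AutoTokenizer

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Load the same model and tokenizer as in step 4, placing the model on the device
    model = AutoModel.from_pretrained("bert-base-uncased").to(device)
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    # Tokenize a sentence and move the input tensors to the same device as the model
    inputs = tokenizer("Hello from my first Trailer.dev workspace!", return_tensors="pt")
    inputs = {name: tensor.to(device) for name, tensor in inputs.items()}

    with torch.no_grad():
        outputs = model(**inputs)

    print(f"Ran on: {device}")
    print(f"Last hidden state shape: {tuple(outputs.last_hidden_state.shape)}")
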
Next Steps

  1. Explore Features

  2. Get Support
