Setting Up Your ML Environment
A complete guide to installing Python, PyTorch, TensorFlow, and Hugging Face — so you can start building models from day one.
Python is the lingua franca of machine learning. Always use a virtual environment to keep your projects isolated and reproducible.
- Download Python 3.10 or newer from python.org — check python3 --version to confirm.
- Create a virtual environment to sandbox your dependencies.
- Activate it and install your packages inside it.
```shell
# Create & activate a virtual environment
python3 -m venv ml_env
source ml_env/bin/activate  # macOS / Linux
# ml_env\Scripts\activate   # Windows

# Upgrade pip
pip install --upgrade pip
```
Tip: use conda (via Miniconda) if you need tighter control over native dependencies like CUDA libraries — it handles binary packages elegantly.
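As a sketch, a conda-based setup might look like the following (the environment name and pinned versions are just examples):

```shell
# Create an isolated env with a pinned Python (names are illustrative)
conda create -n ml_env python=3.11
conda activate ml_env

# conda-forge ships prebuilt native packages, e.g. CUDA components
conda install -c conda-forge cudatoolkit
```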
PyTorch is beloved for its intuitive, Pythonic API and dynamic computation graph — perfect for research and production alike.
```shell
# CPU-only (simplest install)
pip install torch torchvision torchaudio

# CUDA 12.1 (NVIDIA GPU acceleration)
pip install torch torchvision torchaudio \
  --index-url https://download.pytorch.org/whl/cu121
```
```python
import torch

# Check install & GPU availability
print(f"PyTorch {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")

# Quick tensor test
x = torch.rand(3, 3)
print(x)
```
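The dynamic computation graph mentioned above means autograd records ordinary Python control flow as it executes. A minimal sketch:

```python
import torch

# The graph is built on the fly, so plain Python branching just works
x = torch.tensor(2.0, requires_grad=True)
y = x * x if x > 0 else -x  # branch chosen at runtime
y.backward()
print(x.grad)  # d(x^2)/dx at x=2 → tensor(4.)
```

Because the branch is evaluated per call, a different input could take the `-x` path and autograd would differentiate that path instead.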
TensorFlow excels at production deployments — especially with TF Serving, TFLite for mobile, and its mature Keras high-level API.
```shell
# Install TensorFlow (includes GPU support automatically)
pip install tensorflow

# Apple Silicon users
pip install tensorflow-macos tensorflow-metal
```
```python
import tensorflow as tf

print(f"TensorFlow {tf.__version__}")
print("GPUs:", tf.config.list_physical_devices('GPU'))

# Build a quick model with Keras
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
```
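To sanity-check that the model actually runs, push a random batch through it; the input size of 784 here is an assumed example (a flattened 28×28 image), not something the install requires:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),  # assumed input size for illustration
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Forward pass on one random sample; softmax output sums to 1
probs = model(np.random.rand(1, 784).astype("float32"))
print(probs.shape)  # (1, 10)
```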
Tip: pin python==3.11 if you hit compatibility issues.
Hugging Face gives you instant access to thousands of pre-trained models — BERT, GPT-2, Llama, Whisper and more — with just a few lines of code.
```shell
# Core libraries
pip install transformers datasets accelerate

# Tokenizers & evaluation
pip install tokenizers evaluate

# Login to Hugging Face Hub (for gated models)
huggingface-cli login
```
```python
from transformers import pipeline

# Zero-shot text classification in 3 lines
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli"
)
result = classifier(
    "This tutorial covers GPU setup",
    candidate_labels=["tech", "cooking", "sports"]
)
print(result['labels'][0])  # → "tech"
```
Your environment is ready. Use pip freeze > requirements.txt to snapshot dependencies and share reproducible setups with your team.
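The snapshot-and-restore workflow looks like this (the file name is the conventional one, but any name works):

```shell
# Record the exact versions of everything in the active environment
pip freeze > requirements.txt

# A teammate recreates the same environment from the file
pip install -r requirements.txt
```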

