Python & Essential AI Frameworks

A comprehensive developer reference with real code examples for building intelligent applications

8 frameworks · 30+ code examples · #1 AI language · Python 3.12
Python Basics

Core Python concepts every AI developer must know — from data structures to OOP and functional programming.

Variables & Data Types
PYTHON
# Primitive types
name = "Claude"         # str
score = 98.6            # float
count = 42              # int
active = True           # bool

# Collections
nums = [1, 2, 3]        # list
pair = (10, 20)          # tuple
info = {"k": "v"}       # dict
tags = {"a", "b"}       # set

# Type hints (modern Python)
def greet(name: str) -> str:
    return f"Hello, {name}!"
List Comprehensions
PYTHON
# Basic comprehension
squares = [x**2 for x in range(10)]

# With condition
evens = [x for x in range(20) if x % 2 == 0]

# Dict comprehension
word_len = {w: len(w) for w in "AI ML DL".split()}

# Generator (memory efficient)
gen = (x**2 for x in range(1_000_000))
next(gen)  # 0 — lazy evaluation
Classes & OOP
PYTHON
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    layers: int
    trained: bool = False

    def train(self, epochs: int) -> None:
        print(f"Training {self.name}...")
        self.trained = True

    def __repr__(self):
        return f"Model({self.name}, {self.layers}L)"

gpt = Model("GPT", 96)
gpt.train(epochs=100)
Decorators & Context Managers
PYTHON
import time, functools

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        result = func(*args, **kwargs)
        dt = time.perf_counter() - t0
        print(f"{func.__name__}: {dt:.3f}s")
        return result
    return wrapper

@timer
def train_model():
    time.sleep(0.1)  # simulate work
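
The card's title also covers context managers; here is a minimal sketch of the same timing idea written as a context manager, using contextlib from the standard library:

PYTHON
from contextlib import contextmanager
import time

@contextmanager
def timing(label: str):
    t0 = time.perf_counter()
    try:
        yield           # run the with-block body
    finally:
        print(f"{label}: {time.perf_counter() - t0:.3f}s")

with timing("train"):
    time.sleep(0.1)  # simulate work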
NumPy

The backbone of scientific computing in Python. Fast N-dimensional arrays, broadcasting, and linear algebra.

🔢
Array Operations
Core API
Create and manipulate N-dimensional arrays with vectorized math — no Python loops needed.
NUMPY
import numpy as np

# Create arrays
a = np.array([[1,2,3],[4,5,6]])  # shape (2,3)
z = np.zeros((3,3))
r = np.random.randn(100, 10)

# Broadcasting
b = a * 2           # element-wise
c = a @ a.T         # matrix multiply

# Aggregations
a.mean(axis=0)      # column means
a.std()             # std deviation
np.linalg.norm(a)   # Frobenius norm
Linear Algebra · Signal Processing · Statistics
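
The broadcasting line above only scales by a scalar; a quick sketch of shape-based broadcasting, where NumPy stretches size-1 axes so differently shaped arrays combine without copies:

NUMPY
import numpy as np

col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4).reshape(1, 4)   # shape (1, 4)
grid = col + row                   # broadcasts to (3, 4)

# Standardize columns: (100, 10) minus (10,) broadcasts row-wise
r = np.random.randn(100, 10)
r_norm = (r - r.mean(axis=0)) / r.std(axis=0)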
📐
Advanced Indexing
Slicing
Use NumPy’s slicing, boolean masking, and fancy indexing to select and transform data efficiently.
NUMPY
X = np.arange(24).reshape(4, 6)

# Slicing
X[1:3, 2:5]         # rows 1-2, cols 2-4
X[:, ::2]           # every other column

# Boolean mask
mask = X > 10
X[mask] = 0          # zero out values >10

# Fancy indexing
idx = [0, 2, 3]
X[idx]               # rows 0, 2, 3

# Reshape & stack
np.vstack([X, X])   # (8, 6)
X.flatten()         # 1D copy (ravel() gives a view when possible)
Data Preprocessing · Feature Extraction
Pandas

The essential library for data wrangling, cleaning, and analysis with DataFrames and Series.

🐼
DataFrame Fundamentals
Core
Load, inspect, and filter tabular data with intuitive, expressive APIs.
PANDAS
import pandas as pd

# Load data
df = pd.read_csv("data.csv")
df.head()             # first 5 rows
df.info()             # dtypes & nulls
df.describe()         # stats summary

# Select & filter
df["age"]            # Series
df[["name","age"]]  # DataFrame
df[df["age"] > 30]  # boolean filter

# Chained operations
result = (
    df.dropna()
      .query("score > 0.8")
      .sort_values("score", ascending=False)
      .head(10)
)
📊
GroupBy & Aggregation
Analysis
Split-apply-combine operations for powerful data aggregation and transformation.
PANDAS
# GroupBy aggregations
stats = df.groupby("category").agg(
    mean_score=("score", "mean"),
    count=("id", "count"),
    max_val=("value", "max")
)

# Pivot table
pivot = df.pivot_table(
    values="sales",
    index="region",
    columns="quarter",
    aggfunc="sum"
)

# Min-max normalize (vectorized; Series.apply runs
# element-wise, so x.min() would fail on a scalar)
s = df["score"]
df["norm"] = (s - s.min()) / (s.max() - s.min())
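
Joins are the other half of day-to-day wrangling; a minimal sketch with hypothetical users/orders frames:

PANDAS
import pandas as pd

users = pd.DataFrame({"user_id": [1, 2], "name": ["Ada", "Bo"]})
orders = pd.DataFrame({"user_id": [1, 1, 2], "total": [9.5, 3.0, 7.2]})

# SQL-style join on a shared key
merged = users.merge(orders, on="user_id", how="left")

# Fill gaps left by unmatched rows
merged["total"] = merged["total"].fillna(0)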
Scikit-learn

The gold standard ML library for classical machine learning — consistent API, powerful pipelines, and battle-tested algorithms.

🤖
Training & Evaluation
ML Pipeline
Fit models, evaluate with cross-validation, and compare algorithms with a consistent scikit-learn API.
SCIKIT-LEARN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import classification_report
from sklearn.preprocessing import StandardScaler

# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Preprocess + train
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test  = scaler.transform(X_test)

clf = RandomForestClassifier(n_estimators=200)
clf.fit(X_train, y_train)

# Evaluate
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))

# Cross-validate on the training set
scores = cross_val_score(clf, X_train, y_train, cv=5)
print(f"CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
🔧
Pipelines & GridSearch
Optimization
Chain preprocessing and modeling steps. Use GridSearchCV or RandomizedSearchCV for hyperparameter tuning.
SCIKIT-LEARN
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", RandomForestClassifier())
])

param_grid = {
    "clf__n_estimators": [50, 100, 200],
    "clf__max_depth": [None, 5, 10],
}

search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Best score:", search.best_score_)
TensorFlow / Keras

Google’s production-grade deep learning framework with the Keras high-level API for building neural networks.

🧠
Build a Neural Network
Keras API
Define, compile, and train neural networks using Keras’s Sequential or Functional API.
TENSORFLOW / KERAS
import tensorflow as tf
from tensorflow import keras

# Build model
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])

# Compile
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"]
)

# Train
history = model.fit(
    X_train, y_train,
    epochs=20, batch_size=64,
    validation_split=0.2
)
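
Training is only half the story; a minimal sketch of evaluating and saving the model, assuming X_test/y_test come from an earlier split:

TENSORFLOW / KERAS
# Evaluate on held-out data
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {acc:.3f}")

# Save and reload in the native Keras format
model.save("mlp.keras")
model = keras.models.load_model("mlp.keras")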
🖼️
CNN for Image Classification
Computer Vision
Convolutional neural networks for visual tasks — image classification, object detection, segmentation.
TENSORFLOW / KERAS
cnn = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(32, (3, 3), activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, (3, 3), activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax")
])

# Transfer learning: frozen backbone + trainable head
base = keras.applications.MobileNetV3Small(
    include_top=False, weights="imagenet"
)
base.trainable = False  # freeze backbone
clf_head = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax")
])
PyTorch

Meta’s dynamic computation graph framework — the de facto standard for AI research and modern deep learning.

🔥
Custom nn.Module
Neural Nets
Define fully custom architectures by subclassing nn.Module. Control every computation in the forward pass.
PYTORCH
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = MLP(784, 256, 10).to(device)
optim = torch.optim.AdamW(model.parameters(), lr=3e-4)
🔄
Training Loop
Autograd
PyTorch’s explicit training loop with autograd — full control over forward, backward, and optimizer steps.
PYTORCH
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    model.train()
    for X_batch, y_batch in train_loader:
        X_batch = X_batch.to(device)
        y_batch = y_batch.to(device)

        optim.zero_grad()
        preds = model(X_batch)
        loss = criterion(preds, y_batch)
        loss.backward()
        optim.step()

    # Eval loop
    model.eval()
    with torch.no_grad():
        acc = evaluate(model, val_loader)
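
The loop calls an evaluate helper that the snippet leaves undefined; a minimal sketch for a classification head, reusing the device variable from above:

PYTORCH
def evaluate(model, loader):
    correct = total = 0
    for X_batch, y_batch in loader:
        X_batch = X_batch.to(device)
        y_batch = y_batch.to(device)
        preds = model(X_batch).argmax(dim=1)
        correct += (preds == y_batch).sum().item()
        total += y_batch.size(0)
    return correct / total  # accuracy in [0, 1]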
🤗 HuggingFace Transformers

The model hub and library powering modern NLP — BERT, GPT, T5, LLaMA, and thousands of pretrained models.

📝
Text Pipeline
NLP
Use any pretrained model for classification, generation, summarization, and translation in 3 lines.
TRANSFORMERS
from transformers import pipeline

# Sentiment analysis
clf = pipeline("sentiment-analysis")
clf("I love building AI apps!")
# [{'label': 'POSITIVE', 'score': 0.999}]

# Text generation
gen = pipeline("text-generation",
               model="mistralai/Mistral-7B-v0.1")
out = gen("Once upon a time", max_new_tokens=100)

# Zero-shot classification
zs = pipeline("zero-shot-classification")
zs("Python is great for AI",
   candidate_labels=["coding", "sports", "science"])
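
Under the hood, each pipeline wraps a tokenizer and a model; a minimal sketch of the same sentiment step done manually, using the checkpoint the sentiment pipeline loads by default:

TRANSFORMERS
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
mdl = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)

inputs = tok("I love building AI apps!", return_tensors="pt")
with torch.no_grad():
    logits = mdl(**inputs).logits
probs = logits.softmax(dim=-1)  # class probabilities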
🎯
Fine-tuning with Trainer
Training
Fine-tune any HuggingFace model on custom datasets using the high-level Trainer API with minimal boilerplate.
TRANSFORMERS
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer, Trainer, TrainingArguments
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    eval_strategy="epoch"  # "evaluation_strategy" in older transformers
)

trainer = Trainer(
    model=model, args=args,
    train_dataset=train_ds, eval_dataset=val_ds
)
trainer.train()
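
The Trainer assumes train_ds and val_ds are already tokenized; a minimal sketch of preparing them with the datasets library (IMDB is just an illustrative dataset):

TRANSFORMERS
from datasets import load_dataset

raw = load_dataset("imdb")
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True
)
train_ds = tokenized["train"]
val_ds = tokenized["test"]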
LangChain

The framework for building LLM-powered applications — chains, agents, RAG pipelines, and memory management.

⛓️
RAG Pipeline
Retrieval-Augmented
Build a Retrieval-Augmented Generation system that queries your documents with an LLM.
LANGCHAIN
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

# Build vector store from docs
vectorstore = FAISS.from_documents(
    docs, OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template("""
Answer using context: {context}
Question: {input}""")

# Stuff retrieved docs into the prompt, then query the LLM
combine_docs = create_stuff_documents_chain(
    ChatOpenAI(model="gpt-4o"), prompt
)
chain = create_retrieval_chain(retriever, combine_docs)
chain.invoke({"input": "What is RAG?"})
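
The chain assumes docs already exists; a minimal sketch of loading and splitting a file into chunks (the path is illustrative):

LANGCHAIN
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

raw_docs = TextLoader("notes.txt").load()
splitter = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
)
docs = splitter.split_documents(raw_docs)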
🤖
Agent with Tools
Agents
Create autonomous LLM agents that can use tools like web search, code execution, and custom APIs.
LANGCHAIN
from langchain import hub
from langchain.agents import create_react_agent, AgentExecutor
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_experimental.tools import PythonREPLTool

tools = [DuckDuckGoSearchRun(), PythonREPLTool()]
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Pull the standard ReAct prompt from the LangChain hub
prompt = hub.pull("hwchase17/react")

agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent, tools=tools, verbose=True
)

executor.invoke({
    "input": "Search for latest AI papers and summarize them"
})
Framework Comparison

Choose the right tool for your use case — from data preprocessing to deploying production AI systems.

| Framework    | Primary Use         | Best For                                | Ecosystem                |
|--------------|---------------------|-----------------------------------------|--------------------------|
| NumPy        | Numerical computing | Array math, linear algebra              | SciPy, Matplotlib        |
| Pandas       | Data analysis       | Tabular data, EDA                       | NumPy, Seaborn           |
| Scikit-learn | Classical ML        | Regression, classification, clustering  | Pandas, NumPy            |
| TensorFlow   | Deep learning       | Production CV & NLP                     | TFX, TF Serving          |
| PyTorch      | Research DL         | Custom architectures, research          | Lightning, HuggingFace   |
| HuggingFace  | Transformers        | NLP, fine-tuning LLMs                   | PyTorch, TensorFlow      |
| LangChain    | LLM apps            | RAG, agents, chatbots                   | OpenAI, Anthropic, FAISS |
Built for developers · Python 3.12 · 2025
numpy · pandas · scikit-learn · tensorflow · pytorch · transformers · langchain
