Generative AI
for
Code
From GitHub Copilot to Code Llama — AI models that understand context, predict intent, and write code alongside you in real time.
Meet the
leading tools
A new generation of AI models trained on billions of lines of code — transforming how developers write, review, and ship software.
GitHub Copilot: the most widely adopted AI coding assistant. Trained on public GitHub repos, it suggests entire functions, writes tests, and explains code inline — directly in your editor.
Code Llama: a family of open models (7B–70B) built on Llama 2, fine-tuned for code generation, infilling, and instruction following. Runs locally — full privacy, no API costs.
Amazon Q Developer: formerly CodeWhisperer. Deeply integrated with the AWS ecosystem — autocompletes code, scans for vulnerabilities, and suggests IaC patterns for cloud-native apps.
Enterprise-ready AI completion with on-prem deployment options. Learns from your team’s codebase to provide context-aware suggestions that match your style and patterns.
Cursor: an AI-native code editor forked from VS Code. Chat with your codebase, apply multi-file edits with one prompt, and reason over entire repos using GPT-4 or Claude.
Replit Ghostwriter: an agentic coding assistant running directly in the browser. Write a prompt, and Ghostwriter plans, codes, debugs, and deploys a working app — end-to-end, no setup required.
How generative
code AI works
A transformer-based model trained on massive code corpora learns to predict tokens — subword chunks of keywords, identifiers, and symbols — in sequence, producing coherent programs.
Models like Code Llama ingest hundreds of gigabytes of public source code from GitHub, Stack Overflow, and documentation. The model learns syntax, idioms, API usage, and common patterns across dozens of languages through next-token prediction.
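To make "next-token prediction" concrete, here is a deliberately tiny sketch: a bigram frequency table over whitespace-split code tokens, standing in for a transformer. Real models use learned subword vocabularies and attention rather than counts; everything below is purely illustrative.

```python
from collections import Counter, defaultdict

# Toy "training corpus": one line of code, pre-split into tokens.
corpus = "def add ( a , b ) : return a + b".split()

# Count which token follows each token (a bigram model).
follow = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follow[cur][nxt] += 1

def predict_next(token: str) -> str:
    """Greedily return the most frequent successor seen in training."""
    return follow[token].most_common(1)[0][0]

print(predict_next("def"))     # "add" is the only continuation seen
print(predict_next("return"))  # "a" is the only continuation seen
```

A real model does the same thing in spirit, but over billions of lines and with probabilities conditioned on the entire preceding context, not just the previous token.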
At inference time, the model receives your open file as context — cursor position, surrounding code, comments, and imports. Fill-in-the-Middle (FIM) training lets models complete code in the middle of existing blocks, not just at the end.
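Infilling-capable models receive the code around the cursor as a structured prompt. Below is a minimal sketch of the sentinel-token format Code Llama documents for FIM (`<PRE>`, `<SUF>`, `<MID>`); other models use different sentinel tokens, and exact whitespace handling varies by tokenizer.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt: the model generates the
    missing middle span after the <MID> token."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Everything before the cursor is the prefix; everything after is the suffix.
prompt = build_fim_prompt(
    prefix="def is_even(n):\n    return ",
    suffix="\n\nprint(is_even(4))",
)
print(prompt)
```

The editor's job is simply to slice the buffer at the cursor into these two spans; the model then fills the hole so that prefix + completion + suffix forms valid code.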
To make models follow natural language instructions (“write a unit test for this function”), they are fine-tuned with human feedback (RLHF) and instruction datasets. This bridges the gap between autocomplete and interactive chat-based coding agents.
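One supervised fine-tuning record might look like the sketch below: an (instruction, input, output) triple that teaches the model to respond to natural language requests. The field names here are assumptions for illustration; real instruction datasets use varying schemas.

```python
import json

# Hypothetical shape of a single instruction-tuning record.
# The model is trained to emit "output" given "instruction" + "input".
record = {
    "instruction": "Write a unit test for this function.",
    "input": "def square(x):\n    return x * x",
    "output": (
        "def test_square():\n"
        "    assert square(3) == 9\n"
        "    assert square(-2) == 4\n"
    ),
}
print(json.dumps(record, indent=2))
```

Thousands of such pairs, plus preference data for RLHF, are what turn a raw autocomplete model into one that can hold a conversation about code.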
Language Server Protocol (LSP) extensions stream model outputs directly into your editor as ghost text. Accepted suggestions become part of your codebase — and the loop continues with every keystroke and save.
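The round trip behind ghost text can be sketched as a JSON-RPC exchange, loosely modeled on LSP's `textDocument/inlineCompletion` request (a recent addition to the protocol). Field names and capabilities below are illustrative, not a normative spec, and vary by client and server.

```python
import json

# Editor -> server: "what should appear at this cursor position?"
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/inlineCompletion",
    "params": {
        "textDocument": {"uri": "file:///project/util.py"},
        "position": {"line": 12, "character": 8},  # cursor location
    },
}

# Server -> editor: candidate completions to render as dim ghost text.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "items": [
            # Pressing Tab inserts insertText into the buffer.
            {"insertText": "return [x * 2 for x in items]"}
        ]
    },
}

print(json.dumps(request, indent=2))
```

Every keystroke can trigger a fresh request, which is why these servers stream and aggressively cancel stale completions.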
AI completion
in action
A developer types a function signature and a comment. The AI generates the full implementation, shown in the completed function below.
# Fetch paginated results from a REST endpoint and collect all pages
import requests
from typing import Any, Generator

def fetch_all_pages(
    url: str,
    params: dict | None = None,
    page_key: str = "page",
    results_key: str = "results",
) -> Generator[Any, None, None]:
    """✦ Copilot suggestion — press Tab to accept"""
    page = 1
    while True:
        response = requests.get(url, params={**(params or {}), page_key: page})
        response.raise_for_status()
        data = response.json()
        items = data.get(results_key, [])
        if not items:
            break
        yield from items
        page += 1

