Welcome! This guide gets you from zero to a working AbstractFramework setup in minutes.
AbstractFramework is modular — you can install the full framework in one command or pick specific packages. Whether you want a simple LLM API, a durable coding assistant, a visual workflow editor, voice-enabled agents, or a full production gateway — there's a path for you.
New here? Start with Path 0 to install everything, then jump to the path that matches what you want to build.
| Your Goal | Start Here | What You'll Use |
|---|---|---|
| Install the full, pinned framework release | Path 0 | abstractframework==0.1.2 |
| Call LLMs with a unified API | Path 1 | abstractcore |
| Build a local coding assistant | Path 2 | abstractcode |
| Create durable workflows | Path 3 | abstractruntime |
| Deploy a remote run gateway | Path 4 | abstractgateway + abstractobserver |
| Use agent patterns (ReAct, etc.) | Path 5 | abstractagent |
| Add voice/audio to AbstractCore | Path 6 | abstractcore + abstractvoice (plugin) |
| Add image generation to AbstractCore | Path 7 | abstractcore + abstractvision (plugin) |
| Add music generation to AbstractCore | Path 7a | abstractcore + abstractmusic (plugin) |
| Build a knowledge graph | Path 8 | abstractmemory + abstractsemantics |
| macOS menu bar assistant | Path 9 | abstractassistant |
| Visual workflow editor (browser) | Path 10 | @abstractframework/flow |
| Browser-based coding assistant | Path 11 | @abstractframework/code |
| Create a specialized agent | Path 12 | abstractflow + clients |
| Integrate external tools via MCP | Path 13 | abstractcore |
| Use structured output (Pydantic) | Path 14 | abstractcore |
| Run an OpenAI-compatible API server | Path 15 | abstractcore[server] |
Install the pinned global release profile in one command:
```bash
pip install "abstractframework==0.1.2"
```

This installs all framework Python packages together, including:
| Package | Version |
|---|---|
| abstractcore | 2.12.0 |
| abstractruntime | 0.4.2 |
| abstractagent | 0.3.1 |
| abstractflow | 0.3.7 (editor) |
| abstractcode | 0.3.6 |
| abstractgateway | 0.1.0 |
| abstractmemory | 0.0.2 |
| abstractsemantics | 0.0.2 |
| abstractvoice | 0.6.3 |
| abstractvision | 0.2.1 |
| abstractassistant | 0.4.2 |
`abstractcore` is installed with the `openai`, `anthropic`, `huggingface`, `embeddings`, `tokens`, `tools`, `media`, `compression`, and `server` extras.
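If you install `abstractcore` on its own (Path 1), you can opt into just a subset of those extras instead — any combination of the names above. A hypothetical selection:

```shell
# Example: only the OpenAI provider plus tool support
pip install "abstractcore[openai,tools]"
```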
Use this path when you want a fully functional setup with minimal decision overhead.
After install, configure and verify your setup:
```bash
# Interactive guided setup (model, base URL, vision, API keys, audio, video, embeddings, logging)
abstractcore --config

# Check readiness + download missing models
abstractcore --install
```

Prerequisites:

- Python: 3.10 or newer
- Node.js: 18+ (only for browser UIs)
- An LLM Backend (pick one):
- Local (recommended): Ollama, LM Studio, vLLM, llama.cpp, LocalAI
- Cloud: OpenAI, Anthropic, Google, Groq, Together AI, Mistral
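Which backend is active can be inferred from the environment variables used throughout this guide. A minimal stdlib-only sketch (the model names are just this guide's examples; adjust to whatever you pulled or configured):

```python
import os
import sys

# Prerequisite check: Python 3.10 or newer
ok = sys.version_info >= (3, 10)
print("Python OK:", ok)

def pick_provider() -> tuple[str, str]:
    """Pick a provider/model pair from whichever backend env vars are set.

    Model names here are this guide's examples, not requirements.
    """
    if os.environ.get("ANTHROPIC_API_KEY"):
        return "anthropic", "claude-3-5-sonnet-latest"
    if os.environ.get("OPENAI_API_KEY"):
        return "openai", "gpt-4o"
    # Default to a local Ollama instance (uses OLLAMA_HOST if set)
    return "ollama", "qwen3:4b-instruct"

provider, model = pick_provider()
print(f"Using {provider}/{model}")
```

The resulting pair can be passed straight to `create_llm(provider, model=model)` as shown in Path 1.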
The simplest path. Use AbstractCore as a unified LLM client.
```bash
pip install abstractcore
```

Local with Ollama (free, no API key):

```bash
ollama serve
ollama pull qwen3:4b-instruct
export OLLAMA_HOST="http://localhost:11434"
```

Or with LM Studio (OpenAI-compatible):
```bash
export OPENAI_BASE_URL="http://127.0.0.1:1234/v1"
export OPENAI_API_KEY="local"
```

Or with cloud APIs:
```bash
export OPENAI_API_KEY="sk-..."
# or
export ANTHROPIC_API_KEY="sk-ant-..."
```

```python
from abstractcore import create_llm

# Local
llm = create_llm("ollama", model="qwen3:4b-instruct")

# Or cloud
# llm = create_llm("openai", model="gpt-4o")
# llm = create_llm("anthropic", model="claude-3-5-sonnet-latest")

response = llm.generate("What is durable execution?")
print(response.content)
```

Next: Add tool calling or structured output.
Get a durable coding assistant running in your terminal.
```bash
pip install abstractcode
```

```bash
ollama serve
ollama pull qwen3:1.7b-q4_K_M
export OLLAMA_HOST="http://localhost:11434"
```

```bash
abstractcode --provider ollama --model qwen3:1.7b-q4_K_M
```

- Type `/help` for all commands
- Mention files with `@path/to/file` in your prompts
- Tool execution requires approval by default (toggle with `/auto-accept`)

Durability Note: Sessions persist across restarts — close and reopen, your full context is preserved (conversation history, tool calls, state). To start fresh, type `/clear`.
Next: See AbstractCode docs.
Build workflows that survive crashes and can pause/resume.
```bash
pip install abstractruntime

# Add LLM integration:
pip install "abstractruntime[abstractcore]"
```

Core concepts:

- Run: A durable workflow instance
- Ledger: Append-only log of everything that happened
- Effect: A request for something to happen (LLM call, tool call, timer)
- Wait: An explicit pause point (state is checkpointed)
```python
from abstractruntime import (
    Effect, EffectType, Runtime, StepPlan, WorkflowSpec,
    InMemoryLedgerStore, InMemoryRunStore,
)

# Define workflow nodes
def ask(run, ctx):
    return StepPlan(
        node_id="ask",
        effect=Effect(
            type=EffectType.ASK_USER,
            payload={"prompt": "What would you like to do?"},
            result_key="user_input",
        ),
        next_node="done",
    )

def done(run, ctx):
    return StepPlan(node_id="done", complete_output={"answer": run.vars.get("user_input")})

# Create workflow and runtime
wf = WorkflowSpec(workflow_id="demo", entry_node="ask", nodes={"ask": ask, "done": done})
rt = Runtime(run_store=InMemoryRunStore(), ledger_store=InMemoryLedgerStore())

# Start and tick
run_id = rt.start(workflow=wf)
state = rt.tick(workflow=wf, run_id=run_id)
print(state.status.value)  # "waiting"

# Resume with user input
state = rt.resume(workflow=wf, run_id=run_id, wait_key=state.waiting.wait_key, payload={"text": "Hello!"})
print(state.status.value)  # "completed"
```

Next: See AbstractRuntime docs.
Deploy a remote control plane and observe runs in your browser.
```bash
pip install "abstractgateway"

# If your workflows use LLM/tools:
pip install "abstractruntime[abstractcore]>=0.4.0"
```

```bash
# Required: authentication token
export ABSTRACTGATEWAY_AUTH_TOKEN="$(python -c 'import secrets; print(secrets.token_urlsafe(32))')"

# Required: CORS for browser access
export ABSTRACTGATEWAY_ALLOWED_ORIGINS="http://localhost:*,http://127.0.0.1:*"

# Workflow source
export ABSTRACTGATEWAY_WORKFLOW_SOURCE=bundle
export ABSTRACTGATEWAY_FLOWS_DIR="/path/to/your/bundles"
export ABSTRACTGATEWAY_DATA_DIR="$PWD/runtime/gateway"
```

```bash
abstractgateway serve --host 127.0.0.1 --port 8080
```

Verify it's running:

```bash
curl -sS "http://127.0.0.1:8080/api/health"
```

In another terminal:

```bash
npx @abstractframework/observer
```

Open http://localhost:3001 in your browser:
- Set Gateway URL to `http://127.0.0.1:8080`
- Paste your Auth Token
- Click Connect
You're now observing your runs.
Next: See AbstractGateway docs.
Use ready-made agent loops (ReAct, CodeAct, MemAct).
```bash
pip install abstractagent
```

```python
from abstractagent import create_react_agent

agent = create_react_agent(provider="ollama", model="qwen3:4b-instruct")
agent.start("List the files in the current directory")
state = agent.run_to_completion()
print(state.output["answer"])
```

Next: See AbstractAgent docs.
Add speech-to-text and text-to-speech capabilities to AbstractCore.
Note: AbstractVoice is a capability plugin for AbstractCore. Once installed, it exposes `llm.voice` (TTS) and `llm.audio` (STT) on any LLM instance, keeping AbstractCore lightweight by default.
```bash
pip install abstractcore abstractvoice
```

AbstractVoice is offline-first — prefetch models explicitly:

```bash
abstractvoice-prefetch --stt small
abstractvoice-prefetch --piper en
```

```python
from abstractcore import create_llm

llm = create_llm("ollama", model="qwen3:4b-instruct")

# Check available capabilities
print(llm.capabilities.status())

# Text-to-speech via capability
wav_bytes = llm.voice.tts("Hello from AbstractCore!", format="wav")

# Speech-to-text via capability
text = llm.audio.transcribe("audio.wav", language="en")
print(text)

# Audio in LLM requests (transcribed automatically)
response = llm.generate(
    "Summarize the key points from this call.",
    media=["meeting.wav"],
    audio_policy="speech_to_text",
)
print(response.content)
```

You can also use AbstractVoice directly without AbstractCore:
```python
from abstractvoice import VoiceManager

vm = VoiceManager()

# Text-to-speech
vm.speak("Hello! This is AbstractVoice.")

# Speech-to-text (from file)
text = vm.transcribe_file("audio.wav")
print(text)
```

To debug, run the CLI with verbose logging:

```bash
abstractvoice --verbose
```

Next: See AbstractVoice docs and AbstractCore Audio & Voice.
Add text-to-image and image-to-image capabilities to AbstractCore.
Note: AbstractVision is a capability plugin for AbstractCore. Once installed, it exposes `llm.vision` for generative image tasks, keeping AbstractCore lightweight by default.
- HuggingFace (recommended) — Local diffusion models via `diffusers`
- OpenAI-compatible APIs — Any server exposing `/v1/images/generations`
Note: Ollama and LM Studio do not currently support image generation models. Use HuggingFace for local image generation.
```bash
pip install abstractcore abstractvision
```

```python
from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")

# Check available capabilities
print(llm.capabilities.status())

# Text-to-image via capability (requires HF_TOKEN or vision_base_url config)
# png_bytes = llm.vision.t2i("a red square")
```

Configure the vision backend (choose one):

```bash
# Option 1: HuggingFace (recommended for local generation)
export HF_TOKEN="hf_..."

# Option 2: OpenAI-compatible server
export ABSTRACTVISION_BASE_URL="http://localhost:7860/v1"
```

You can also use AbstractVision directly for local image generation:
```python
from abstractvision import VisionManager, LocalAssetStore
from abstractvision.backends import HuggingFaceBackend, HuggingFaceBackendConfig

# Configure HuggingFace backend (local diffusion models)
backend = HuggingFaceBackend(
    config=HuggingFaceBackendConfig(
        model_id="stabilityai/stable-diffusion-xl-base-1.0",
        # device="mps",  # for Apple Silicon
    )
)
vm = VisionManager(backend=backend, store=LocalAssetStore())

# Generate image
result = vm.generate_image("a watercolor painting of a lighthouse")
print(result)  # {"$artifact": "...", "content_type": "image/png", ...}
```

```bash
# Using HuggingFace
abstractvision t2i --backend huggingface "a photo of a red fox"

# Using OpenAI-compatible server
abstractvision t2i --base-url http://localhost:7860/v1 "a photo of a red fox"
```

Next: See AbstractVision docs and AbstractCore Vision Capabilities.
Add text-to-music capabilities to AbstractCore.
Note: AbstractMusic is a capability plugin for AbstractCore. Once installed, it exposes `llm.music` for deterministic text-to-music generation, keeping AbstractCore lightweight by default.
```bash
pip install abstractcore abstractmusic
```

AbstractMusic generates locally in-process. The default backend is ACE-Step v1.5. If you switch to the Diffusers backend, model licenses vary by checkpoint — choose a model compatible with your intended usage.

```python
from abstractcore import create_llm

llm = create_llm(
    # Any provider/model works here. The LLM does *not* generate music audio.
    # Music generation is performed by the configured AbstractMusic backend (ACE-Step by default).
    "ollama",
    model="qwen3:4b-instruct",
    music_backend="acestep",
    music_model_id="ACE-Step/Ace-Step1.5",
)

wav_bytes = llm.music.t2m("uplifting synthwave, 120bpm, catchy chorus", format="wav", duration_s=10.0)
open("out.wav", "wb").write(wav_bytes)
```

Build a temporal, provenance-aware knowledge graph.
```bash
pip install abstractmemory
pip install abstractsemantics  # Schema registry

# Optional: persistent storage + vector search
pip install "abstractmemory[lancedb]"
```

```python
from abstractmemory import InMemoryTripleStore, TripleAssertion, TripleQuery

store = InMemoryTripleStore()

# Add knowledge
store.add([
    TripleAssertion(
        subject="Paris",
        predicate="is_capital_of",
        object="France",
        scope="session",
        owner_id="sess-1",
    )
])

# Query
hits = store.query(TripleQuery(subject="paris", scope="session", owner_id="sess-1"))
print(hits[0].object)  # "france"
```

Next: See AbstractMemory docs.
Get a menu bar AI assistant with optional voice.
```bash
pip install abstractassistant

# Or with voice support:
pip install "abstractassistant[full]"
```

```bash
# Tray mode (menu bar)
assistant tray

# Or single command
assistant run --provider ollama --model qwen3:4b-instruct --prompt "Summarize my changes"
```

Next: See AbstractAssistant docs.
Build and edit visual workflows in your browser.
```bash
npx @abstractframework/flow
```

Open http://localhost:3003 in your browser.
- Drag-and-drop workflow nodes (LLM, tools, conditionals, loops)
- Connect nodes visually
- Test workflows in real-time
- Export as `.flow` bundles for deployment
Next: See AbstractFlow docs.
Run the browser-based coding assistant.
You need a running AbstractGateway (see Path 4).
```bash
npx @abstractframework/code
```

Open http://localhost:3002 in your browser. Configure the gateway URL in the UI settings, then start coding.
Next: See AbstractCode web docs.
Create a specialized agent that runs in any client (terminal, browser, custom apps).
Instead of writing agent logic in code, you:
- Author a visual workflow with the Flow Editor
- Declare an interface contract (`abstractcode.agent.v1`)
- Run it in any compatible client — no client-specific code needed
Use cases: code reviewers, deep researchers, data analysts, custom assistants.
```bash
npx @abstractframework/flow
```

Open http://localhost:3003 and create a workflow with:
- On Flow Start node (outputs: `provider`, `model`, `prompt`)
- Your agent logic (LLM nodes, tool nodes, conditionals, loops)
- On Flow End node (inputs: `response`, `success`, `meta`)
Set `interfaces: ["abstractcode.agent.v1"]` in the workflow properties.

In the editor, export your workflow as a `.flow` bundle.
Terminal (AbstractCode):
```bash
abstractcode --workflow /path/to/my-agent.flow
```

Install for easy access:

```bash
abstractcode workflow install /path/to/my-agent.flow
abstractcode --workflow my-agent
```

Deploy to Gateway:
Copy your `.flow` bundle to `ABSTRACTGATEWAY_FLOWS_DIR`. It will appear in:
- Observer's workflow picker
- Code Web UI's workflow picker
- Gateway's `/api/gateway/bundles` discovery endpoint
Custom app:

```python
from abstractcode.workflow_agent import WorkflowAgent

agent = WorkflowAgent(flow_ref="/path/to/my-agent.flow")
state = agent.run_to_completion(prompt="Analyze this code...")
print(state.output["response"])
```

Next: See AbstractFlow docs and Interface contracts.
Discover and use tools from external MCP (Model Context Protocol) servers.
MCP is an open protocol for connecting LLMs to external tool providers. AbstractCore supports both HTTP and stdio MCP servers, letting you integrate external tool ecosystems without writing adapter code.
```bash
pip install abstractcore
```

```python
from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")

# MCP tools are discovered and presented alongside local tools
response = llm.generate(
    "Search for recent Python releases",
    mcp_servers=[{"url": "http://localhost:3000/mcp"}],
)
```

MCP tools integrate seamlessly with AbstractRuntime's durable execution: they participate in the same approval boundaries, ledger logging, and replay semantics as any other tool.
Next: See AbstractCore MCP docs.
Extract structured data from any LLM using Pydantic models.
```bash
pip install abstractcore
```

```python
from pydantic import BaseModel
from abstractcore import create_llm

class Analysis(BaseModel):
    title: str
    key_points: list[str]
    confidence: float

llm = create_llm("openai", model="gpt-4o-mini")
result = llm.generate(
    "Analyze the pros and cons of microservices architecture.",
    response_model=Analysis,
)
print(result.title)
print(result.key_points)
```

AbstractCore uses provider-aware strategies — native JSON mode where available, with automatic retry and fallback for models that need it.
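The retry loop hinges on Pydantic validation: a response only counts once it parses into the model. A conceptual sketch of that accept/retry decision (not AbstractCore's actual internals), reusing the same `Analysis` model:

```python
from pydantic import BaseModel, ValidationError

class Analysis(BaseModel):
    title: str
    key_points: list[str]
    confidence: float

# A well-formed response parses directly
good = '{"title": "Microservices", "key_points": ["independent scaling"], "confidence": 0.8}'
analysis = Analysis.model_validate_json(good)
print(analysis.title)

# A malformed response fails validation, which is what would trigger a retry
try:
    Analysis.model_validate_json('{"title": "Incomplete"}')
    outcome = "accepted"
except ValidationError:
    outcome = "retry"
print(outcome)  # "retry"
```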
Next: See AbstractCore Structured Output docs.
Turn AbstractCore into a multi-provider OpenAI-compatible API server. Route requests to any backend via `model="provider/model"`.

```bash
pip install "abstractcore[server]"
```

```bash
python -m abstractcore.server.app
```

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
resp = client.chat.completions.create(
    model="ollama/qwen3:4b-instruct",
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(resp.choices[0].message.content)
```

The server supports tool calling and media input, and optionally exposes `/v1/images/*` (via AbstractVision) and `/v1/audio/*` (via capability plugins like AbstractVoice; plus `/v1/audio/music` when AbstractMusic is installed) endpoints.
Next: See AbstractCore Server docs.
If you want to work on AbstractFramework itself (contribute, modify, debug), use the source scripts instead of PyPI install.
```bash
# Clone AbstractFramework + all 13 sibling repos into a single directory
./scripts/clone.sh

# Or into a specific directory
./scripts/clone.sh ~/dev/abstractframework
```

This clones every package repo as a sibling directory inside the AbstractFramework root. Re-running pulls updates for already-cloned repos.
```bash
# Full build — stay in the .venv afterwards (recommended)
source ./scripts/build.sh

# Or without staying in the venv (you'll need to activate manually)
./scripts/build.sh

# Options (combinable):
source ./scripts/build.sh --python  # Python packages only
source ./scripts/build.sh --npm     # npm UI packages only
source ./scripts/build.sh --clean   # delete .venv first (avoids cross-project pollution)
```

`build.sh` installs every Python package in editable mode (`pip install -e`) from local checkouts — NOT from PyPI. This means your code changes take effect immediately. Third-party dependencies (pydantic, torch, etc.) are resolved from PyPI normally.
Important: `build.sh` requires `clone.sh` to have been run first — it expects sibling repo directories to exist.

Tip: Use `source` (not `./`) so your shell stays in the `.venv` after the build.

Tip: Use `--clean` if you see dependency conflicts from other projects in your `.venv`. This deletes the venv and creates a fresh one.
For end users who just want to install the published release:
```bash
# One-liner install (creates .venv, installs full framework from PyPI)
./scripts/install.sh

# Or manually
pip install "abstractframework==0.1.2"
```

```bash
# Configure your setup interactively
abstractcore --config

# Check readiness + download missing models
abstractcore --install
```

Now that you have something running:
- Architecture — Understand how the pieces fit together
- Configuration — All the environment variables and settings
- FAQ — Common questions and troubleshooting
- Scenarios — End-to-end paths by use case
- Guides — Focused "how it works" notes
- Glossary — Shared terminology
Each package also has detailed documentation:
- Every repo has `docs/getting-started.md`, `docs/architecture.md`, and more
- Check the repo README for the quickest overview