
Getting Started

Welcome! This guide gets you from zero to a working AbstractFramework setup in minutes.

AbstractFramework is modular — you can install the full framework in one command or pick specific packages. Whether you want a simple LLM API, a durable coding assistant, a visual workflow editor, voice-enabled agents, or a full production gateway — there's a path for you.

New here? Start with Path 0 to install everything, then jump to the path that matches what you want to build.

What Do You Want to Build?

Your Goal Start Here What You'll Use
Install the full, pinned framework release Path 0 abstractframework==0.1.2
Call LLMs with a unified API Path 1 abstractcore
Build a local coding assistant Path 2 abstractcode
Create durable workflows Path 3 abstractruntime
Deploy a remote run gateway Path 4 abstractgateway + abstractobserver
Use agent patterns (ReAct, etc.) Path 5 abstractagent
Add voice/audio to AbstractCore Path 6 abstractcore + abstractvoice (plugin)
Add image generation to AbstractCore Path 7 abstractcore + abstractvision (plugin)
Add music generation to AbstractCore Path 7a abstractcore + abstractmusic (plugin)
Build a knowledge graph Path 8 abstractmemory + abstractsemantics
macOS menu bar assistant Path 9 abstractassistant
Visual workflow editor (browser) Path 10 @abstractframework/flow
Browser-based coding assistant Path 11 @abstractframework/code
Create a specialized agent Path 12 abstractflow + clients
Integrate external tools via MCP Path 13 abstractcore
Use structured output (Pydantic) Path 14 abstractcore
Run an OpenAI-compatible API server Path 15 abstractcore[server]

Path 0: Full Framework (Recommended)

Install the pinned global release profile in one command:

pip install "abstractframework==0.1.2"

This installs all framework Python packages together, including:

Package Version
abstractcore 2.12.0
abstractruntime 0.4.2
abstractagent 0.3.1
abstractflow 0.3.7 (editor)
abstractcode 0.3.6
abstractgateway 0.1.0
abstractmemory 0.0.2
abstractsemantics 0.0.2
abstractvoice 0.6.3
abstractvision 0.2.1
abstractassistant 0.4.2

abstractcore is installed with the openai, anthropic, huggingface, embeddings, tokens, tools, media, compression, and server extras.

Use this path when you want a fully functional setup with minimal decision overhead.

After install, configure and verify your setup:

# Interactive guided setup (model, base URL, vision, API keys, audio, video, embeddings, logging)
abstractcore --config

# Check readiness + download missing models
abstractcore --install

Prerequisites

Python: 3.10 or newer

Node.js: 18+ (only for browser UIs)

An LLM Backend (pick one):

  • Local (recommended): Ollama, LM Studio, vLLM, llama.cpp, LocalAI
  • Cloud: OpenAI, Anthropic, Google, Groq, Together AI, Mistral
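If you are unsure which backend you already have configured, a quick stdlib check of the environment variables used in this guide can help. This is a hypothetical helper, not part of AbstractFramework; the variable names follow the provider examples below.

```python
import os

# Hypothetical helper: report which common LLM-backend environment
# variables are currently set. Names follow this guide's examples.
def detect_backends(environ=None):
    environ = os.environ if environ is None else environ
    candidates = {
        "ollama": ["OLLAMA_HOST"],
        "openai": ["OPENAI_API_KEY"],
        "anthropic": ["ANTHROPIC_API_KEY"],
    }
    return {name: all(var in environ for var in vars_)
            for name, vars_ in candidates.items()}

print(detect_backends({"OPENAI_API_KEY": "sk-..."}))
# {'ollama': False, 'openai': True, 'anthropic': False}
```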

Path 1: LLM Integration

The simplest path. Use AbstractCore as a unified LLM client.

Install

pip install abstractcore

Configure a Provider

Local with Ollama (free, no API key):

ollama serve
ollama pull qwen3:4b-instruct
export OLLAMA_HOST="http://localhost:11434"

Or with LM Studio (OpenAI-compatible):

export OPENAI_BASE_URL="http://127.0.0.1:1234/v1"
export OPENAI_API_KEY="local"

Or with Cloud APIs:

export OPENAI_API_KEY="sk-..."
# or
export ANTHROPIC_API_KEY="sk-ant-..."

Use It

from abstractcore import create_llm

# Local
llm = create_llm("ollama", model="qwen3:4b-instruct")

# Or cloud
# llm = create_llm("openai", model="gpt-4o")
# llm = create_llm("anthropic", model="claude-3-5-sonnet-latest")

response = llm.generate("What is durable execution?")
print(response.content)

Next: Add tool calling or structured output.


Path 2: Terminal Agent

Get a durable coding assistant running in your terminal.

Install

pip install abstractcode

Start Ollama

ollama serve
ollama pull qwen3:1.7b-q4_K_M
export OLLAMA_HOST="http://localhost:11434"

Run

abstractcode --provider ollama --model qwen3:1.7b-q4_K_M

Inside AbstractCode

  • Type /help for all commands
  • Mention files with @path/to/file in your prompts
  • Tool execution requires approval by default (toggle with /auto-accept)

Durability Note: Sessions persist across restarts. Close and reopen AbstractCode, and your full context (conversation history, tool calls, state) is preserved. To start fresh, type /clear.

Next: See AbstractCode docs.


Path 3: Durable Workflows

Build workflows that survive crashes and can pause/resume.

Install

pip install abstractruntime
# Add LLM integration:
pip install "abstractruntime[abstractcore]"

Key Concepts

  • Run: A durable workflow instance
  • Ledger: Append-only log of everything that happened
  • Effect: A request for something to happen (LLM call, tool call, timer)
  • Wait: An explicit pause point (state is checkpointed)
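The core idea behind these concepts can be sketched in plain Python: every event is appended to a ledger, so a crashed run can be rebuilt by replaying it from the start. This is an illustrative sketch with hypothetical names, not AbstractRuntime's actual API.

```python
import json
from dataclasses import dataclass, field

# Hypothetical sketch of durable execution: state is never stored
# directly, only derived by replaying an append-only ledger.
@dataclass
class Run:
    run_id: str
    vars: dict = field(default_factory=dict)
    status: str = "running"

def replay(run_id, ledger):
    """Rebuild run state from its append-only ledger."""
    run = Run(run_id=run_id)
    for entry in ledger:
        event = json.loads(entry)
        if event["type"] == "effect_result":
            run.vars[event["result_key"]] = event["payload"]
        elif event["type"] == "completed":
            run.status = "completed"
    return run

ledger = [
    json.dumps({"type": "effect_result", "result_key": "user_input", "payload": "Hello!"}),
    json.dumps({"type": "completed"}),
]
run = replay("run-1", ledger)
print(run.vars, run.status)  # {'user_input': 'Hello!'} completed
```

Because state is a pure function of the ledger, a process crash loses nothing: restart, replay, continue.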

Example

from abstractruntime import (
    Effect, EffectType, Runtime, StepPlan, WorkflowSpec,
    InMemoryLedgerStore, InMemoryRunStore
)

# Define workflow nodes
def ask(run, ctx):
    return StepPlan(
        node_id="ask",
        effect=Effect(
            type=EffectType.ASK_USER,
            payload={"prompt": "What would you like to do?"},
            result_key="user_input",
        ),
        next_node="done",
    )

def done(run, ctx):
    return StepPlan(node_id="done", complete_output={"answer": run.vars.get("user_input")})

# Create workflow and runtime
wf = WorkflowSpec(workflow_id="demo", entry_node="ask", nodes={"ask": ask, "done": done})
rt = Runtime(run_store=InMemoryRunStore(), ledger_store=InMemoryLedgerStore())

# Start and tick
run_id = rt.start(workflow=wf)
state = rt.tick(workflow=wf, run_id=run_id)
print(state.status.value)  # "waiting"

# Resume with user input
state = rt.resume(workflow=wf, run_id=run_id, wait_key=state.waiting.wait_key, payload={"text": "Hello!"})
print(state.status.value)  # "completed"

Next: See AbstractRuntime docs.


Path 4: Gateway + Observer

Deploy a remote control plane and observe runs in your browser.

Install

pip install "abstractgateway"
# If your workflows use LLM/tools:
pip install "abstractruntime[abstractcore]>=0.4.0"

Configure

# Required: authentication token
export ABSTRACTGATEWAY_AUTH_TOKEN="$(python -c 'import secrets; print(secrets.token_urlsafe(32))')"

# Required: CORS for browser access
export ABSTRACTGATEWAY_ALLOWED_ORIGINS="http://localhost:*,http://127.0.0.1:*"

# Workflow source
export ABSTRACTGATEWAY_WORKFLOW_SOURCE=bundle
export ABSTRACTGATEWAY_FLOWS_DIR="/path/to/your/bundles"
export ABSTRACTGATEWAY_DATA_DIR="$PWD/runtime/gateway"

Start the Gateway

abstractgateway serve --host 127.0.0.1 --port 8080

Verify it's running:

curl -sS "http://127.0.0.1:8080/api/health"

Start the Observer

In another terminal:

npx @abstractframework/observer

Open http://localhost:3001 in your browser:

  1. Set Gateway URL to http://127.0.0.1:8080
  2. Paste your Auth Token
  3. Click Connect

You're now observing your runs.

Next: See AbstractGateway docs.


Path 5: Agent Patterns

Use ready-made agent loops (ReAct, CodeAct, MemAct).

Install

pip install abstractagent

Example: ReAct Agent

from abstractagent import create_react_agent

agent = create_react_agent(provider="ollama", model="qwen3:4b-instruct")
agent.start("List the files in the current directory")
state = agent.run_to_completion()
print(state.output["answer"])

Next: See AbstractAgent docs.


Path 6: Voice I/O

Add speech-to-text and text-to-speech capabilities to AbstractCore.

Note: AbstractVoice is a capability plugin for AbstractCore. Once installed, it exposes llm.voice (TTS) and llm.audio (STT) on any LLM instance, keeping AbstractCore lightweight by default.

Install

pip install abstractcore abstractvoice

Prefetch Models (Recommended)

AbstractVoice is offline-first — prefetch models explicitly:

abstractvoice-prefetch --stt small
abstractvoice-prefetch --piper en

Use with AbstractCore (Recommended)

from abstractcore import create_llm

llm = create_llm("ollama", model="qwen3:4b-instruct")

# Check available capabilities
print(llm.capabilities.status())

# Text-to-speech via capability
wav_bytes = llm.voice.tts("Hello from AbstractCore!", format="wav")

# Speech-to-text via capability
text = llm.audio.transcribe("audio.wav", language="en")
print(text)

# Audio in LLM requests (transcribed automatically)
response = llm.generate(
    "Summarize the key points from this call.",
    media=["meeting.wav"],
    audio_policy="speech_to_text",
)
print(response.content)

Standalone Use

You can also use AbstractVoice directly without AbstractCore:

from abstractvoice import VoiceManager

vm = VoiceManager()

# Text-to-speech
vm.speak("Hello! This is AbstractVoice.")

# Speech-to-text (from file)
text = vm.transcribe_file("audio.wav")
print(text)

Interactive REPL

abstractvoice --verbose

Next: See AbstractVoice docs and AbstractCore Audio & Voice.


Path 7: Image Generation

Add text-to-image and image-to-image capabilities to AbstractCore.

Note: AbstractVision is a capability plugin for AbstractCore. Once installed, it exposes llm.vision for generative image tasks, keeping AbstractCore lightweight by default.

Supported Backends

  • HuggingFace (recommended) — Local diffusion models via diffusers
  • OpenAI-compatible APIs — Any server exposing /v1/images/generations

Note: Ollama and LM Studio do not currently support image generation models. Use HuggingFace for local image generation.

Install

pip install abstractcore abstractvision

Use with AbstractCore (Recommended)

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")

# Check available capabilities
print(llm.capabilities.status())

# Text-to-image via capability (requires HF_TOKEN or vision_base_url config)
# png_bytes = llm.vision.t2i("a red square")

Configure the vision backend (choose one):

# Option 1: HuggingFace (recommended for local generation)
export HF_TOKEN="hf_..."

# Option 2: OpenAI-compatible server
export ABSTRACTVISION_BASE_URL="http://localhost:7860/v1"

Standalone Use with HuggingFace

You can also use AbstractVision directly for local image generation:

from abstractvision import VisionManager, LocalAssetStore
from abstractvision.backends import HuggingFaceBackend, HuggingFaceBackendConfig

# Configure HuggingFace backend (local diffusion models)
backend = HuggingFaceBackend(
    config=HuggingFaceBackendConfig(
        model_id="stabilityai/stable-diffusion-xl-base-1.0",
        # device="mps",  # for Apple Silicon
    )
)

vm = VisionManager(backend=backend, store=LocalAssetStore())

# Generate image
result = vm.generate_image("a watercolor painting of a lighthouse")
print(result)  # {"$artifact": "...", "content_type": "image/png", ...}

CLI

# Using HuggingFace
abstractvision t2i --backend huggingface "a photo of a red fox"

# Using OpenAI-compatible server
abstractvision t2i --base-url http://localhost:7860/v1 "a photo of a red fox"

Next: See AbstractVision docs and AbstractCore Vision Capabilities.


Path 7a: Music Generation

Add text-to-music capabilities to AbstractCore.

Note: AbstractMusic is a capability plugin for AbstractCore. Once installed, it exposes llm.music for deterministic text-to-music generation, keeping AbstractCore lightweight by default.

Install

pip install abstractcore abstractmusic

Configure a local backend (ACE-Step v1.5)

AbstractMusic generates audio locally, in-process. The default backend is ACE-Step v1.5.

If you switch to the Diffusers backend, model licenses vary by checkpoint. Choose a model compatible with your intended usage.

Use with AbstractCore

from abstractcore import create_llm

llm = create_llm(
    # Any provider/model works here. The LLM does *not* generate music audio.
    # Music generation is performed by the configured AbstractMusic backend (ACE-Step by default).
    "ollama",
    model="qwen3:4b-instruct",
    music_backend="acestep",
    music_model_id="ACE-Step/Ace-Step1.5",
)

wav_bytes = llm.music.t2m("uplifting synthwave, 120bpm, catchy chorus", format="wav", duration_s=10.0)
with open("out.wav", "wb") as f:
    f.write(wav_bytes)

Path 8: Knowledge Graph

Build a temporal, provenance-aware knowledge graph.

Install

pip install abstractmemory
pip install abstractsemantics  # Schema registry

# Optional: persistent storage + vector search
pip install "abstractmemory[lancedb]"

Use It

from abstractmemory import InMemoryTripleStore, TripleAssertion, TripleQuery

store = InMemoryTripleStore()

# Add knowledge
store.add([
    TripleAssertion(
        subject="Paris",
        predicate="is_capital_of",
        object="France",
        scope="session",
        owner_id="sess-1",
    )
])

# Query
hits = store.query(TripleQuery(subject="paris", scope="session", owner_id="sess-1"))
print(hits[0].object)  # "france"

Next: See AbstractMemory docs.


Path 9: macOS Assistant

Get a menu bar AI assistant with optional voice.

Install

pip install abstractassistant
# Or with voice support:
pip install "abstractassistant[full]"

Run

# Tray mode (menu bar)
assistant tray

# Or single command
assistant run --provider ollama --model qwen3:4b-instruct --prompt "Summarize my changes"

Next: See AbstractAssistant docs.


Path 10: Flow Editor

Build and edit visual workflows in your browser.

Run

npx @abstractframework/flow

Open http://localhost:3003 in your browser.

What You Can Do

  • Drag-and-drop workflow nodes (LLM, tools, conditionals, loops)
  • Connect nodes visually
  • Test workflows in real-time
  • Export as .flow bundles for deployment

Next: See AbstractFlow docs.


Path 11: Code Web UI

Run the browser-based coding assistant.

Prerequisites

You need a running AbstractGateway (see Path 4).

Run

npx @abstractframework/code

Open http://localhost:3002 in your browser. Configure the gateway URL in the UI settings, then start coding.

Next: See AbstractCode web docs.


Path 12: Specialized Agent

Create a specialized agent that runs in any client (terminal, browser, custom apps).

Why?

Instead of writing agent logic in code, you:

  1. Author a visual workflow with the Flow Editor
  2. Declare an interface contract (abstractcode.agent.v1)
  3. Run it in any compatible client — no client-specific code needed

Use cases: code reviewers, deep researchers, data analysts, custom assistants.

Step 1: Author in the Flow Editor

npx @abstractframework/flow

Open http://localhost:3003 and create a workflow with:

  • On Flow Start node (outputs: provider, model, prompt)
  • Your agent logic (LLM nodes, tool nodes, conditionals, loops)
  • On Flow End node (inputs: response, success, meta)

Set interfaces: ["abstractcode.agent.v1"] in the workflow properties.

Step 2: Export as a Bundle

In the editor, export your workflow as a .flow bundle.

Step 3: Run Anywhere

Terminal (AbstractCode):

abstractcode --workflow /path/to/my-agent.flow

Install for easy access:

abstractcode workflow install /path/to/my-agent.flow
abstractcode --workflow my-agent

Deploy to Gateway:

Copy your .flow bundle to ABSTRACTGATEWAY_FLOWS_DIR. It will appear in:

  • Observer's workflow picker
  • Code Web UI's workflow picker
  • Gateway's /api/gateway/bundles discovery endpoint

Custom app:

from abstractcode.workflow_agent import WorkflowAgent

agent = WorkflowAgent(flow_ref="/path/to/my-agent.flow")
state = agent.run_to_completion(prompt="Analyze this code...")
print(state.output["response"])

Next: See AbstractFlow docs and Interface contracts.


Path 13: MCP Integration

Discover and use tools from external MCP (Model Context Protocol) servers.

What is MCP?

MCP is an open protocol for connecting LLMs to external tool providers. AbstractCore supports both HTTP and stdio MCP servers, letting you integrate external tool ecosystems without writing adapter code.
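At the wire level, MCP messages are JSON-RPC 2.0. As a sketch of the protocol (no network call here, just the message shape), a client discovers tools by sending a tools/list request; AbstractCore builds and sends this for you.

```python
import json

# Sketch of the JSON-RPC 2.0 message an MCP client sends to discover
# tools on a server. Shown only to illustrate the protocol shape.
def tools_list_request(request_id=1):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    })

msg = json.loads(tools_list_request())
print(msg["method"])  # tools/list
```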

Install

pip install abstractcore

Use It

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")

# MCP tools are discovered and presented alongside local tools
response = llm.generate(
    "Search for recent Python releases",
    mcp_servers=[{"url": "http://localhost:3000/mcp"}],
)

MCP tools integrate seamlessly with AbstractRuntime's durable execution: they participate in the same approval boundaries, ledger logging, and replay semantics as any other tool.

Next: See AbstractCore MCP docs.


Path 14: Structured Output

Extract structured data from any LLM using Pydantic models.

Install

pip install abstractcore

Use It

from pydantic import BaseModel
from abstractcore import create_llm

class Analysis(BaseModel):
    title: str
    key_points: list[str]
    confidence: float

llm = create_llm("openai", model="gpt-4o-mini")
result = llm.generate(
    "Analyze the pros and cons of microservices architecture.",
    response_model=Analysis,
)
print(result.title)
print(result.key_points)

AbstractCore uses provider-aware strategies — native JSON mode where available, with automatic retry and fallback for models that need it.
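The retry-and-validate idea can be sketched in plain Python. This is a simplified illustration with a hypothetical call_llm callable; AbstractCore's real strategy is provider-aware and more involved.

```python
import json

def parse_with_retries(call_llm, prompt, required_keys, max_retries=2):
    """Ask for JSON, validate it, and re-prompt on failure (sketch)."""
    message = prompt
    for _ in range(max_retries + 1):
        raw = call_llm(message)
        try:
            data = json.loads(raw)
            if all(k in data for k in required_keys):
                return data
            # Valid JSON but missing fields: ask again, more explicitly.
            message = prompt + " Respond with JSON containing: " + ", ".join(required_keys)
        except json.JSONDecodeError:
            message = prompt + " Respond with valid JSON only."
    raise ValueError("model never produced valid JSON")

# Fake model that fails once, then succeeds:
replies = iter(["not json", '{"title": "Microservices", "confidence": 0.8}'])
result = parse_with_retries(lambda m: next(replies), "Analyze...", ["title", "confidence"])
print(result["title"])  # Microservices
```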

Next: See AbstractCore Structured Output docs.


Path 15: OpenAI-Compatible Server

Turn AbstractCore into a multi-provider OpenAI-compatible API server. Route requests to any backend via model="provider/model".
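The routing convention is just a prefix split: everything before the first "/" names the provider, the rest names the model. An illustrative sketch (the server handles this internally):

```python
# Illustrative sketch of the "provider/model" routing convention.
def split_route(model: str):
    provider, _, name = model.partition("/")
    return provider, name

print(split_route("ollama/qwen3:4b-instruct"))  # ('ollama', 'qwen3:4b-instruct')
print(split_route("anthropic/claude-3-5-sonnet-latest"))
```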

Install

pip install "abstractcore[server]"

Run

python -m abstractcore.server.app

Use with Any OpenAI Client

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
resp = client.chat.completions.create(
    model="ollama/qwen3:4b-instruct",
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(resp.choices[0].message.content)

The server supports tool calling and media input. When the corresponding capability plugins are installed, it also exposes /v1/images/* (via AbstractVision), /v1/audio/* (via AbstractVoice), and /v1/audio/music (via AbstractMusic) endpoints.

Next: See AbstractCore Server docs.



Developer Setup (From Source)

If you want to work on AbstractFramework itself (contribute, modify, debug), use the source scripts instead of PyPI install.

Step 1: Clone all repositories

# Clone AbstractFramework + all 13 sibling repos into a single directory
./scripts/clone.sh

# Or into a specific directory
./scripts/clone.sh ~/dev/abstractframework

This clones every package repo as a sibling directory inside the AbstractFramework root. Re-running pulls updates for already-cloned repos.

Step 2: Build from source

# Full build — stay in the .venv afterwards (recommended)
source ./scripts/build.sh

# Or without staying in the venv (you'll need to activate manually)
./scripts/build.sh

# Options (combinable):
source ./scripts/build.sh --python    # Python packages only
source ./scripts/build.sh --npm       # npm UI packages only
source ./scripts/build.sh --clean     # delete .venv first (avoids cross-project pollution)

build.sh installs every Python package in editable mode (pip install -e) from local checkouts — NOT from PyPI. This means your code changes take effect immediately. Third-party dependencies (pydantic, torch, etc.) are resolved from PyPI normally.

Important: build.sh requires clone.sh to have been run first — it expects sibling repo directories to exist.

Tip: Use source (not ./) so your shell stays in the .venv after the build.

Tip: Use --clean if you see dependency conflicts from other projects in your .venv. This deletes the venv and creates a fresh one.

Install from PyPI (alternative)

For end users who just want to install the published release:

# One-liner install (creates .venv, installs full framework from PyPI)
./scripts/install.sh

# Or manually
pip install "abstractframework==0.1.2"

After install (source or PyPI)

# Configure your setup interactively
abstractcore --config

# Check readiness + download missing models
abstractcore --install

What's Next?

Now that you have something running:

  • Architecture — Understand how the pieces fit together
  • Configuration — All the environment variables and settings
  • FAQ — Common questions and troubleshooting
  • Scenarios — End-to-end paths by use case
  • Guides — Focused "how it works" notes
  • Glossary — Shared terminology

Each package also has detailed documentation:

  • Every repo has docs/getting-started.md, docs/architecture.md, and more
  • Check the repo README for the quickest overview