This page is a user-facing map of the public Python API exposed by abstractcore (see abstractcore/__init__.py). For a complete listing of functions/classes (including events), see API Reference.
New to AbstractCore? Start with Getting Started.
Implementation pointers (source of truth):

- `create_llm`: abstractcore/core/factory.py → abstractcore/providers/registry.py
- `BasicSession`: abstractcore/core/session.py
- Response/types: abstractcore/core/types.py
- Tool decorator: abstractcore/tools/core.py
Create a provider instance:

```python
from abstractcore import create_llm

llm = create_llm("openai", model="gpt-4o-mini")  # requires: pip install "abstractcore[openai]"
resp = llm.generate("Hello!")
print(resp.content)
```

Provider IDs (common): `openai`, `anthropic`, `openrouter`, `portkey`, `ollama`, `lmstudio`, `vllm`, `openai-compatible`, `huggingface`, `mlx`.
```python
from abstractcore import create_llm

llm_openrouter = create_llm("openrouter", model="openai/gpt-4o-mini")
llm_portkey = create_llm("portkey", model="gpt-5-mini", api_key="PORTKEY_API_KEY", config_id="pcfg_...")
```

Gateway notes:

- OpenRouter uses `OPENROUTER_API_KEY` (model names like `openai/...`).
- Portkey uses `PORTKEY_API_KEY` plus a config id (`PORTKEY_CONFIG`).
- Optional generation parameters (`temperature`, `top_p`, `max_output_tokens`, etc.) are only forwarded when explicitly set.
Keep conversation state:

```python
from abstractcore import BasicSession, create_llm

session = BasicSession(create_llm("anthropic", model="claude-haiku-4-5"))  # requires: abstractcore[anthropic]
print(session.generate("Give me 3 name ideas.").content)
print(session.generate("Pick the best one.").content)
```

Define tools in Python with a decorator, then pass them to `generate()` / `agenerate()`:
```python
from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"

llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("Use the tool.", tools=[get_weather])
print(resp.tool_calls)
```

Most calls return a `GenerateResponse` object (or an iterator of them for streaming). Common fields:

- `content`: cleaned assistant text
- `tool_calls`: structured tool calls (pass-through by default)
- `usage`: token usage (provider-dependent)
- `metadata`: provider/model-specific fields (for example, extracted reasoning text when configured)
`download_model(...)` is an async generator that yields `DownloadProgress` updates while a model is being fetched.

Supported providers:

- `ollama`: pulls via the Ollama HTTP API (`/api/pull`)
- `huggingface` / `mlx`: downloads from HuggingFace Hub (requires `pip install "abstractcore[huggingface]"`; pass `token=` for gated models)
Example:

```python
import asyncio

from abstractcore import download_model

async def main():
    async for p in download_model("ollama", "qwen3:4b-instruct-2507-q4_K_M"):
        print(p.status.value, p.message)

asyncio.run(main())
```

Implementation: abstractcore/download.py. For provider setup and base URLs, see Prerequisites.
Tools are passed explicitly to `generate()` / `agenerate()`:

```python
from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"

llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("Use the tool.", tools=[get_weather])
print(resp.tool_calls)
```

See Tool Calling and Tool Syntax Rewriting.
If you want a ready-made toolset (web + filesystem helpers), install:

```shell
pip install "abstractcore[tools]"
```

Then import from `abstractcore.tools.common_tools` (for example `web_search`, `skim_websearch`, `skim_url`, `fetch_url`). See Tool Calling for usage patterns and when to use `skim_*` vs `fetch_*`.
Pass a Pydantic model via `response_model=...` to receive a typed result:

```python
from pydantic import BaseModel

from abstractcore import create_llm

class Answer(BaseModel):
    title: str
    bullets: list[str]

llm = create_llm("openai", model="gpt-4o-mini")
result = llm.generate("Summarize HTTP/3 in 3 bullets.", response_model=Answer)
print(result.bullets)
```

See Structured Output.
Media handling is opt-in:

```shell
pip install "abstractcore[media]"
```

Then pass `media=[...]` to `generate()` / `agenerate()` (or use the media pipeline). Media behavior is policy-driven:

- Images: use a vision-capable model, or configure vision fallback (caption → inject short observations).
- Video: controlled by `video_policy` (native when supported; otherwise frame sampling via `ffmpeg` + vision handling).
- Audio: controlled by `audio_policy` (native when supported; otherwise optional speech-to-text via `abstractvoice`).

See Media Handling, Vision Capabilities, and Centralized Config.
If you want an OpenAI-compatible `/v1` gateway, install and run the server:

```shell
pip install "abstractcore[server]"
python -m abstractcore.server.app
```

See Server.