2 changes: 2 additions & 0 deletions docs/mint.json
@@ -106,6 +106,7 @@
"v1/integrations/groq",
"v1/integrations/haystack",
"v1/integrations/langchain",
"v1/integrations/llamaindex",
"v1/integrations/llama_stack",
"v1/integrations/litellm",
"v1/integrations/mistral",
@@ -174,6 +175,7 @@
"v2/integrations/google_adk",
"v2/integrations/google_generative_ai",
"v2/integrations/langchain",
"v2/integrations/llamaindex",
"v2/integrations/litellm",
"v2/integrations/openai",
"v2/integrations/agents_sdk",
88 changes: 88 additions & 0 deletions docs/v1/integrations/llamaindex.mdx
@@ -0,0 +1,88 @@
---
title: 'LlamaIndex'
description: 'AgentOps works seamlessly with LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.'
---

import CodeTooltip from '/snippets/add-code-tooltip.mdx'
import EnvTooltip from '/snippets/add-env-tooltip.mdx'

[LlamaIndex](https://www.llamaindex.ai/) is a framework for building context-augmented generative AI applications with LLMs. AgentOps provides observability into your LlamaIndex applications through automatic instrumentation.

<Steps>
<Step title="Install the AgentOps SDK">
<CodeGroup>
```bash pip
pip install agentops
```
```bash poetry
poetry add agentops
```
</CodeGroup>
</Step>
<Step title="Install LlamaIndex AgentOps Instrumentation">
<CodeGroup>
```bash pip
pip install llama-index-instrumentation-agentops
```
```bash poetry
poetry add llama-index-instrumentation-agentops
```
</CodeGroup>
</Step>
<Step title="Add 2 lines of code">
<CodeTooltip/>
<CodeGroup>
```python python
from llama_index.core import set_global_handler

# NOTE: You can configure AgentOps via environment variables (e.g. 'AGENTOPS_API_KEY'),
# as outlined in the AgentOps documentation, or pass the equivalent keyword
# arguments expected by AgentOps' AOClient as **eval_params in set_global_handler.

set_global_handler("agentops")
```
</CodeGroup>
<EnvTooltip />
<CodeGroup>
```python .env
AGENTOPS_API_KEY=<YOUR API KEY>
```
</CodeGroup>
Read more about environment variables in [Advanced Configuration](/v1/usage/advanced-configuration)
</Step>
<Step title="Run your LlamaIndex application">
Execute your program and visit [app.agentops.ai/drilldown](https://app.agentops.ai/drilldown) to observe your LlamaIndex application! 🕵️
<Tip>
After your run, AgentOps prints a clickable URL to the console that links directly to your session in the Dashboard
</Tip>
<div/>{/* Intentionally blank div for newline */}
<Frame type="glass" caption="Clickable link to session">
<img height="200" src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/link-to-session.gif?raw=true" alt="Clickable link to AgentOps session" />
</Frame>
</Step>
</Steps>

## Usage Pattern

Here's a simple example of how to use AgentOps with LlamaIndex:

```python
from llama_index.core import set_global_handler
import llama_index.core

# Set the global handler to AgentOps
set_global_handler("agentops")

# Your LlamaIndex application code here
# AgentOps will automatically track LLM calls and other operations
```

## Additional Resources

For more detailed information about LlamaIndex's observability features and AgentOps integration, check out the [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/module_guides/observability/#agentops).

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/scroll-img-fadein-animation.js"></script>
<script type="module" src="/scripts/button_heartbeat_animation.js"></script>
<link rel="stylesheet" href="/styles/styles.css" />
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>
79 changes: 79 additions & 0 deletions docs/v2/integrations/llamaindex.mdx
@@ -0,0 +1,79 @@
---
title: 'LlamaIndex'
description: 'AgentOps works seamlessly with LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.'
---

[LlamaIndex](https://www.llamaindex.ai/) is a framework for building context-augmented generative AI applications with LLMs. AgentOps provides comprehensive observability into your LlamaIndex applications through automatic instrumentation, allowing you to monitor LLM calls, track performance, and analyze your application's behavior.

## Installation

Install AgentOps and the LlamaIndex AgentOps instrumentation package:

<CodeGroup>
```bash pip
pip install agentops llama-index-instrumentation-agentops
```
```bash poetry
poetry add agentops llama-index-instrumentation-agentops
```
```bash uv
uv add agentops llama-index-instrumentation-agentops
```
</CodeGroup>

## Setting Up API Keys

You'll need an AgentOps API key from your [AgentOps Dashboard](https://app.agentops.ai/):

<CodeGroup>
```bash Export to CLI
export AGENTOPS_API_KEY="your_agentops_api_key_here"
```
```txt Set in .env file
AGENTOPS_API_KEY="your_agentops_api_key_here"
```
</CodeGroup>
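
However you set it, the key only needs to be visible in the process environment before AgentOps initializes. A minimal sketch of checking for it at startup (the helper `agentops_key_configured` is illustrative, not part of the AgentOps API):

```python
import os

def agentops_key_configured(env=os.environ):
    """Return True if an AgentOps API key is visible to this process."""
    return bool(env.get("AGENTOPS_API_KEY"))

if not agentops_key_configured():
    print("AGENTOPS_API_KEY is not set; export it or add it to your .env file")
```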

## Usage

Set the global handler to `"agentops"` at the start of your LlamaIndex application. AgentOps then automatically instruments LlamaIndex to track your LLM interactions and application performance.

```python
from llama_index.core import set_global_handler
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Set the global handler to AgentOps
# NOTE: You can configure AgentOps via environment variables (e.g. 'AGENTOPS_API_KEY'),
# as outlined in the AgentOps documentation, or pass the equivalent keyword
# arguments expected by AgentOps' AOClient as **eval_params in set_global_handler.
set_global_handler("agentops")

# Your LlamaIndex application code here
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Create a query engine
query_engine = index.as_query_engine()

# Query your data - AgentOps will automatically track this
response = query_engine.query("What is the main topic of these documents?")
print(response)
```

## What Gets Tracked

When you use AgentOps with LlamaIndex, the following operations are automatically tracked:

- **LLM Calls**: All interactions with language models including prompts, completions, and token usage
- **Embeddings**: Vector embedding generation and retrieval operations
- **Query Operations**: Search and retrieval operations on your indexes
- **Performance Metrics**: Response times, token costs, and success/failure rates
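
As a rough mental model (a toy illustration, not AgentOps internals), each tracked operation can be thought of as an event record that the dashboard aggregates into these totals:

```python
from dataclasses import dataclass

@dataclass
class TrackedEvent:
    """Toy stand-in for the kind of record AgentOps keeps per operation."""
    kind: str                  # e.g. "llm", "embedding", "query"
    prompt_tokens: int = 0
    completion_tokens: int = 0
    duration_s: float = 0.0
    success: bool = True

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

def summarize(events):
    """Aggregate events into dashboard-style totals."""
    return {
        "calls": len(events),
        "tokens": sum(e.total_tokens for e in events),
        "failures": sum(not e.success for e in events),
    }

events = [
    TrackedEvent("llm", prompt_tokens=120, completion_tokens=40, duration_s=1.2),
    TrackedEvent("embedding", prompt_tokens=30, duration_s=0.1),
]
print(summarize(events))  # → {'calls': 2, 'tokens': 190, 'failures': 0}
```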

## Additional Resources

For more detailed information about LlamaIndex's observability features and AgentOps integration, check out the [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/module_guides/observability/#agentops).

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/scroll-img-fadein-animation.js"></script>
<script type="module" src="/scripts/button_heartbeat_animation.js"></script>
<link rel="stylesheet" href="/styles/styles.css" />
35 changes: 35 additions & 0 deletions examples/llamaindex_examples/README.md
@@ -0,0 +1,35 @@
# LlamaIndex AgentOps Integration Example

This example demonstrates how to use AgentOps with LlamaIndex for observability and monitoring of your context-augmented generative AI applications.

## Setup

1. Install required packages:
```bash
pip install agentops llama-index-instrumentation-agentops llama-index python-dotenv
```

2. Set your API keys:
```bash
export AGENTOPS_API_KEY="your_agentops_api_key"
export OPENAI_API_KEY="your_openai_api_key"
```

## Files

- `llamaindex_example.py` - Python script example
- `llamaindex_example.ipynb` - Jupyter notebook example

## Usage

Run the Python script:
```bash
python llamaindex_example.py
```

Or open and run the Jupyter notebook:
```bash
jupyter notebook llamaindex_example.ipynb
```

After running, check your AgentOps dashboard for the recorded session.