Merged
3 changes: 1 addition & 2 deletions docs/mint.json
@@ -129,8 +129,7 @@
"v1/usage/advanced-configuration",
"v1/usage/tracking-llm-calls",
"v1/usage/tracking-agents",
"v1/usage/recording-operations",
"v1/usage/multiple-sessions"
"v1/usage/recording-operations"
],
"version": "v1"
},
53 changes: 25 additions & 28 deletions docs/v1/concepts/core-concepts.mdx
@@ -39,52 +39,49 @@ AgentOps can exist in one of two states:
- • Only one session exists at a time. All agent usage is synchronous.
- • Use cases: Scripting, development, local machine use (browser extensions, web client, etc)
</Card>
<Card title="Multi-Session" icon="hand-peace" iconType="solid" color="#2bd600">
<Card title="Concurrent Traces" icon="hand-peace" iconType="solid" color="#2bd600">
- • REST server
- • Asynchronous agents
- • Multiple concurrent workflows
</Card>
</CardGroup>

By default, AgentOps operates in single-session mode. All of the [base SDK functions](/v1/usage/sdk-reference) work as expected.
AgentOps supports both single and multiple concurrent traces (sessions). You can use either the modern trace-based API or the legacy session functions for backwards compatibility.

As soon as you create a second session, AgentOps enters **Multi-Session Mode**. As long as more than one session is active, the [base SDK functions](/v1/usage/sdk-reference) will no longer work.

If multiple sessions exist, you are expected to call the function on the relevant session. Ex
The modern approach uses `start_trace()` and `end_trace()` with automatic instrumentation, while legacy session functions remain available. Multiple concurrent traces work without any special mode switching or restrictions.
<CodeGroup>
```python single session
```python single trace
import agentops
from agentops.sdk.decorators import session
agentops.init()
trace_context = agentops.start_trace("my_workflow")
# Your agent logic here
agentops.end_trace(trace_context, "Success")
```

@session
def my_session():
# Your session code here
pass
```python concurrent traces
import agentops
agentops.init(auto_start_session=False)
trace_1 = agentops.start_trace("workflow_1")
trace_2 = agentops.start_trace("workflow_2")

# Run the session
my_session()
# Work with both traces concurrently
agentops.end_trace(trace_1, "Success")
agentops.end_trace(trace_2, "Success")
```

```python multi-session
```python using decorators
import agentops
from agentops.sdk.decorators import session

@session
def session_1():
# Session 1 code
pass

@session
def session_2():
# Session 2 code
@agentops.trace
def my_workflow():
# Your agent logic here
pass

# Run both sessions
session_1()
session_2()
my_workflow()
```
</CodeGroup>

For more documentation on using multiple concurrent sessions, please see [Multiple Sessions](v1/usage/multiple-sessions) and [FastAPI Example](/v1/examples/fastapi).
For more documentation on using multiple concurrent traces, please see [Concurrent Traces](/v1/usage/multiple-sessions) and [FastAPI Example](/v1/examples/fastapi).

### LLMs, Tools, and Operations (Spans)

@@ -146,4 +143,4 @@ Optionally, agents may also have:
*Details coming soon.*

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>
6 changes: 3 additions & 3 deletions docs/v1/examples/examples.mdx
@@ -17,8 +17,8 @@ mode: "wide"
Jupyter Notebook with a simple multi-agent design
</Card>

<Card title="Multi Session" icon="computer" href="/v1/examples/multi_session">
Manage multiple sessions at the same time
<Card title="Concurrent Traces" icon="computer" href="/v1/examples/multi_session">
Manage multiple concurrent traces and sessions
</Card>

<Card title="OpenAI Assistants" icon={<img src="https://www.github.com/agentops-ai/agentops/blob/main/docs/images/external/openai/openai-logomark.png?raw=true" alt="OpenAI Assistants" />} iconType="image" href="/v1/integrations/openai" href="/v1/examples/openai_assistants">
@@ -197,4 +197,4 @@ mode: "wide"

<script type="module" src="/scripts/button_heartbeat_animation.js" />

<script type="module" src="/scripts/adjust_api_dynamically.js" />
<script type="module" src="/scripts/adjust_api_dynamically.js" />
120 changes: 77 additions & 43 deletions docs/v1/examples/multi_session.mdx
@@ -1,13 +1,13 @@
---
title: 'Multi-Session Example'
description: 'Handling multiple sessions at the same time'
title: 'Concurrent Traces Example'
description: 'Managing multiple concurrent traces and sessions'
mode: "wide"
---

_View Notebook on <a href={'https://github.com/AgentOps-AI/agentops/blob/main/examples/multi_session_llm.ipynb'} target={'_blank'}>Github</a>_

# Multiple Concurrent Sessions
This example will show you how to run multiple sessions concurrently, assigning LLM calls to a specific session.
# Multiple Concurrent Traces
This example demonstrates how to run multiple traces (sessions) concurrently using both the modern trace-based API and the legacy session API for backwards compatibility.

First let's install the required packages:

@@ -22,7 +22,6 @@ Then import them:
```python
from openai import OpenAI
import agentops
from agentops import ActionEvent
import os
from dotenv import load_dotenv
```
@@ -41,79 +40,114 @@ OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") or "<your_openai_key>"
AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY") or "<your_agentops_key>"
```

Then, of course, lets init AgentOps. We're going to bypass creating a session automatically for the sake of showing it below.
Initialize AgentOps. We'll disable auto-start to manually create our traces:

```python
agentops.init(AGENTOPS_API_KEY, auto_start_session=False)
openai = OpenAI()
client = OpenAI()
```

Now lets create two sessions, each with an identifiable tag.
## Modern Trace-Based Approach

The recommended approach uses `start_trace()` and `end_trace()`:

```python
session_1 = agentops.start_session(tags=["multi-session-test-1"])
session_2 = agentops.start_session(tags=["multi-session-test-2"])
# Create multiple concurrent traces
trace_1 = agentops.start_trace("user_query_1", tags=["experiment_a"])
trace_2 = agentops.start_trace("user_query_2", tags=["experiment_b"])

print("session_id_1: {}".format(session_1.session_id))
print("session_id_2: {}".format(session_2.session_id))
print(f"Trace 1 ID: {trace_1.span.get_span_context().trace_id}")
print(f"Trace 2 ID: {trace_2.span.get_span_context().trace_id}")
```

## LLM Calls
Now lets go ahead and make our first OpenAI LLM call. The challenge with having multiple sessions at the same time is that there is no way for AgentOps to know what LLM call is intended to pertain to what active session. This means we need to do a little extra work in one of two ways.
## LLM Calls with Automatic Tracking

```python
messages = [{"role": "user", "content": "Hello"}]
```

### Patching Function
This method involves wrapping the LLM call withing a function on session. It can look a little counter-intuitive, but it easily tells us what session the call belongs to.
With the modern implementation, LLM calls are automatically tracked without needing special session assignment:

```python
# option 1: use session.patch
response = session_1.patch(openai.chat.completions.create)(
# LLM calls are automatically tracked and associated with the current context
messages_1 = [{"role": "user", "content": "Hello from trace 1"}]
response_1 = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=messages,
messages=messages_1,
temperature=0.5,
)
```

### Create patched function
If you're using the create function multiple times, you can create a new function with the same method

```python
observed_create = session_1.patch(openai.chat.completions.create)
obs_response = observed_create(
messages_2 = [{"role": "user", "content": "Hello from trace 2"}]
response_2 = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=messages,
messages=messages_2,
temperature=0.5,
)
```

### Keyword Argument
Alternatively, you can also pass the session into the LLM function call as a keyword argument. While this method works and is a bit more readable, it is not a "pythonic" pattern and can lead to linting errors in the code, as the base function is not expecting a `session` keyword.
## Using Context Managers

You can also use traces as context managers for automatic cleanup:

```python
# option 2: add session as a keyword argument
response2 = openai.chat.completions.create(
model="gpt-3.5-turbo", messages=messages, temperature=0.5, session=session_2
)
with agentops.start_trace("context_managed_trace") as trace:
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "Hello from context manager"}],
temperature=0.5,
)
# Trace automatically ends when exiting the context
```
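The cleanup guarantee a context manager provides can be sketched with Python's `contextlib`. This is an illustration of the pattern only, not AgentOps internals — the `Trace` class and `start_trace` function here are hypothetical stand-ins:

```python
from contextlib import contextmanager

class Trace:
    """Hypothetical stand-in for a trace object."""
    def __init__(self, name):
        self.name = name
        self.end_state = None

@contextmanager
def start_trace(name):
    trace = Trace(name)
    try:
        yield trace
        trace.end_state = "Success"  # normal exit: trace ends cleanly
    except Exception:
        trace.end_state = "Error"    # the trace still ends, then the error propagates
        raise

with start_trace("demo") as t:
    pass  # your agent logic here
print(t.end_state)  # → Success
```

The key point is the `try`/`except` around `yield`: the trace is closed whether the body succeeds or raises, which is why the context-manager form needs no explicit `end_trace` call.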

## Recording Events
Outside of LLM calls, there are plenty of other events that we want to track. You can learn more about these events [here](https://docs.agentops.ai/v1/concepts/events).
## Using Decorators

Recording these events on a session is as simple as `session.record(...)`
For even cleaner code, use decorators:

```python
session_1.record(ActionEvent(action_type="test event"))
@agentops.trace
def process_user_query(query: str):
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": query}],
temperature=0.5,
)
return response.choices[0].message.content

# Each function call creates its own trace
result_1 = process_user_query("What is the weather like?")
result_2 = process_user_query("Tell me a joke")
```

Now let's go ahead and end the sessions
## Legacy Session API (Backwards Compatibility)

For backwards compatibility, the legacy session API is still available:

```python
# Legacy approach - still works but not recommended for new code
session_1 = agentops.start_session(tags=["legacy-session-1"])
session_2 = agentops.start_session(tags=["legacy-session-2"])

# Legacy sessions work the same way as before
session_1.end_session(end_state="Success")
session_2.end_session(end_state="Success")
```

If you look in the AgentOps dashboard for these sessions, you will see two unique sessions, both with one LLM Event each, one with an Action Event as well.
## Ending Traces

End traces individually or all at once:

```python
# End specific traces
agentops.end_trace(trace_1, "Success")
agentops.end_trace(trace_2, "Success")

# Or end all active traces at once
# agentops.end_trace(end_state="Success")
```

## Key Differences from Legacy Multi-Session Mode

1. **No mode switching**: You can create multiple traces without entering a special "multi-session mode"
2. **Automatic LLM tracking**: LLM calls are automatically associated with the current execution context
3. **No exceptions**: No `MultiSessionException` or similar restrictions
4. **Cleaner API**: Use decorators and context managers for better code organization
5. **Backwards compatibility**: Legacy session functions still work for existing code
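The "current execution context" in point 2 can be sketched with Python's `contextvars`, the standard-library mechanism OpenTelemetry-style SDKs typically use to associate instrumented calls with the active trace. This is a simplified illustration, not AgentOps internals — `start_trace` and `record_llm_call` below are hypothetical:

```python
import asyncio
import contextvars

# The active trace for the current execution context.
_current_trace = contextvars.ContextVar("current_trace", default=None)

def start_trace(name):
    _current_trace.set({"name": name, "events": []})

def record_llm_call(prompt):
    # An instrumented LLM call looks up the active trace implicitly —
    # no session or trace argument is needed at the call site.
    trace = _current_trace.get()
    if trace is not None:
        trace["events"].append(prompt)
    return trace

async def workflow(name, prompt):
    # Each asyncio task runs in its own copy of the context,
    # so concurrent workflows do not clobber each other's trace.
    start_trace(name)
    return record_llm_call(prompt)

async def main():
    return await asyncio.gather(
        workflow("trace_1", "hello from 1"),
        workflow("trace_2", "hello from 2"),
    )

t1, t2 = asyncio.run(main())
print(t1["name"], t1["events"])  # trace_1 ['hello from 1']
print(t2["name"], t2["events"])  # trace_2 ['hello from 2']
```

Because each task gets an isolated context copy, setting the context variable in one workflow never leaks into the other — which is why no "multi-session mode" or explicit session assignment is needed.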

If you look in the AgentOps dashboard, you will see multiple unique traces, each with their respective LLM calls and events properly tracked.
