
Building a Human Handoff Interface for AI-Powered Insurance Agent Using Parlant and Streamlit

Estimated reading time: 9 minutes

  • The article outlines how to implement a human handoff system for AI-powered insurance agents, seamlessly blending AI efficiency with human expertise.
  • It leverages Parlant for building sophisticated conversational AI agents and Streamlit for creating an interactive, real-time interface for human operators.
  • Key components include defining AI tools (such as initiate_human_handoff to switch to manual mode), structuring conversational journeys, and setting global guidelines within Parlant.
  • The Streamlit interface enables human operators (Tier 2) to monitor live customer conversations, view chat history, and respond directly—either as a human agent or on behalf of the AI.
  • This hybrid approach ensures complex or sensitive customer inquiries are handled with nuanced understanding, ultimately enhancing customer satisfaction and operational efficiency.

In the rapidly evolving landscape of customer service, artificial intelligence (AI) has emerged as a transformative force, automating routine tasks and providing instant support. However, even the most sophisticated AI agents inevitably encounter scenarios that require the nuanced understanding, empathy, and problem-solving skills of a human. This is where the concept of human handoff becomes indispensable, creating a harmonious blend of AI efficiency and human expertise.

This article delves into implementing a robust human handoff system for an AI-powered insurance agent. We will leverage Parlant, a powerful framework for building conversational AI, alongside Streamlit, a popular library for creating interactive web applications. Together, these tools enable a seamless transition from automated assistance to direct human intervention, ensuring that complex or sensitive customer inquiries are handled with the utmost care and professionalism.

Specifically, you'll learn how to create a Streamlit-based interface that allows a human operator (Tier 2) to view live customer messages and respond directly within the same session, bridging the gap between automation and human expertise.

Seamless Integration: Setting Up Your Development Environment

Before diving into the core logic of our AI agent and human handoff interface, it’s crucial to set up a secure and functional development environment. This involves securing your API keys and installing the necessary libraries.

First, ensure you have a valid OpenAI API key. This key will allow your AI agent to interact with OpenAI’s models for generating responses and understanding complex queries. Once obtained, create a .env file in your project’s root directory. This method of storing credentials keeps your API key secure and prevents it from being hardcoded directly into your codebase, which is a vital security practice.

OPENAI_API_KEY=your_api_key_here

Next, install the required Python packages. Parlant will be used for building the conversational AI, python-dotenv for loading environment variables, and Streamlit for developing the interactive human handoff interface. Open your terminal or command prompt and run the following command:

pip install parlant python-dotenv streamlit

Actionable Step 1: Prepare Your Environment and Dependencies

  1. Obtain your OpenAI API key from the OpenAI dashboard.
  2. Create a .env file in your project’s root and store your API key securely.
  3. Install Parlant, python-dotenv, and Streamlit using pip install parlant python-dotenv streamlit (a quick sanity check follows this list).
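
As a quick sanity check, here is a minimal sketch (assuming the .env file sits in the directory you run the script from) that confirms the key is picked up before you wire it into the agent:

import os
from dotenv import load_dotenv

# Read variables from .env into the process environment
load_dotenv()

# Fail fast if the key is missing, rather than deep inside the agent
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY not found - check your .env file")
print("OpenAI API key loaded.")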

Architecting the Intelligent Agent with Parlant

The heart of our system is the AI agent, defined within the agent.py script. This script outlines the AI’s behavior, its ability to interact with specific tools, manage conversations through defined journeys, and, most importantly, initiate a handoff to a human operator when needed.

We begin by loading essential libraries for asynchronous operations, environment variables, and Parlant’s SDK:

import asyncio
import os
from datetime import datetime
from dotenv import load_dotenv
import parlant.sdk as p

load_dotenv()

Defining the Agent’s Tools

Tools are functions that allow the AI agent to perform specific actions, such as retrieving data or updating records. For our insurance agent, we’ve defined three core tools that mimic real-world interactions:

@p.tool
async def get_open_claims(context: p.ToolContext) -> p.ToolResult:
    return p.ToolResult(data=["Claim #123 - Pending", "Claim #456 - Approved"])


@p.tool
async def file_claim(context: p.ToolContext, claim_details: str) -> p.ToolResult:
    return p.ToolResult(data=f"New claim filed: {claim_details}")


@p.tool
async def get_policy_details(context: p.ToolContext) -> p.ToolResult:
    return p.ToolResult(data={
        "policy_number": "POL-7788",
        "coverage": "Covers accidental damage and theft up to $50,000"
    })

These tools enable the agent to fetch open claims, file new ones based on customer input, and provide detailed policy information. They form the practical backbone of the AI’s ability to assist customers effectively.

Crucially, the initiate_human_handoff tool is the gateway to human intervention:

@p.tool
async def initiate_human_handoff(context: p.ToolContext, reason: str) -> p.ToolResult:
    """Initiate handoff to a human agent when the AI cannot adequately help the customer."""
    print(f"Initiating human handoff: {reason}")
    # Setting the session to manual mode stops automatic AI responses
    return p.ToolResult(
        data=f"Human handoff initiated because: {reason}",
        control={
            "mode": "manual"  # Switch the session to manual mode
        }
    )

When this tool is triggered, it sends a signal to Parlant to switch the conversation session into “manual” mode. This crucial step pauses the AI’s automated responses, allowing a human operator to take over without interruption.

Defining the Glossary and Journeys

A glossary ensures the AI uses consistent terminology and provides predefined answers for common queries. For instance, our agent can provide standard responses for “Customer Service Number” or “Operating Hours.”

async def add_domain_glossary(agent: p.Agent):
    await agent.create_term(
        name="Customer Service Number",
        description="You can reach us at +1-555-INSURE",
    )
    await agent.create_term(
        name="Operating Hours",
        description="We are available Mon-Fri, 9AM-6PM",
    )

Journeys, on the other hand, define multi-turn conversational flows, guiding customers through specific processes like filing a claim or understanding policy details. These structured flows ensure a coherent and efficient customer experience.

# ---------------------------
# Claim Journey
# ---------------------------
async def create_claim_journey(agent: p.Agent) -> p.Journey:
    journey = await agent.create_journey(
        title="File an Insurance Claim",
        description="Helps customers report and submit a new claim.",
        conditions=["The customer wants to file a claim"],
    )
    s0 = await journey.initial_state.transition_to(chat_state="Ask for accident details")
    s1 = await s0.target.transition_to(tool_state=file_claim, condition="Customer provides details")
    s2 = await s1.target.transition_to(chat_state="Confirm claim was submitted", condition="Claim successfully created")
    await s2.target.transition_to(state=p.END_JOURNEY, condition="Customer confirms submission")
    return journey


# ---------------------------
# Policy Journey
# ---------------------------
async def create_policy_journey(agent: p.Agent) -> p.Journey:
    journey = await agent.create_journey(
        title="Explain Policy Coverage",
        description="Retrieves and explains customer's insurance coverage.",
        conditions=["The customer asks about their policy"],
    )
    s0 = await journey.initial_state.transition_to(tool_state=get_policy_details)
    await s0.target.transition_to(
        chat_state="Explain the policy coverage clearly",
        condition="Policy info is available",
    )
    await agent.create_guideline(
        condition="Customer presses for legal interpretation of coverage",
        action="Politely explain that legal advice cannot be provided",
    )
    return journey

These journeys automate common scenarios, while a crucial guideline prevents the AI from offering legal advice, ensuring compliance.

Defining the Main Runner and Handoff Guideline

The main() function orchestrates the entire agent, integrating tools, glossary, and journeys. Crucially, it defines a global guideline for human handoff:

async def main():
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Insurance Support Agent",
            description=(
                "Friendly Tier-1 AI assistant that helps with claims and policy questions. "
                "Escalates complex or unresolved issues to human agents (Tier-2)."
            ),
        )

        # Add shared terms & definitions
        await add_domain_glossary(agent)

        # Journeys
        claim_journey = await create_claim_journey(agent)
        policy_journey = await create_policy_journey(agent)

        # Disambiguation rule
        status_obs = await agent.create_observation(
            "Customer mentions an issue but doesn't specify if it's a claim or policy"
        )
        await status_obs.disambiguate([claim_journey, policy_journey])

        # Global Guidelines
        await agent.create_guideline(
            condition="Customer asks about unrelated topics",
            action="Kindly redirect them to insurance-related support only",
        )

        # Human Handoff Guideline
        await agent.create_guideline(
            condition="Customer requests human assistance or AI is uncertain about the next step",
            action="Initiate human handoff and notify Tier-2 support.",
            tools=[initiate_human_handoff],
        )

        print("Insurance Support Agent with Human Handoff is ready! Open the Parlant UI to chat.")


if __name__ == "__main__":
    asyncio.run(main())

The “Human Handoff Guideline” is a critical component, detecting situations where AI assistance falls short (e.g., explicit customer request for human help, or AI uncertainty). When this condition is met, the agent is instructed to use the initiate_human_handoff tool, triggering the transition to manual mode.

Actionable Step 2: Develop Your Parlant-Powered AI Agent

  1. Define asynchronous tools for core insurance operations (e.g., get_open_claims, file_claim, get_policy_details).
  2. Implement the initiate_human_handoff tool with control={"mode": "manual"} to gracefully pause AI responses.
  3. Structure conversational flows with Parlant journeys (e.g., “File an Insurance Claim,” “Explain Policy Coverage”) and populate a glossary for consistent responses.
  4. In the main function, configure the agent with all tools, journeys, and, critically, a global guideline that triggers the human handoff tool under specific conditions.

To run your agent, simply execute:

python agent.py

This will start the Parlant agent locally, ready to handle conversations and manage session states.

Bridging AI and Human: Building the Streamlit Handoff Interface

With our intelligent agent running, the next step is to create an intuitive interface for human operators to monitor and interact with ongoing sessions. This is where handoff.py comes in, utilizing Streamlit to build a real-time chat interface.

Importing Libraries and Setting Up the Parlant Client

We import asyncio for asynchronous operations, streamlit for the UI, and AsyncParlantClient to connect to our running Parlant agent.

import asyncio
import streamlit as st
from datetime import datetime
from parlant.client import AsyncParlantClient

client = AsyncParlantClient(base_url="http://localhost:8800")

This client establishes a connection to the Parlant server, allowing the Streamlit application to fetch session events and send messages.

Session State Management and Message Rendering

Streamlit’s st.session_state is essential for persisting data across user interactions, ensuring that chat history and event offsets are maintained. The render_message function provides a clear visual distinction between messages from the customer, AI, or human agent, improving readability for operators.

if "events" not in st.session_state: st.session_state.events = []
if "last_offset" not in st.session_state: st.session_state.last_offset = 0 def render_message(message, source, participant_name, timestamp): if source == "customer": st.markdown(f"** Customer [{timestamp}]:** {message}") elif source == "ai_agent": st.markdown(f"** AI [{timestamp}]:** {message}") elif source == "human_agent": st.markdown(f"** {participant_name} [{timestamp}]:** {message}") elif source == "human_agent_on_behalf_of_ai_agent": st.markdown(f"** (Human as AI) [{timestamp}]:** {message}")

Fetching and Sending Messages

The fetch_events asynchronous function continuously polls the Parlant server for new messages (events) within a specified session. This ensures the human operator sees the conversation unfold in near real-time.

async def fetch_events(session_id):
    try:
        events = await client.sessions.list_events(
            session_id=session_id,
            kinds="message",
            min_offset=st.session_state.last_offset,
            wait_for_data=5
        )
        for event in events:
            message = event.data.get("message")
            source = event.source
            participant_name = event.data.get("participant", {}).get("display_name", "Unknown")
            timestamp = getattr(event, "created", None) or event.data.get("created", "Unknown Time")
            event_id = getattr(event, "id", "Unknown ID")
            st.session_state.events.append(
                (message, source, participant_name, timestamp, event_id)
            )
            st.session_state.last_offset = max(st.session_state.last_offset, event.offset + 1)
    except Exception as e:
        st.error(f"Error fetching events: {e}")

To allow human operators to respond, two functions are defined: send_human_message, which marks the message as coming directly from a human agent, and send_message_as_ai, which allows the human to send a message on behalf of the AI. This flexibility is crucial for maintaining a consistent customer experience even during a handoff.

async def send_human_message(session_id: str, message: str, operator_name: str = "Tier-2 Operator"):
    event = await client.sessions.create_event(
        session_id=session_id,
        kind="message",
        source="human_agent",
        message=message,
        participant={
            "id": "operator-001",
            "display_name": operator_name
        }
    )
    return event


async def send_message_as_ai(session_id: str, message: str):
    event = await client.sessions.create_event(
        session_id=session_id,
        kind="message",
        source="human_agent_on_behalf_of_ai_agent",
        message=message
    )
    return event

Streamlit Interface Layout

The final part of handoff.py constructs the interactive Streamlit UI. It features an input field for the Parlant Session ID, a dynamically updated chat history display, and distinct buttons for sending messages as either a human operator or on behalf of the AI.

st.title(" Human Handoff Assistant") session_id = st.text_input("Enter Parlant Session ID:") if session_id: st.subheader("Chat History") if st.button("Refresh Messages"): asyncio.run(fetch_events(session_id)) for msg, source, participant_name, timestamp, event_id in st.session_state.events: render_message(msg, source, participant_name, timestamp) st.subheader("Send a Message") operator_msg = st.text_input("Type your message:") if st.button("Send as Human"): if operator_msg.strip(): asyncio.run(send_human_message(session_id, operator_msg)) st.success("Message sent as human agent ") asyncio.run(fetch_events(session_id)) if st.button("Send as AI"): if operator_msg.strip(): asyncio.run(send_message_as_ai(session_id, operator_msg)) st.success("Message sent as AI ") asyncio.run(fetch_events(session_id))

Actionable Step 3: Build and Connect the Streamlit Handoff UI

  1. Initialize an AsyncParlantClient to connect to your running Parlant agent.
  2. Implement Streamlit’s session_state for persistent data and a clear render_message function for visual clarity.
  3. Create an asynchronous function (fetch_events) to retrieve messages in real-time and helper functions (send_human_message, send_message_as_ai) for sending responses.
  4. Construct the Streamlit UI with an input for the session ID, a dynamically updated chat history, and buttons to send messages as a human or on behalf of the AI.
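
With the agent from agent.py running, launching the operator view takes a single command (assuming the interface code above is saved as handoff.py):

streamlit run handoff.py

Streamlit serves the handoff dashboard in your browser; paste the Parlant session ID of the conversation you want to monitor, refresh the messages, and reply either as the human operator or on behalf of the AI.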

Real-World Example: A Day in the Life of a Hybrid Insurance Agent

Imagine a customer, Alice, interacting with our AI insurance agent. Alice asks, “Does my premium increase if I install a security system, and what specific systems qualify for a discount?” While the AI can retrieve general policy details, the nuance of specific system qualifications and their impact on premiums might require a more detailed, human-level assessment. Detecting this complexity or even an explicit request for a human, the AI agent’s “Human Handoff Guideline” triggers the initiate_human_handoff tool, switching the session to manual mode.

The Tier-2 operator, monitoring the Streamlit handoff interface, sees Alice’s conversation instantly appear. They review the chat history, understanding the context. The operator, leveraging their expertise, checks internal databases for qualifying security systems and discount policies. They then type their response into the Streamlit interface and choose to “Send as Human,” providing Alice with a precise, informed answer and potentially offering to help apply for the discount. This swift, informed intervention ensures Alice receives the best possible service, bridging the gap where AI reaches its limits.

Conclusion: The Future of Customer Service – A Synergistic Approach

The integration of AI and human expertise represents the pinnacle of modern customer service. By building a human handoff interface with Parlant and Streamlit, we empower AI agents to handle routine inquiries efficiently while providing a robust mechanism for human operators to intervene seamlessly when complex or sensitive situations arise. This hybrid approach enhances customer satisfaction, improves operational efficiency, and ensures that every customer interaction is handled with the appropriate level of care and intelligence. The future of customer service isn’t about replacing humans with AI; it’s about augmenting human capabilities with powerful AI, creating a synergistic and superior experience.

Ready to empower your AI agents with seamless human collaboration? Explore the full potential of Parlant and Streamlit today!


FAQ: Frequently Asked Questions

What is human handoff in the context of AI agents?

Human handoff refers to the process where an AI agent seamlessly transfers a customer interaction to a human operator when the AI encounters complex, sensitive, or ambiguous queries that require nuanced human understanding, empathy, or problem-solving skills.

What roles do Parlant and Streamlit play in building this system?

Parlant is used as the foundational framework for building the conversational AI agent, defining its tools, conversational journeys, and rules for interaction. Streamlit is utilized to create the interactive web-based interface for human operators, allowing them to monitor AI sessions in real-time and intervene directly.

How does the AI agent initiate a human handoff?

The AI agent initiates a human handoff through a dedicated tool, initiate_human_handoff, which is triggered by a global guideline in Parlant. This guideline can detect explicit customer requests for human assistance or situations where the AI is uncertain about the next step. Triggering this tool switches the conversation session into “manual” mode, pausing automated AI responses.

What capabilities does the Streamlit interface provide to human operators?

The Streamlit interface allows human operators to view live chat history, including messages from both the customer and the AI. It provides functionality to send messages either directly as a human agent or on behalf of the AI, ensuring a flexible and consistent customer experience during a handoff.

Why is a hybrid AI-human approach superior in customer service?

A hybrid approach combines the efficiency of AI in handling routine inquiries with the irreplaceable empathy and problem-solving abilities of humans for complex cases. This synergistic model enhances overall customer satisfaction, improves operational efficiency, and ensures that every customer interaction receives the appropriate level of intelligent and compassionate support.
