Advanced Patterns
Multiple until Statements
Based supports multiple `until` statements on a single loop, enabling sophisticated conversation flows. Each `until` block represents a different condition and can trigger a different handling path.
```
# Multi-condition conversation handler example
loop:
    response = talk(
        "Welcome to our customer service bot. What can I help you with today?",
        True
    )
until "User wants to check order status":
    order_query = response.ask(
        question="Is the user asking about checking their order status? Extract order number if mentioned.",
        example={"is_order_status": True, "order_number": "ABC123"}
    )
    if order_query["is_order_status"]:
        # Handle order status request
        if "order_number" in order_query and order_query["order_number"]:
            order_details = get_order_details(order_query["order_number"])
            say(f"Your order {order_query['order_number']} is {order_details['status']}. Expected delivery: {order_details['delivery_date']}", exact=True)
        else:
            say("I'd be happy to check your order status. Could you please provide your order number?", exact=True)
        break
until "User wants to make a return":
    return_query = response.ask(
        question="Is the user asking about making a return? Extract product details if mentioned.",
        example={"is_return": True, "product": "Wireless Headphones"}
    )
    if return_query["is_return"]:
        # Handle return request
        say("I can help you process a return. Let me guide you through our return policy and steps.", exact=True)
        # Additional return handling logic
        break
until "User wants to speak to human agent":
    agent_query = response.ask(
        question="Does the user want to speak to a human agent?",
        example={"wants_human": True}
    )
    if agent_query["wants_human"]:
        say("I'll connect you with a customer service representative right away. Please hold for a moment.", exact=True)
        transfer_to_agent()
        break
```
Beyond simple string conditions, Based supports tool schema until conditions that allow you to define structured data extraction directly in the until clause. This enables the LLM to extract typed parameters when a condition matches, providing structured data for your flow logic.
Tool schemas can be defined using either a simplified format or the full OpenAI tool format:
Simplified Format:
```
# Define a tool schema for collecting user information
get_user_info = {
    "name": "get_user_info",
    "description": "Get user contact information",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "User's name"},
            "email": {"type": "string", "description": "User's email address"},
            "phone": {"type": "string", "description": "User's phone number"}
        },
        "required": ["name"]
    }
}
```
Full OpenAI Tool Format:
```
# Define using full OpenAI tool format
schedule_appointment = {
    "type": "function",
    "function": {
        "name": "schedule_appointment",
        "description": "Schedule an appointment for the user",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {"type": "string", "description": "Appointment date (YYYY-MM-DD)"},
                "time": {"type": "string", "description": "Appointment time (HH:MM)"},
                "notes": {"type": "string", "description": "Optional appointment notes"}
            },
            "required": ["date", "time"]
        }
    }
}
```
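The two formats carry the same information; the simplified form appears to be just the inner `function` object without the outer wrapper. A minimal Python sketch of that mapping (the helper name `to_openai_tool` is illustrative, not part of Based):

```python
def to_openai_tool(schema: dict) -> dict:
    """Wrap a simplified tool schema in the full OpenAI tool format.

    Schemas already in the full format (top-level "type": "function")
    are returned unchanged.
    """
    if schema.get("type") == "function":
        return schema
    return {"type": "function", "function": schema}


get_user_info = {
    "name": "get_user_info",
    "description": "Get user contact information",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "User's name"},
        },
        "required": ["name"],
    },
}

full = to_openai_tool(get_user_info)
print(full["function"]["name"])  # get_user_info
```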
Once you’ve defined a tool schema, you can use it in an until clause. The AI will match user intent to the tool’s description and extract the specified parameters.
Basic Syntax (without binding):
```
until tool_schema_variable:
    # The condition matched, but parameters aren't captured
    say("Action triggered!")
```
With Parameter Binding (using as):
```
until tool_schema_variable as extracted_params:
    # extracted_params contains the parameters extracted by the LLM
    print(extracted_params)  # {"name": "John", "email": "john@example.com"}
```
Complete Example: Mixed Conditions
You can mix string conditions with tool schema conditions in the same loop. The first matching condition is triggered:
```
# Define tool schemas
get_contact_info = {
    "name": "get_contact_info",
    "description": "Collect user's contact information when they provide their name and email",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "User's full name"},
            "email": {"type": "string", "description": "User's email address"}
        },
        "required": ["name", "email"]
    }
}

book_meeting = {
    "name": "book_meeting",
    "description": "Book a meeting when user wants to schedule a call or appointment",
    "parameters": {
        "type": "object",
        "properties": {
            "date": {"type": "string", "description": "Meeting date"},
            "time": {"type": "string", "description": "Meeting time"},
            "topic": {"type": "string", "description": "Meeting topic or agenda"}
        },
        "required": ["date", "time"]
    }
}

# Use in conversation flow
say("Hello! I can help you with contact info or scheduling meetings.")

loop:
    response = talk("How can I help you today?", True)
until "user wants to end the conversation":
    say("Goodbye! Have a great day.")
until get_contact_info as contact:
    # contact contains: {"name": "...", "email": "..."}
    say(f"Thanks {contact['name']}! I've noted your email as {contact['email']}.", exact=True)
until book_meeting as meeting:
    # meeting contains: {"date": "...", "time": "...", "topic": "..."}
    say(f"Meeting booked for {meeting['date']} at {meeting['time']}.", exact=True)
    if meeting.get("topic"):
        say(f"Topic: {meeting['topic']}", exact=True)
```
Tool schema conditions provide type-safe parameter extraction. The LLM will attempt to extract all specified parameters based on the conversation context. Required parameters should be marked in the schema’s required array.
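If you want to defend against incomplete extractions, you can check the returned parameters against the schema's `required` array before acting on them. A small Python sketch of such a guard (the `missing_required` helper is illustrative, not a Based built-in):

```python
def missing_required(schema: dict, params: dict) -> list:
    """Return the names of required parameters absent or empty in `params`.

    Accepts either the simplified schema format or the full OpenAI format.
    """
    fn = schema.get("function", schema)  # unwrap the full format if present
    required = fn.get("parameters", {}).get("required", [])
    return [name for name in required if params.get(name) in (None, "")]


book_meeting = {
    "name": "book_meeting",
    "description": "Book a meeting",
    "parameters": {
        "type": "object",
        "properties": {
            "date": {"type": "string"},
            "time": {"type": "string"},
            "topic": {"type": "string"},
        },
        "required": ["date", "time"],
    },
}

print(missing_required(book_meeting, {"date": "2024-06-01"}))  # ['time']
```

If anything is missing, the handler can re-prompt the user for just those fields instead of failing.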
Conditional Flow Control
Based scripts can implement conditional flow control using standard Python syntax, allowing for dynamic conversation paths based on user responses.
```
# Determine recommendation approach based on user expertise and preferences
# Assumes `preferences` (including the user's genre) was collected earlier in the conversation
loop:
    expertise_response = talk("How familiar are you with this book genre?", True)
until "User indicates their expertise level and reading preferences":
    user_profile = expertise_response.ask(
        question="Determine the user's expertise level and reading preferences.",
        example={
            "level": "beginner",
            "prefers_series": True,
            "likes_long_books": False
        }
    )
    # Create a personalized recommendation strategy
    if user_profile["level"] == "beginner":
        if user_profile["prefers_series"]:
            recommendations = get_beginner_series_recommendations(preferences["genre"])
            say(f"Since you're new to {preferences['genre']} and enjoy series, I recommend starting with these accessible series:", exact=True)
        else:
            recommendations = get_beginner_standalone_recommendations(preferences["genre"])
            say(f"For someone new to {preferences['genre']}, these standalone books are perfect introductions:", exact=True)
    elif user_profile["level"] == "intermediate":
        if user_profile["likes_long_books"]:
            recommendations = get_intermediate_long_recommendations(preferences["genre"])
        else:
            recommendations = get_intermediate_short_recommendations(preferences["genre"])
    else:
        # Expert reader
        recommendations = get_expert_recommendations(preferences["genre"])
        say(f"For an expert reader like yourself, these critically acclaimed {preferences['genre']} books offer complex narratives:", exact=True)

    # Display the recommendations
    for i, book in enumerate(recommendations[:3]):
        say(f"{i+1}. '{book['title']}' by {book['author']} - {book['description']}", exact=True)
```
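Because Based follows Python semantics, nested branching like the above can also be expressed as a small dispatch function that keeps the mapping from user profile to recommendation strategy in one place. A Python sketch with stand-in recommendation helpers (all function names here are placeholders, not a Based API):

```python
# Stand-in recommendation helpers; a real script would query a catalog.
def beginner_series(genre):      return [f"{genre} starter series"]
def beginner_standalone(genre):  return [f"{genre} standalone intro"]
def intermediate_long(genre):    return [f"long {genre} epic"]
def intermediate_short(genre):   return [f"short {genre} novel"]
def expert(genre):               return [f"acclaimed {genre} classic"]


def pick_strategy(profile: dict):
    """Map a user profile to a recommendation strategy function."""
    level = profile["level"]
    if level == "beginner":
        return beginner_series if profile.get("prefers_series") else beginner_standalone
    if level == "intermediate":
        return intermediate_long if profile.get("likes_long_books") else intermediate_short
    return expert  # default: expert reader


strategy = pick_strategy({"level": "beginner", "prefers_series": True})
print(strategy("fantasy"))  # ['fantasy starter series']
```

Centralizing the branching this way makes it easy to add new profile dimensions without deepening the `if`/`elif` nesting inside the conversation handler.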