Based Language Fundamentals
Welcome to the Based Language Fundamentals guide. This reference document provides a comprehensive explanation of Based’s core language constructs, their declaration syntax, arguments, and practical usage examples. Understanding these fundamentals will enable you to build sophisticated conversational agents with precision and confidence.
Core Language Constructs
Based is built around a set of specialized constructs designed specifically for conversational AI workflows. These constructs provide a high-level abstraction that makes it easy to build complex interactions without getting lost in implementation details.
The say Function
The say function generates a response from the AI to the user without expecting a reply. It’s typically used to provide information, instructions, or acknowledgments.
Syntax:
say(message, exact=False, model=None)
Parameters:
message (string): The content to be processed and presented to the user
exact (boolean, optional): Controls how the message is processed
True: Outputs exactly what’s provided in the message parameter, verbatim
False (default): Allows the AI to rephrase the message while maintaining its meaning
model (string, optional): Specifies which AI model to use for processing the message (when exact=False)
Return Value:
- Returns the response text, which can be stored in a variable for later use or simply executed for its side effect
Example:
# Greet the user with an exact message
say("Welcome to BookBot! I'm here to help you find and reserve books.", exact=True)
# Generate a dynamic welcome based on intent
say("Generate a friendly welcome for a user looking for book recommendations")
# Store the response for later use
intro = say("Introduce yourself as a helpful assistant", model="anthropic/claude-3.7-sonnet")
The loop, talk, and until Pattern
In Based, the loop, talk, and until constructs form an essential pattern that must be used together. This pattern creates interactive conversation flows that can repeat until specific conditions are met. The talk function is not meant to be used in isolation.
Syntax:
loop:
    response = talk(
        system_prompt,
        first_prompt=True,
        default_values={},
        info={}
    )
until "Description of the completion condition":
    # Validation code that determines if the condition is met
    # The loop continues until this code completes successfully
Parameters for talk:
system_prompt (string): Instruction or prompt that guides the conversation
first_prompt (boolean, optional): Controls conversation initiation
True (default): AI starts by sending the prompt message to the user
False: AI waits for the user to send a message first
default_values (dict, optional): Example values to structure expected responses
info (dict, optional): Additional context for the conversation
The Loop-Until Pattern:
- The loop keyword begins a repeatable conversation block
- The talk function within the loop handles the conversation exchange
- The until clause specifies a condition (in natural language) under which the loop should end
- The code block after until validates whether the condition has been met
- If the condition is met (the code executes successfully), the loop exits
- If the condition is not met, the loop repeats from the beginning
Example:
loop:
    book_preference = talk(
        "What genre of books do you enjoy reading?",
        True,
        {"genre": "mystery", "format": "paperback"}
    )
until "User provides a valid book genre and format":
    preference_data = book_preference.ask(
        question="Extract the user's book genre and preferred format.",
        example={"genre": "mystery", "format": "paperback"}
    )
    # Validate the genre and format
    if preference_data["genre"] not in ["mystery", "sci-fi", "romance", "non-fiction"]:
        print("Invalid genre provided. Re-prompting...")
        continue
    if preference_data["format"] not in ["paperback", "hardcover", "e-book", "audiobook"]:
        print("Invalid format provided. Re-prompting...")
        continue
    # If we reach here, both genre and format are valid
    print("Valid preferences received!")
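The repeated if/continue checks in the until block can be collapsed into a small validation helper. The following is a plain-Python sketch; the VALID_* sets and validate_preferences are illustrative names for this hypothetical flow, not Based built-ins:

```python
# Valid options for this hypothetical flow -- adjust to your own domain
VALID_GENRES = {"mystery", "sci-fi", "romance", "non-fiction"}
VALID_FORMATS = {"paperback", "hardcover", "e-book", "audiobook"}

def validate_preferences(data):
    """Return a list of problems; an empty list means the data passed."""
    problems = []
    if data.get("genre") not in VALID_GENRES:
        problems.append(f"unsupported genre: {data.get('genre')!r}")
    if data.get("format") not in VALID_FORMATS:
        problems.append(f"unsupported format: {data.get('format')!r}")
    return problems
```

Inside the until block you could then write `if validate_preferences(preference_data): continue`, keeping the re-prompt logic in one place.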
Data Processing Methods
Based provides powerful methods to transform and extract information from data objects. These methods can be applied to any data object, not just conversation responses.
The .ask Method
The .ask method extracts structured data from any data object, transforming unstructured content into well-formed data that can be used programmatically. This method can be used with API responses, conversation results, or any other data.
Syntax:
data_object.ask(question, example=None, schema=None, model=None)
Parameters:
question (string): Instruction for extracting specific information from the data
example (dict, optional): Example object showing the expected output format
schema (dict, optional): JSON schema defining the expected structure
model (string, optional): AI model to use for extraction
Return Value:
- Returns structured data according to the example or schema provided
Example:
# Extract structured book preferences from a conversation response
preferences = response.ask(
    question="Extract the user's preferred book genre, format, and any specific authors they mentioned.",
    example={
        "genre": "mystery",
        "format": "audiobook",
        "authors": ["Agatha Christie", "Arthur Conan Doyle"]
    }
)
# Use .ask on an API response
response = requests.get(
    url='https://bookstore-api.example.com/books',
    headers={'Authorization': 'Bearer ' + auth_token},
    params={'genre': 'mystery'}
)
api_results = response.json().ask(
    question="Extract the book titles, authors, and prices from the API response.",
    example={"books": [{"title": "The Mystery", "author": "A. Writer", "price": "$12.99"}]}
)
State Management and Persistence
Based automatically persists variables across conversation turns, allowing you to maintain context throughout a multi-turn conversation. Understanding how state works is essential for building stateful agents.
The state Dictionary
The state dictionary is the primary way to store and retrieve data that should persist across conversation turns. It’s automatically saved and restored between interactions.
Usage:
# Initialize state at the start of your flow
state = {}
# Store user information as you collect it
state["user_name"] = "John"
state["preferences"] = {"language": "en", "notifications": True}
state["order_items"] = []
# Add items to state throughout the conversation
state["order_items"].append({"item": "Taco", "quantity": 2})
# Access state later in the flow
total_items = len(state["order_items"])
say(f"You have {total_items} items in your cart.", exact=True)
Variable Persistence Example
Here’s a complete example showing how variables persist across conversation turns:
# First turn: Initialize and collect data
state = {}
say("Welcome! Let me help you place an order.")
loop:
    response = talk("What would you like to order?", True)
until "user provides their order":
    order = response.ask(
        question="Extract the items the user wants to order",
        example={"items": [{"name": "burger", "quantity": 1}]}
    )
    state["order"] = order
    state["order_step"] = "collecting_details"
    print(f"Order saved to state: {state['order']}")
# On subsequent turns, the state is automatically restored
# You can access state["order"] and continue from where you left off
Variables defined at the top level of your Based code (like state = {}) are automatically persisted across conversation turns. The session maintains the complete execution state, so your agent can pick up right where it left off.
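Because the state dict survives across turns, accumulation logic can be factored into small helpers that always read through state. A sketch in plain Python (add_order_item is a hypothetical helper, not part of Based):

```python
state = {}  # in a real Based session this is restored automatically each turn

def add_order_item(state, name, quantity):
    """Append an item to the persisted order list, creating the list on first use."""
    state.setdefault("order_items", []).append({"name": name, "quantity": quantity})
    return len(state["order_items"])

add_order_item(state, "Taco", 2)
count = add_order_item(state, "Burrito", 1)  # count is now 2
```

Using setdefault means the helper works the same on the first turn (empty state) and on later turns where items already exist.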
Advanced Patterns
Multiple until Statements
Based allows for sophisticated conversation flows by supporting multiple until statements. Each until block represents a different condition and can trigger different handling paths.
# Multi-condition conversation handler example
loop:
    response = talk(
        "Welcome to our customer service bot. What can I help you with today?",
        True
    )
until "User wants to check order status":
    order_query = response.ask(
        question="Is the user asking about checking their order status? Extract order number if mentioned.",
        example={"is_order_status": True, "order_number": "ABC123"}
    )
    if order_query["is_order_status"]:
        # Handle order status request
        if "order_number" in order_query and order_query["order_number"]:
            order_details = get_order_details(order_query["order_number"])
            say(f"Your order {order_query['order_number']} is {order_details['status']}. Expected delivery: {order_details['delivery_date']}", exact=True)
        else:
            say("I'd be happy to check your order status. Could you please provide your order number?", exact=True)
        break
until "User wants to make a return":
    return_query = response.ask(
        question="Is the user asking about making a return? Extract product details if mentioned.",
        example={"is_return": True, "product": "Wireless Headphones"}
    )
    if return_query["is_return"]:
        # Handle return request
        say("I can help you process a return. Let me guide you through our return policy and steps.", exact=True)
        # Additional return handling logic
        break
until "User wants to speak to human agent":
    agent_query = response.ask(
        question="Does the user want to speak to a human agent?",
        example={"wants_human": True}
    )
    if agent_query["wants_human"]:
        say("I'll connect you with a customer service representative right away. Please hold for a moment.", exact=True)
        transfer_to_agent()
        break
Tool Schema until Conditions
Beyond simple string conditions, Based supports tool schema until conditions that allow you to define structured data extraction directly in the until clause. This enables the LLM to extract typed parameters when a condition matches, providing structured data for your flow logic.
Tool schemas can be defined using either a simplified format or the full OpenAI tool format:
Simplified Format:
# Define a tool schema for collecting user information
get_user_info = {
    "name": "get_user_info",
    "description": "Get user contact information",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "User's name"},
            "email": {"type": "string", "description": "User's email address"},
            "phone": {"type": "string", "description": "User's phone number"}
        },
        "required": ["name"]
    }
}
Full OpenAI Tool Format:
# Define using full OpenAI tool format
schedule_appointment = {
    "type": "function",
    "function": {
        "name": "schedule_appointment",
        "description": "Schedule an appointment for the user",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {"type": "string", "description": "Appointment date (YYYY-MM-DD)"},
                "time": {"type": "string", "description": "Appointment time (HH:MM)"},
                "notes": {"type": "string", "description": "Optional appointment notes"}
            },
            "required": ["date", "time"]
        }
    }
}
Once you’ve defined a tool schema, you can use it in an until clause. The AI will match user intent to the tool’s description and extract the specified parameters.
Basic Syntax (without binding):
until tool_schema_variable:
    # The condition matched, but parameters aren't captured
    say("Action triggered!")
With Parameter Binding (using as):
until tool_schema_variable as extracted_params:
    # extracted_params contains the parameters extracted by the LLM
    print(extracted_params)  # {"name": "John", "email": "[email protected]"}
Complete Example: Mixed Conditions
You can mix string conditions with tool schema conditions in the same loop. The first matching condition is triggered:
# Define tool schemas
get_contact_info = {
    "name": "get_contact_info",
    "description": "Collect user's contact information when they provide their name and email",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "User's full name"},
            "email": {"type": "string", "description": "User's email address"}
        },
        "required": ["name", "email"]
    }
}
book_meeting = {
    "name": "book_meeting",
    "description": "Book a meeting when user wants to schedule a call or appointment",
    "parameters": {
        "type": "object",
        "properties": {
            "date": {"type": "string", "description": "Meeting date"},
            "time": {"type": "string", "description": "Meeting time"},
            "topic": {"type": "string", "description": "Meeting topic or agenda"}
        },
        "required": ["date", "time"]
    }
}
# Use in conversation flow
say("Hello! I can help you with contact info or scheduling meetings.")
loop:
    response = talk("How can I help you today?", True)
until "user wants to end the conversation":
    say("Goodbye! Have a great day.")
until get_contact_info as contact:
    # contact contains: {"name": "...", "email": "..."}
    say(f"Thanks {contact['name']}! I've noted your email as {contact['email']}.", exact=True)
until book_meeting as meeting:
    # meeting contains: {"date": "...", "time": "...", "topic": "..."}
    say(f"Meeting booked for {meeting['date']} at {meeting['time']}.", exact=True)
    if meeting.get("topic"):
        say(f"Topic: {meeting['topic']}", exact=True)
Tool schema conditions provide type-safe parameter extraction. The LLM will attempt to extract all specified parameters based on the conversation context. Required parameters should be marked in the schema’s required array.
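This distinction matters when you consume the extracted parameters: required keys are safe to index directly, while optional ones should be read defensively. A plain-Python sketch, where the meeting dict stands in for what the LLM might bind:

```python
# Stand-in for LLM-extracted parameters: optional "topic" was omitted
meeting = {"date": "2024-06-01", "time": "14:00"}

# Required keys ("date", "time") can be indexed directly
summary = f"Meeting on {meeting['date']} at {meeting['time']}"
# Optional keys should use .get() to avoid a KeyError when absent
if meeting.get("topic"):
    summary += f" about {meeting['topic']}"
```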
Conditional Flow Control
Based scripts can implement conditional flow control using standard Python syntax, allowing for dynamic conversation paths based on user responses.
# Determine recommendation approach based on user expertise and preferences
# (assumes `preferences` with a "genre" key was collected in an earlier step)
loop:
    expertise_response = talk("How familiar are you with this book genre?", True)
until "User indicates their expertise level and reading preferences":
    user_profile = expertise_response.ask(
        question="Determine the user's expertise level and reading preferences.",
        example={
            "level": "beginner",
            "prefers_series": True,
            "likes_long_books": False
        }
    )
    # Create a personalized recommendation strategy
    if user_profile["level"] == "beginner":
        if user_profile["prefers_series"]:
            recommendations = get_beginner_series_recommendations(preferences["genre"])
            say(f"Since you're new to {preferences['genre']} and enjoy series, I recommend starting with these accessible series:", exact=True)
        else:
            recommendations = get_beginner_standalone_recommendations(preferences["genre"])
            say(f"For someone new to {preferences['genre']}, these standalone books are perfect introductions:", exact=True)
    elif user_profile["level"] == "intermediate":
        if user_profile["likes_long_books"]:
            recommendations = get_intermediate_long_recommendations(preferences["genre"])
        else:
            recommendations = get_intermediate_short_recommendations(preferences["genre"])
    else:
        # Expert reader
        recommendations = get_expert_recommendations(preferences["genre"])
        say(f"For an expert reader like yourself, these critically acclaimed {preferences['genre']} books offer complex narratives:", exact=True)
    # Display the recommendations
    for i, book in enumerate(recommendations[:3]):
        say(f"{i+1}. '{book['title']}' by {book['author']} - {book['description']}", exact=True)
Platform-Specific Functions
Based supports different deployment platforms (chat, voice, email, SMS) and provides specialized functions for each platform. These functions allow you to take advantage of platform-specific capabilities.
Voice Deployment Functions
When your Based agent is deployed for voice conversations, you can use these special functions to control call flow:
transfer(phone_number, options?)
Transfers the current call to another phone number. Optionally supports dialing extensions after the call connects.
Syntax:
transfer(phone_number)
transfer(phone_number, extension)
transfer(phone_number, options)
Parameters:
phone_number (string): The destination phone number to transfer to
extension (string, optional): Simple extension digits to dial after the call connects
options (dict, optional): Advanced transfer options with the following keys:
extension (string): DTMF digits to send after the call connects
pauseSeconds (number): Seconds to wait before sending digits (default: 1 second)
Examples:
# Basic transfer - transfer to customer support
if user_request["needs_human_support"]:
    say("I'll transfer you to our customer support team right away.", exact=True)
    transfer("+1-800-123-4567")
# Transfer with extension (simple string format)
# Waits 1 second (default), then dials extension 123
say("Let me transfer you to Julie in the Finance department.", exact=True)
transfer("5302321272", "271")
# Transfer with extension and custom pause time (dict format)
# Waits 2 seconds before dialing the extension (useful for slower phone systems)
say("Connecting you to the service department now.", exact=True)
transfer("5303195426", {"extension": "221", "pauseSeconds": 2})
# Transfer with no pause (immediate extension dialing)
transfer("5302323297", {"extension": "123", "pauseSeconds": 0})
When transferring to extensions, the pauseSeconds parameter controls how long to wait after the call connects before dialing the extension digits. The default of 1 second works for most phone systems, but you may need to increase this for systems that have longer greeting messages or slower IVR responses.
end_call()
Ends the current call immediately. Use this to gracefully terminate a voice conversation after completing the interaction.
Syntax:
end_call()
Examples:
# End call after completing a transaction
say("Thank you for your order! Your confirmation number is ABC123. Have a great day!", exact=True)
end_call()
# End call when user requests to hang up
if user_request["wants_to_end_call"]:
    say("Thank you for calling. Goodbye!", exact=True)
    end_call()
# End call after transferring to voicemail or completing a task
say("I've sent the information to your email. Is there anything else I can help with?", exact=True)
loop:
    response = talk("", False)  # Wait for user response
until "User confirms they're done":
    done = response.ask(
        question="Is the user indicating they're done and want to end the call?",
        example={"is_done": True}
    )
    if done["is_done"]:
        say("Great, have a wonderful day!", exact=True)
        end_call()
Built-in Utility Functions
Based provides built-in utility functions that are available in all deployments for common operations like debugging, notifications, and more.
Print Line Debugging
The print function works like Python’s standard print, but outputs are captured and made available in the session trace for debugging purposes. This is invaluable for understanding flow execution and troubleshooting issues.
Syntax:
print(*values)
Usage:
# Print simple messages for debugging
print("Starting order processing...")
# Print variable values to inspect state
order_data = response.ask(
    question="Extract order details",
    example={"items": [], "total": 0}
)
print("Extracted order:", order_data)
# Print within conditional logic to trace execution path
if order_data["total"] > 100:
    print("High-value order detected, applying discount")
    discount = order_data["total"] * 0.1
    print(f"Discount amount: ${discount}")
All print outputs appear in the session trace view, making it easy to debug conversation flows without interrupting the user experience. Print statements do not send messages to the user—they’re purely for developer debugging.
Sending SMS Messages with send_sms
The send_sms function allows you to send SMS messages programmatically from within your Based flow. This is useful for sending confirmations, notifications, or follow-up messages.
Syntax:
result = await send_sms(
    to="recipient_phone_number",
    content="message_content",
    from_number="your_phone_number"
)
Parameters:
to (string, required): The recipient’s phone number in E.164 format (e.g., "+12025551234")
content (string, required): The SMS message content to send
from_number (string, required): Your phone number from your Brainbase phone number library
Return Value:
Returns an SMSResult object with the following properties:
| Property | Type | Description |
|---|---|---|
| success | boolean | True if SMS was sent successfully |
| status | string | Status code: "sent", "failed", "skipped", or "error" |
| message_sid | string | Twilio message SID (if sent successfully) |
| to | string | Recipient phone number |
| from_number | string | Sender phone number |
| error | string | Error message (if failed) |
| error_code | string | Provider error code (if applicable) |
Example:
# Send an order confirmation SMS
result = await send_sms(
    to=customer_phone,
    content=f"Your order #{order_id} has been confirmed! Estimated delivery: {delivery_time}",
    from_number="+15551234567"
)
if result.success:
    print(f"SMS sent successfully: {result.message_sid}")
    say("I've sent a confirmation to your phone!", exact=True)
else:
    print(f"SMS failed: {result.error}")
    say("I wasn't able to send an SMS confirmation, but your order is still confirmed.", exact=True)
Handling Failures Gracefully:
The send_sms function is designed to never interrupt your flow. All errors are captured in the result object, allowing you to handle failures gracefully:
# Example: Appointment reminder with fallback
result = await send_sms(
    to=patient_phone,
    content=f"Reminder: Your appointment with Dr. {doctor_name} is tomorrow at {appointment_time}",
    from_number="+15559876543"
)
if result.success:
    say("I've sent you an SMS reminder for your appointment.", exact=True)
elif result.status == "skipped":
    # SMS was skipped (e.g., A2P verification issue)
    print(f"SMS skipped: {result.error}")
    say("I'll make sure to remind you about your appointment.", exact=True)
else:
    # SMS failed for another reason
    print(f"SMS error: {result.error}")
    say("Your appointment is confirmed. Please make a note of the time.", exact=True)
The from_number must be a phone number registered in your Brainbase phone number library with proper A2P (Application-to-Person) verification for SMS delivery compliance.
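If caller-supplied numbers can arrive in mixed formats, a small normalizer helps meet the E.164 requirement before calling send_sms. A best-effort sketch that assumes US-style ten-digit numbers; to_e164 is a hypothetical helper, not a Based built-in:

```python
import re

def to_e164(raw, default_country_code="1"):
    """Best-effort normalization of a US-style number to E.164.
    Hypothetical helper -- validate against your provider's rules."""
    digits = re.sub(r"\D", "", raw)  # keep only the digits
    if raw.strip().startswith("+"):
        return "+" + digits
    if len(digits) == 10:  # bare national number, e.g. "2025551234"
        return "+" + default_country_code + digits
    if len(digits) == 11 and digits.startswith(default_country_code):
        return "+" + digits
    raise ValueError(f"cannot normalize {raw!r} to E.164")
```

You would then pass `to=to_e164(customer_phone)` and catch ValueError to re-prompt for a valid number.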
Making HTTP Requests
Based provides the standard Python requests module for making HTTP requests to external APIs. This gives you full compatibility with the widely-used requests library.
Examples:
# GET request
response = requests.get(
    url="https://api.example.com/users/123",
    headers={"Authorization": "Bearer your-token"}
)
if response.ok:
    user_data = response.json()
    print(f"User: {user_data['name']}")
# POST request with JSON body
response = requests.post(
    url="https://api.example.com/orders",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your-token"
    },
    json={
        "items": order_items,
        "customer_id": customer_id
    }
)
if response.ok:
    order = response.json()
    say(f"Order {order['id']} created successfully!", exact=True)
else:
    print(f"Request failed: {response.status_code} - {response.text}")
# Other HTTP methods
response = requests.put(url, headers=headers, json=data)
response = requests.delete(url, headers=headers)
response = requests.patch(url, headers=headers, json=data)
Using .ask() with API Responses:
You can use the .ask() method on response data to extract structured information:
response = requests.get("https://api.example.com/products")
products = response.json()
# Extract specific fields using .ask()
summary = products.ask(
    question="Extract the top 3 products by price with name and price",
    example={"products": [{"name": "Widget", "price": 29.99}]}
)
All HTTP requests are automatically logged in the session trace for debugging and observability. You can see request/response details, timing, and any errors in the trace view.
Legacy API Utility (Deprecated)
The api utility is deprecated. Please use the standard requests module instead.
The legacy api.get_req() and api.post_req() methods are still supported for backwards compatibility:
# Deprecated - use requests.get() instead
result = api.get_req(url='https://api.example.com/data', headers={...})
# Deprecated - use requests.post() instead
result = api.post_req(url='https://api.example.com/data', headers={...}, body={...})
Third-Party Integrations
Based provides an integrations client for connecting to third-party services configured in your Brainbase workspace.
Usage:
# Call an integration action (app_name.action_name pattern)
result = await integrations.slack.send_message(
    channel="#notifications",
    text=f"New order received: {order_id}"
)
# Send email via Gmail
result = await integrations.gmail.send_email(
    to="[email protected]",
    subject="Order Confirmation",
    body=f"Your order #{order_id} has been confirmed!"
)
Integrations must be connected in your Brainbase workspace before they can be used in Based flows. See the Integrations documentation for setup instructions.
Full Example: Book Recommendation Agent
Here’s a complete example that demonstrates the various language constructs working together, including multiple until statements:
state = {}
meta_prompt = "You're a book recommendation assistant helping users find their next great read."
res = say("Hello! I'm BookBot, your personal book recommendation assistant.", exact=True)
# Introduce the service and set expectations
say("I can help you find books based on your preferences, including genre, format, and reading level.")
# Collect initial user preferences with multiple until paths
loop:
    initial_response = talk(
        f"{meta_prompt} Ask the user what they're looking for today, offering to recommend books, find new releases, or check book availability.",
        True
    )
until "User wants book recommendations":
    recommendation_request = initial_response.ask(
        question="Is the user asking for book recommendations?",
        example={"wants_recommendations": True}
    )
    if recommendation_request["wants_recommendations"]:
        # Handle recommendation path
        state["intent"] = "recommendations"
        # Collect genre preferences
        loop:
            genre_response = talk(
                "What genre of books do you enjoy reading?",
                True,
                {"genre": "fantasy", "format": "e-book"}
            )
        until "User provides valid genre and format preferences":
            preferences = genre_response.ask(
                question="Extract the user's preferred book genre and format.",
                example={"genre": "fantasy", "format": "e-book"}
            )
            if preferences["genre"] and preferences["format"]:
                state["preferences"] = preferences
                break
        # Generate recommendations
        response = requests.get(
            url='https://bookstore-api.example.com/recommendations',
            params=state["preferences"]
        )
        recommendations = response.json().ask(
            question="Extract the top 3 book recommendations with title, author, and description.",
            example={"books": [{"title": "Book Title", "author": "Author Name", "description": "Brief description"}]}
        )
        # Present recommendations
        say(f"Based on your interest in {state['preferences']['genre']} books, here are 3 titles I think you'll love:", exact=True)
        for i, book in enumerate(recommendations["books"]):
            say(f"{i+1}. '{book['title']}' by {book['author']}: {book['description']}", exact=True)
        break
until "User wants to check new releases":
    new_release_request = initial_response.ask(
        question="Is the user asking about new or upcoming book releases?",
        example={"wants_new_releases": True, "genre": "thriller"}
    )
    if new_release_request["wants_new_releases"]:
        # Handle new releases path
        state["intent"] = "new_releases"
        genre = new_release_request.get("genre", "")
        # Get new releases, optionally filtered by genre
        response = requests.get(
            url='https://bookstore-api.example.com/new-releases',
            params={"genre": genre} if genre else {}
        )
        new_releases = response.json().ask(
            question="Extract the latest 5 book releases with title, author, and release date.",
            example={"books": [{"title": "New Book", "author": "Author Name", "release_date": "2023-10-15"}]}
        )
        # Present new releases
        header = f"Here are the latest releases in {genre}:" if genre else "Here are the latest book releases:"
        say(header, exact=True)
        for i, book in enumerate(new_releases["books"]):
            say(f"{i+1}. '{book['title']}' by {book['author']} - Released: {book['release_date']}", exact=True)
        break
until "User wants to check book availability":
    availability_request = initial_response.ask(
        question="Is the user asking about checking if a specific book is available?",
        example={"checking_availability": True, "book_title": "The Great Novel", "author": "Famous Writer"}
    )
    if availability_request["checking_availability"]:
        # Handle availability check path
        state["intent"] = "check_availability"
        book_info = {}
        if "book_title" in availability_request:
            book_info["title"] = availability_request["book_title"]
        if "author" in availability_request:
            book_info["author"] = availability_request["author"]
        # If we have complete information, check availability
        if "title" in book_info and "author" in book_info:
            availability = check_book_availability(book_info["title"], book_info["author"])
            if availability["available"]:
                say(f"Good news! '{book_info['title']}' by {book_info['author']} is available in these formats: {', '.join(availability['formats'])}", exact=True)
            else:
                say(f"I'm sorry, '{book_info['title']}' by {book_info['author']} is currently unavailable. Would you like me to notify you when it becomes available?", exact=True)
        else:
            # Need more information
            loop:
                book_details_response = talk(
                    "I'd be happy to check book availability. Could you please provide the book title and author?",
                    True
                )
            until "User provides complete book details":
                details = book_details_response.ask(
                    question="Extract the book title and author from the user's response.",
                    example={"title": "The Great Novel", "author": "Famous Writer"}
                )
                if "title" in details and "author" in details:
                    availability = check_book_availability(details["title"], details["author"])
                    if availability["available"]:
                        say(f"Good news! '{details['title']}' by {details['author']} is available in these formats: {', '.join(availability['formats'])}", exact=True)
                    else:
                        say(f"I'm sorry, '{details['title']}' by {details['author']} is currently unavailable. Would you like me to notify you when it becomes available?", exact=True)
                    break
        break
# Conversation wrap-up
say("Is there anything else I can help you with today?", exact=True)
Conclusion
The Based language provides a powerful yet intuitive framework for building conversational agents. By mastering the core constructs—particularly the essential loop-talk-until pattern—you can create sophisticated conversation flows that handle complex interactions while maintaining readability and maintainability.
Remember that Based is designed to be declarative, allowing you to focus on the “what” rather than the “how” of conversational AI. This approach dramatically reduces the amount of code needed to create powerful agents while increasing reliability and ease of maintenance.
The combination of the core language constructs with platform-specific functions allows you to build agents that take full advantage of each deployment platform’s unique capabilities while maintaining a consistent codebase and user experience.