
How AI Chatbots Work: Complete Guide to Their Structure, Intelligence, and Deployment

Here we share a practical, detailed guide to AI chatbots, exploring how they process language, understand user intent, manage memory, generate responses, and integrate with real-world systems across web, mobile, and voice platforms.



Introduction

AI chatbots are programs that chat with people using everyday language; they save time by giving quick answers and can work all day without breaks.
They understand what we say by breaking our words into pieces, spotting what we want, and picking out details like dates or places; over time they learn from many chats and get better at helping us.
A chatbot’s design has three parts: the screen we see, a middle part that moves messages around, and smart language models that create the replies; keeping these parts separate makes updates easier.
To plan a good chat, builders draw simple maps that show each question and answer, add prompts to gather missing info, and include safety steps in case the bot gets confused.
The bot learns from real conversations that humans label with goals and key facts; clear, private, and balanced data helps the bot stay fair and accurate.
Finally, chatbots link to company systems to look up orders or book services, and they can run on websites, phone apps, or voice assistants so people can type, tap, or talk wherever they prefer.

Here we will cover the following topics:

  1. Definition and purpose of AI chatbots;

  2. Core technologies: NLP, NLU, machine learning;

  3. Architecture overview: front-end, middleware, models;

  4. Designing conversation flows;

  5. Training data collection and annotation;

  6. Intent and entity extraction;

  7. Context and memory management;

  8. Response generation techniques: rule-based, retrieval, generative;

  9. Multimodal inputs and outputs;

  10. Integration with external APIs and databases;

  11. Deployment channels: web, mobile, voice assistants.


_____________________

1. Definition and Purpose of AI Chatbots

An AI chatbot is a software application designed to simulate human conversation using artificial intelligence techniques.

It allows users to interact with digital systems through natural language, either via text or speech.

These chatbots are built to understand, process, and respond to user inputs in a conversational way.


Unlike rule-based bots that rely on fixed commands, AI chatbots use machine learning and natural language processing (NLP) to adapt and respond more flexibly.

The main goal of an AI chatbot is to assist, automate, or enhance communication between humans and machines.

They are widely used in customer service, business automation, education, healthcare, and productivity tools.

AI chatbots can function as virtual assistants, helping users schedule appointments, look up information, or carry out tasks like making bookings.


They can also act as internal support tools, helping employees access company knowledge or complete internal processes faster.

The purpose of these chatbots can be broken down into several categories:

Efficiency: Reducing the time and effort required to access services or data;

Scalability: Handling thousands of queries simultaneously without added human effort;

Availability: Providing 24/7 assistance, independent of human working hours;

Consistency: Ensuring uniform responses, minimizing errors or bias from human staff.


AI chatbots are also used for personalized experiences, tailoring responses based on user history or preferences.

This makes them valuable tools for marketing, sales, and customer engagement, where relevance and speed are crucial.

AI chatbots are a bridge between humans and digital systems, offering a more natural, fast, and effective way to interact with technology.


2. Core Technologies: Natural Language Processing (NLP), Natural Language Understanding (NLU), Machine Learning

At the heart of every AI chatbot lie core technologies that enable it to understand and respond to human language.

The most essential components are Natural Language Processing (NLP), Natural Language Understanding (NLU), and Machine Learning (ML).

Natural Language Processing (NLP) is the field of artificial intelligence that focuses on enabling machines to read, interpret, and generate human language.


It involves several sub-processes such as tokenization, part-of-speech tagging, syntactic parsing, and sentiment analysis.

These steps help the chatbot break down a sentence into understandable parts and extract relevant information.

Natural Language Understanding (NLU) is a subfield of NLP that deals specifically with interpreting the meaning behind user input.

It enables the chatbot to identify intents (what the user wants) and entities (specific data like names, dates, or places).


For example, in the sentence “Book a flight to Paris next Monday,” the intent is booking and the entities are location (Paris) and date (next Monday).

NLU allows the chatbot to move beyond keywords and truly grasp the user's objective, even when phrased in different ways.
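
As a concrete illustration, entity extraction of this kind can be sketched with spaCy's pretrained English pipeline; intent detection would come from a separate classifier (see section 6), and the printed labels are spaCy's own, not specific to any chatbot product.

```python
# Minimal entity-extraction sketch with spaCy's pretrained pipeline.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Book a flight to Paris next Monday")

for ent in doc.ents:
    print(ent.text, ent.label_)  # expected output like: "Paris GPE", "next Monday DATE"
```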

Machine Learning (ML) enables chatbots to improve over time by learning from past interactions and data.

With ML, the chatbot can identify patterns in user behavior, predict responses, and adapt to new types of queries.

There are two main types of learning used in AI chatbots: supervised learning (where the bot is trained on labeled data) and reinforcement learning (where the bot improves based on feedback or rewards).

These technologies work together to create a chatbot that is not only responsive but also intelligent, adaptable, and context-aware.


In many advanced systems, these capabilities are powered by large language models such as GPT, which use deep learning and vast datasets to generate human-like responses.

The result is a system that can understand nuance, context, and intent, making conversations smoother and more natural.


3. Architecture Overview: Front-End, Middleware, Models

An AI chatbot’s architecture is typically divided into three major layers: the Front-End, the Middleware, and the Model Layer.

The Front-End layer encompasses the user-facing interfaces such as web widgets, mobile apps, messaging platforms, and voice assistants.

It handles input capture—collecting text or speech—and output rendering, displaying responses or synthesizing speech back to the user.


Channel adapters in this layer normalize incoming messages into a common internal format, ensuring consistent processing regardless of source.

Customizable UI components control branding, accessibility features, and real-time typing indicators, creating a seamless conversational experience.
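
A channel adapter of the kind described above can be sketched as a small normalization layer; the field names below are illustrative assumptions, not a standard schema.

```python
# Hedged sketch of a channel adapter; field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InboundMessage:
    """Common internal format shared by every channel."""
    user_id: str
    channel: str                 # "web", "mobile", "voice", ...
    text: str
    timestamp: datetime
    metadata: dict = field(default_factory=dict)

def from_web_widget(payload: dict) -> InboundMessage:
    """Normalize a hypothetical web-widget payload into the common format."""
    return InboundMessage(
        user_id=payload["sessionId"],
        channel="web",
        text=payload["message"],
        timestamp=datetime.now(timezone.utc),
        metadata={"page": payload.get("pageUrl")},
    )
```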

The Middleware layer acts as the orchestrator, routing requests between the front-end and the underlying language models.

Key subcomponents include an NLU engine for intent and entity extraction, a dialogue manager for state tracking, and a business logic handler for workflow execution.

It can also integrate a policy manager that enforces security, privacy, and compliance rules before any data reaches external services.


API gateways within the middleware abstract calls to third-party services—CRMs, ERPs, knowledge bases—so the chatbot can fetch data or trigger actions.

A state store or session memory persists context across turns, enabling multi-step conversations and personalized interactions.

The Model layer hosts the core language models, ranging from fine-tuned transformer-based models (e.g., GPT variants) to smaller domain-specific classifiers.

Here, components such as retrieval-augmented generation (RAG) combine vector search with generative models to inject up-to-date knowledge into responses.
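
The RAG pattern itself can be summarized in a few lines; embed(), vector_store.search(), and llm.generate() below are hypothetical stand-ins for a real embedding model, vector database, and language model.

```python
# Minimal RAG sketch; embed(), vector_store.search(), and llm.generate()
# are hypothetical stand-ins, not a specific library's API.
def answer_with_rag(question: str, embed, vector_store, llm, k: int = 3) -> str:
    query_vec = embed(question)                         # embed the user question
    passages = vector_store.search(query_vec, top_k=k)  # nearest-neighbor retrieval
    context = "\n".join(p.text for p in passages)       # assemble retrieved knowledge
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm.generate(prompt)                         # grounded generation
```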


Scalable deployment options include serverless functions, container clusters, or dedicated GPU nodes, each selected for latency, cost, and throughput requirements.

Model monitoring pipelines track performance metrics—latency, token usage, hallucination rates—and feed logs back to training workflows for continuous improvement.

Together, these three layers create a modular architecture that separates user experience, conversation logic, and intelligence, making the chatbot maintainable, extensible, and resilient.


4. Designing Conversation Flows

A conversation flow is the structured path a chatbot follows to guide users from their initial query to a successful outcome.

Effective design begins with user research, gathering real questions, pain points, and preferred phrasing to ensure the flow mirrors authentic dialogue.

Conversation architects map intents as high-level goals, then branch these into sub-intents and contextual prompts to cover alternative phrasings and edge cases.


Each turn in the flow pairs a system prompt with expected user utterances, creating clear if-then transitions that keep dialogues coherent.

A visual tool such as a flowchart or state diagram helps stakeholders review logic, spot dead ends, and enforce a single source of truth for conversation logic.

Designers weave in slot-filling steps, prompting for missing entities—dates, amounts, locations—while maintaining a natural tone and offering examples to reduce friction.

To handle ambiguity, flows include clarification nodes that ask follow-up questions when the intent confidence falls below a defined threshold.
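
One way to express such a clarification node is a simple confidence gate; the thresholds below are illustrative and would normally be calibrated on validation data.

```python
# Confidence-gate sketch; thresholds are illustrative, not calibrated values.
def route_turn(intent: str, confidence: float) -> str:
    if confidence >= 0.7:
        return f"handle:{intent}"   # proceed down the flow mapped to this intent
    if confidence >= 0.4:
        return "clarify"            # ask a follow-up question to disambiguate
    return "fallback"               # apologize, suggest options, or escalate
```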


Robust flows plan for fallback scenarios, providing graceful apologies, helpful suggestions, or escalation to human agents when the chatbot cannot satisfy the request.

Context management rules decide when to retain, override, or forget previous information, preventing stale data from polluting new tasks.

Personalization layers insert user-specific data—names, preferences, past orders—by calling backend APIs, making responses feel tailored without sacrificing privacy.


Multimodal considerations adapt phrasing and pacing for channels such as voice, where brevity, re-prompting, and confirmation prompts are critical for usability.

Finally, each flow is annotated with metrics—completion rates, dropout points, average turns—feeding analytics dashboards for iterative refinement and A/B testing.


5. Training Data Collection and Annotation

High-quality training data is the foundation that determines how accurately a chatbot will recognize intents and generate helpful responses.

Teams start by gathering real user transcripts, support tickets, emails, and FAQ documents to mirror authentic language patterns and domain jargon.

Synthetic utterances are then crafted with paraphrasing techniques to broaden coverage while avoiding duplication that can skew model learning.


A data schema defines core attributes—intent label, entity spans, context tags, channel source—ensuring every sample follows consistent structure.
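
A single training sample under such a schema might look like the following; the field names and the book_flight label are illustrative assumptions, not a standard format.

```python
# One illustrative training sample; field names are assumptions, not a standard.
sample = {
    "text": "Book a flight to Paris next Monday",
    "intent": "book_flight",
    "entities": [
        {"start": 17, "end": 22, "value": "Paris", "type": "location"},
        {"start": 23, "end": 34, "value": "next Monday", "type": "date"},
    ],
    "context_tags": ["travel"],
    "channel": "web",
}
```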

Privacy policies mandate anonymization pipelines that strip or mask personally identifiable information before data reaches annotation tools.

Balanced datasets guard against bias by including diverse demographics, dialects, and edge cases, preventing the model from overfitting to dominant groups.

For each utterance, professional annotators assign intent codes and outline entity boundaries using specialized web interfaces with span-highlighting support.

Annotation guidelines provide decision trees and concrete examples that standardize judgments on ambiguous phrases, thereby increasing inter-annotator agreement scores.


Quality assurance employs dual-pass review cycles where a second annotator verifies labels, and arbitration resolves conflicts to raise overall precision.

Metrics such as Cohen’s κ and F1-score on a sampled gold set quantify labeling reliability and uncover systematic misunderstandings in guidelines.

Data augmentation techniques—slot value swapping, synonym injection, back translation—expand minority intents, improving recall without ballooning annotation budgets.

Version control systems store dataset snapshots with change logs, making it easy to trace how model performance correlates with specific data revisions.


Regular data refreshes pull in new utterances from production logs, capturing emerging terms and seasonal topics that were absent during initial collection.

Ethical reviews examine the corpus for toxic language, hate speech, or sensitive content, flagging samples that need exclusion or special handling.

Legal teams check that all data sources comply with copyright, terms of service, and regional data protection laws, safeguarding against downstream liability.

Final datasets are split into train, validation, and test partitions using stratified sampling that preserves intent distribution across splits.


Each partition is exported in machine-readable formats such as JSONL or Apache Arrow, ready for ingestion by model training pipelines.
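
A stratified split of this kind is commonly done with scikit-learn; the 80/10/10 ratio below is illustrative.

```python
# Stratified 80/10/10 split with scikit-learn; the ratios are illustrative.
from sklearn.model_selection import train_test_split

def split_dataset(samples, labels):
    train_x, rest_x, train_y, rest_y = train_test_split(
        samples, labels, test_size=0.2, stratify=labels, random_state=42)
    val_x, test_x, val_y, test_y = train_test_split(
        rest_x, rest_y, test_size=0.5, stratify=rest_y, random_state=42)
    return (train_x, train_y), (val_x, val_y), (test_x, test_y)
```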

Well-curated and thoroughly annotated data delivers chatbots that are robust, unbiased, and contextually aware, setting the stage for reliable conversational AI deployments.


6. Intent and Entity Extraction

Intent and entity extraction is the process by which a chatbot detects what the user wants and identifies key data points embedded in the utterance.

An intent represents the user’s goal, such as booking a flight or checking an order status.

An entity captures specific details that refine that goal, like dates, locations, monetary amounts, or product names.

Modern systems begin with a tokenization phase, splitting the sentence into word pieces or sub-words that preserve punctuation and contractions.


A vectorization step converts tokens into numerical embeddings using pre-trained language models, providing rich contextual signals for downstream classification.

The intent classifier applies algorithms ranging from logistic regression and support vector machines to deep neural networks like transformers fine-tuned on domain data.

During training, utterances are labeled with intent tags, and the model learns to map high-dimensional embeddings to probability distributions over the intent set.

Confidence thresholds are calibrated on validation data so that low-certainty predictions can trigger clarification questions or fallback flows.
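
A minimal version of such a classifier can be sketched with scikit-learn, using TF-IDF features and logistic regression as a simple stand-in for the embedding-based models described above; predict_proba supplies the confidence scores, and the toy data is illustrative.

```python
# Minimal intent classifier; TF-IDF + logistic regression stand in for the
# embedding-based models described above, and the toy data is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = ["book a flight to paris", "check my order status", "cancel my booking"]
intents = ["book_flight", "order_status", "cancel_booking"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)

probs = clf.predict_proba(["I want to fly to Rome"])[0]
print(dict(zip(clf.classes_, probs.round(2))))  # confidence score per intent
```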

Entity recognition often employs sequence labeling models such as CRF-enhanced BiLSTMs or token-level transformers with a BIO tagging scheme.

Custom gazetteers and regular expressions supplement machine learning by catching well-structured entities like phone numbers, emails, and SKU codes.
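
The regular-expression side of this can be sketched directly; the patterns below are simplified for illustration, and the SKU format is hypothetical.

```python
# Simplified regex patterns for structured entities; the SKU format is hypothetical.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "sku": re.compile(r"\b[A-Z]{3}-\d{4,}\b"),
}

def extract_structured(text: str) -> dict:
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}

print(extract_structured("Email ana@example.com about SKU ABC-12345"))
```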

In multilingual deployments, cross-lingual embeddings enable zero-shot or few-shot transfer, reducing annotation costs for each additional locale.


Extracted entities feed into a slot-filling framework, where each slot is initialized, confirmed, or overridden based on dialogue context and user corrections.

Entity post-processing routines handle normalization, converting free-form inputs such as “next Friday” into ISO timestamps or mapping synonyms to canonical IDs.

A composite intent mechanism allows the chatbot to detect user requests containing multiple goals in a single sentence and split them into atomic actions.

For complex domains, ontology-based mapping aligns extracted entities with backend schemas, ensuring consistent data exchange between the chatbot and enterprise systems.

Real-time co-reference resolution links pronouns or shortened phrases to previously mentioned entities, maintaining coherence across turns.


Metrics like intent accuracy, entity F1-score, and slot-filling completion rate appear in monitoring dashboards, guiding iterative dataset expansion.

Continuous learning loops incorporate human-in-the-loop review queues, where misclassified intents or missed entities are corrected and re-ingested for model fine-tuning.

Robust intent and entity extraction delivers chatbots that understand user goals precisely, minimize follow-up questions, and streamline task completion.


7. Context and Memory Management

Context management ensures the chatbot maintains awareness of conversational history so it can deliver coherent, relevant answers across multiple turns.

Each dialogue is tracked in a session state, a structured data object that records intents, filled slots, and system prompts for the current conversation.

Short-term memory stores recent user utterances verbatim or in summarized form, enabling the bot to resolve pronouns and elliptical questions such as “What about tomorrow?”.

Long-term memory contains lasting facts—user preferences, profile attributes, previous orders—that persist beyond sessions and personalize future interactions.

A hierarchical context stack organizes data by scope, distinguishing between turn-level variables, session-level variables, and user-profile variables to prevent leakage of information across users.
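
A minimal sketch of such a context stack, assuming three scopes and a most-specific-wins lookup order:

```python
# Scoped context-stack sketch; scope names and lookup order are assumptions.
class ContextStack:
    def __init__(self):
        self.scopes = {"turn": {}, "session": {}, "profile": {}}

    def set(self, scope: str, key: str, value) -> None:
        self.scopes[scope][key] = value

    def get(self, key: str, default=None):
        # Most specific scope wins: turn, then session, then profile.
        for scope in ("turn", "session", "profile"):
            if key in self.scopes[scope]:
                return self.scopes[scope][key]
        return default

    def end_turn(self) -> None:
        self.scopes["turn"].clear()  # turn-level data never leaks into the next turn
```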


For efficiency, the chatbot compresses dialogue history with context windows or vector embeddings, feeding only the most relevant snippets back into large language models to reduce token load.

Conversational summaries are generated with abstractive models that distill lengthy threads into concise bullet points, improving performance while preserving essential meaning.

A context expiration policy sets time-to-live parameters, automatically clearing or archiving stale sessions after periods of inactivity to optimize storage.

Explicit handoff markers let developers reset context when switching topics, preventing residual intents from influencing new tasks and creating unexpected outputs.

User consent mechanisms prompt customers before storing sensitive data such as addresses or payment preferences, aligning memory practices with privacy regulations.

Encryption at rest and role-based access controls secure memory stores, restricting retrieval of personally identifiable information to authorized services only.

When the chatbot integrates with external systems, context transformers map internal variables to API payloads and back again, ensuring consistency between the conversational layer and backend workflows.


Embeddings-based retrieval allows the bot to match new queries against a vector database of past interactions, surfacing relevant memories without linear scans of raw text.

A forgetting strategy periodically prunes low-value or outdated entries, guided by heuristics such as usage frequency, recency, and relevance scores.

Data governance frameworks define audit trails for memory operations, logging read and write events so compliance teams can trace how user information was propagated.

Context conflict resolution algorithms prioritize newer or higher-confidence variables when multiple sources provide overlapping information, maintaining conversational integrity.


During live chats, thread-level locks prevent race conditions where simultaneous user messages could corrupt shared session states in multi-shard deployments.

Performance dashboards expose metrics like context hit rate, average memory retrieval latency, and contextual error count, driving continuous optimization of memory pipelines.

Effective context and memory management enables chatbots to link past, present, and future interactions, delivering experiences that feel natural, personalized, and secure.


8. Response Generation Techniques: Rule-Based, Retrieval, Generative

AI chatbots rely on three primary response generation techniques—rule-based, retrieval-based, and generative—each offering distinct strengths and trade-offs.

A rule-based system produces replies through manually crafted patterns or decision trees that map specific inputs to predefined outputs;

its deterministic nature guarantees consistent phrasing and predictable behavior, making it ideal for narrow domains with strict compliance requirements;

however, rule sets become brittle at scale because every new variation demands additional rules, causing maintenance overhead and coverage gaps.


A retrieval-based approach selects the best reply from a curated repository of candidate utterances by computing semantic similarity between the user input and stored responses;

vector search techniques such as approximate nearest-neighbor (ANN) over sentence embeddings accelerate matching even across millions of candidates;

ranking models incorporate dialogue context, intent confidence, and freshness scores to surface the most relevant answer while filtering outdated content.

Retrieval ensures high factual accuracy when the repository is frequently updated, yet it struggles with queries outside the knowledge base or requiring novel phrasing.
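
Stripped of the ANN index, retrieval reduces to a similarity search over embeddings; embed() below is a hypothetical stand-in for a sentence-embedding model, and a production system would replace the brute-force scan with an ANN index.

```python
# Brute-force retrieval sketch; embed() is a hypothetical sentence-embedding
# function, and real systems replace the scan with an ANN index.
import numpy as np

def retrieve_best(query: str, candidates: list[str], embed) -> str:
    q = embed(query)                              # query embedding, shape (d,)
    C = np.stack([embed(c) for c in candidates])  # candidate matrix, shape (n, d)
    sims = C @ q / (np.linalg.norm(C, axis=1) * np.linalg.norm(q))
    return candidates[int(np.argmax(sims))]       # highest cosine similarity wins
```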

A generative model—typically a transformer—constructs replies token by token, conditioned on the conversation history and optional system prompts;

large language models fine-tuned on domain data excel at paraphrasing, synthesizing information, and handling unforeseen queries without handcrafted rules;

sampling strategies like top-k, nucleus sampling, or beam search balance creativity against coherence, while temperature controls response diversity.


Generative systems risk hallucination and require post-generation guardrails such as toxicity filters, content validation pipelines, or retrieval-augmented generation (RAG) that inject authoritative facts.

Hybrid architectures combine techniques in a cascading policy, first attempting rule-based or retrieval responses for high-confidence cases, then falling back to generative output when coverage gaps appear;

this layered design maximizes precision, reduces hallucinations, and maintains stylistic consistency across diverse conversation topics.
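
The cascade itself is compact; match_rule(), retrieve(), and generate() below are hypothetical strategy functions, and the 0.85 retrieval threshold is illustrative.

```python
# Cascading-policy sketch; the three strategy functions are hypothetical
# stand-ins, and the 0.85 threshold is illustrative.
def respond(user_input: str, context, match_rule, retrieve, generate) -> str:
    rule_reply = match_rule(user_input)       # deterministic patterns first
    if rule_reply is not None:
        return rule_reply

    candidate, score = retrieve(user_input)   # curated repository next
    if score >= 0.85:
        return candidate

    return generate(user_input, context)      # generative fallback last
```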


Engineering teams monitor metrics such as exact match rate, factuality score, average response latency, and user satisfaction surveys to tune the blend and sequencing of generation strategies;

continuous online evaluation with A/B tests and human feedback loops drives iterative refinement of both retrieval repositories and generative checkpoints;

strategic selection and orchestration of these response techniques empower chatbots to handle routine tasks reliably while remaining flexible enough to tackle novel or complex requests.


9. Multimodal Inputs and Outputs

A multimodal chatbot can interpret and generate multiple forms of communication—text, speech, images, video, and structured data—allowing users to interact through the most convenient channel at any moment.

The input pipeline often begins with automatic speech recognition (ASR) that converts spoken language into text while preserving punctuation, speaker labels, and time codes for downstream processing.

When images are supplied, a computer vision encoder such as a convolutional neural network or a vision transformer extracts semantic features—objects, text regions, facial expressions—that the language model can reference during response generation.


Video inputs are handled by sampling key frames for image analysis and transcribing embedded audio, then aligning both streams in a temporal context graph so the chatbot understands events over time.

Structured files—PDFs, spreadsheets, CAD drawings—are parsed by specialized extractors that transform tables, diagrams, and metadata into machine-readable JSON objects attached to the dialogue context.
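
The front of such a pipeline is essentially a dispatch on content type; transcribe(), encode_image(), and parse_pdf() below are hypothetical stand-ins for real ASR, vision, and document-parsing services.

```python
# Modality-dispatch sketch; the three handler functions are hypothetical
# stand-ins for real ASR, vision, and document-parsing services.
def normalize_input(mime_type: str, payload: bytes,
                    transcribe, encode_image, parse_pdf) -> dict:
    if mime_type.startswith("audio/"):
        return {"modality": "speech", "text": transcribe(payload)}
    if mime_type.startswith("image/"):
        return {"modality": "image", "features": encode_image(payload)}
    if mime_type == "application/pdf":
        return {"modality": "document", "content": parse_pdf(payload)}
    return {"modality": "text", "text": payload.decode("utf-8")}
```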

On the output side, the chatbot uses text-to-speech (TTS) engines with neural vocoders that dynamically adjust pitch, pace, and emotion, producing speech that matches content and brand voice guidelines.

For visual responses, the system can generate annotated images, charts, or infographics, overlaying highlights and callouts to direct user attention to essential details.

In augmented reality scenarios, the chatbot renders context-aware overlays—tooltips, navigation arrows, or safety warnings—onto live camera feeds, leveraging device sensors for spatial alignment.


A multimodal fusion module aligns modalities through shared embeddings, attention mechanisms, or cross-modal transformers, allowing the language model to reference visual or auditory cues while crafting contextually rich replies.

Latency budgets are carefully managed; computationally intensive vision tasks run on edge GPUs or batched cloud inference, while lighter NLP operations execute closer to the user to maintain conversational pace.

Fallback strategies offer alternative modalities when one fails or is unavailable—text captions accompany audio responses for accessibility, and voice narrations describe images for visually impaired users.

Security controls sanitize multimodal content, screening images for objectionable material and scanning audio streams for sensitive data before storage or further processing.

Analytics dashboards track modality usage patterns, revealing which combinations drive higher task completion, lower abandonment, and greater user satisfaction.


Compliance teams validate that multimodal logs respect privacy laws by masking biometric identifiers and encrypting media artifacts at rest and in transit.

Designers craft multimodal dialogue guidelines that specify when to switch or blend modalities, preventing cognitive overload and ensuring each mode complements rather than duplicates information.

Successful multimodal integration delivers chatbots that feel intuitive and accessible, meeting users where they are and adapting fluidly to the context of each interaction.



10. Integration with External APIs and Databases

A chatbot gains real utility when it integrates with external APIs and databases, transforming static dialogue into actionable workflows.

Developers establish REST or GraphQL endpoints that expose business functions such as order creation, account lookup, or ticket updates.

The middleware layer houses an API client library that manages authentication, retries, rate-limit handling, and error normalization.


OAuth 2.0, API keys, or mutual TLS provide secure authentication, and tokens are refreshed automatically to avoid conversation interruptions.
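
A resilient client along these lines can be sketched with the requests library and urllib3's retry support; the endpoint URL and the backoff settings below are illustrative.

```python
# Resilient API-client sketch with requests + urllib3 retries; the endpoint
# URL and the retry/backoff settings are illustrative.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[429, 500, 502, 503])
session.mount("https://", HTTPAdapter(max_retries=retry))

def lookup_order(order_id: str, token: str) -> dict:
    resp = session.get(
        f"https://api.example.com/orders/{order_id}",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    resp.raise_for_status()  # surface HTTP errors for the middleware to normalize
    return resp.json()
```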

A parameter mapping engine converts extracted entities—dates, IDs, monetary values—into payload fields that the target service expects.

For complex requests, a data transformation step reshapes nested JSON or XML structures, flattening or nesting attributes as required by downstream systems.

Chatbots often call multiple services in sequence, orchestrated by a lightweight workflow engine that supports conditional branches, loops, and parallel calls.


Responses return to the chatbot as structured data; a templating layer formats this data into human-readable text, rich cards, or interactive UI elements.

Database integration begins with a data access layer (DAL) that abstracts SQL or NoSQL operations, shielding dialogue logic from vendor-specific queries.

Connection pools and read-write splitting improve performance, sending heavy analytics queries to replicas while writes hit the primary node.

Parameterized queries and ORM-level sanitization mitigate injection risks, ensuring user inputs cannot compromise the data store.
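
The pattern is the same in any driver; here is a minimal example with Python's built-in sqlite3 module, where the placeholder keeps user input out of the SQL string.

```python
# Parameterized-query sketch with sqlite3; table and column names are illustrative.
import sqlite3

def find_orders(conn: sqlite3.Connection, user_id: str):
    cur = conn.execute(
        "SELECT id, status FROM orders WHERE user_id = ?",  # '?' placeholder
        (user_id,),                                         # input bound safely
    )
    return cur.fetchall()
```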

When conversations require long-running transactions—such as multi-step sign-ups—two-phase commits or saga patterns maintain consistency across services.

Caching layers like Redis or Memcached store frequent lookups—product catalogs, exchange rates—reducing latency and external API costs.

A webhook subsystem listens for asynchronous events—shipment updates, payment confirmations—and injects proactive messages back into the user thread.

Circuit breakers and bulkhead patterns isolate failing services, returning graceful degradation messages rather than exposing raw errors.

Real-time monitoring captures metrics such as API latency, error rate, throughput, and timeout frequency, feeding alerts into observability dashboards.

Audit logs record every external call with masked parameters and timestamps, supporting compliance requirements and forensic investigations.

Rate-limit strategies employ token buckets or leaky buckets, queuing excess requests and informing users of expected wait times when quotas are exhausted.


Versioning policies pin endpoints to version-specific paths or headers, preventing breaking changes in partner APIs from cascading into chatbot failures.

Schema evolution is managed with contract tests that validate payloads against OpenAPI or JSON-Schema definitions during continuous integration pipelines.

Integration test suites spin up mock servers and stub databases to simulate external dependencies, enabling deterministic testing without live credentials.

Encryption of data in transit uses TLS 1.3, while encryption at rest follows enterprise standards such as AES-256 with customer-managed keys.


Secrets management tools—HashiCorp Vault, AWS Secrets Manager, Azure Key Vault—store API keys and database passwords, injecting them into runtime containers on demand.

Compliance checkpoints ensure integrations respect GDPR, HIPAA, or PCI-DSS, masking or tokenizing sensitive fields before persistence or outbound transmission.

Scalable integration architecture empowers chatbots to execute transactions, retrieve real-time data, and trigger automated processes, elevating them from conversational agents to fully functional digital assistants.


11. Deployment Channels: Web, Mobile, Voice Assistants

A chatbot must be deployable across multiple channels to meet users on their preferred platforms.

The web channel typically embeds the bot in a floating widget, an inline iframe, or a full-page interface within existing sites, leveraging HTML, CSS, and JavaScript for rapid iteration.

Developers implement WebSockets or Server-Sent Events to maintain low-latency, real-time streaming of messages, while fallbacks to long-polling guarantee compatibility with restrictive corporate firewalls.
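
A minimal WebSocket endpoint of this kind can be sketched with FastAPI; generate_reply() is a hypothetical stand-in for the chatbot backend, and disconnect handling is omitted for brevity.

```python
# Minimal WebSocket chat endpoint with FastAPI; generate_reply() is a
# hypothetical stand-in for the bot backend, and disconnects are not handled.
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/chat")
async def chat(ws: WebSocket):
    await ws.accept()
    while True:
        user_msg = await ws.receive_text()      # message from the web widget
        reply = await generate_reply(user_msg)  # hypothetical async bot backend
        await ws.send_text(reply)               # stream the answer back
```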


Web deployments integrate cookie consent banners, Content Security Policy headers, and CAPTCHA challenges to satisfy privacy regulations and defend against automated abuse.

Responsive design frameworks such as Tailwind or Bootstrap ensure that the web client gracefully adapts to desktops, tablets, and small screens without fragmented codebases.

The mobile channel packages the chatbot into native iOS and Android apps or injects it into existing products through SDKs, providing access to device sensors, camera feeds, push notifications, and offline storage.


On-device caching layers store recent dialogue context and media assets, enabling intermittent connectivity support for users with unreliable networks.

Developers employ APNs and FCM for push notifications that re-engage customers, embedding contextual deep links that reopen the precise conversation thread on tap.

Biometric authentication gates sensitive actions—payments, personal data updates—by invoking Face ID, Touch ID, or Android BiometricPrompt, strengthening security without extra passwords.


Mobile telemetry captures screen flow, tap positions, and crash traces, sending anonymized analytics to platforms like Firebase or App Center for performance tuning.

The voice-assistant channel integrates with ecosystems such as Amazon Alexa, Google Assistant, or Apple Siri, exposing the chatbot as a custom skill or action invoked by wake words.

Developers define interaction models that map spoken intents to backend APIs, and supply sample utterances to train NLU engines hosted by the platform provider.

Voice designers craft concise prompts, progressive responses, and reprompts that respect speech pacing guidelines and minimize cognitive load during auditory exchanges.

Audio rendering relies on neural TTS voices selected for brand tone, with Speech Synthesis Markup Language tags adjusting pronunciation, emphasis, and pauses for clarity.

Latency remains critical in voice; edge caching of small responses and streaming partial synthesis reduce dead air and improve perceived responsiveness.

Privacy requirements demand opt-in voice data retention policies and on-device wake-word detection so that passive listening is limited to the smallest feasible footprint.

For channel parity, a consolidated conversation API abstracts transport details, letting developers update business logic once while exposing it uniformly across web, mobile, and voice.


Feature flags and channel capability negotiation dynamically enable or disable rich elements—buttons, carousels, file uploads—based on what each client supports.

Unified testing pipelines run end-to-end scripts through headless browsers, mobile device farms, and voice simulators, detecting regressions before rolling out to production.

Continuous deployment harnesses CI/CD workflows that bundle static assets, sign mobile binaries, and submit voice skill updates, automating releases while preserving rollback options.


Metrics dashboards aggregate channel-specific KPIs—widget open rate, app session length, voice skill invocation frequency—so product teams can prioritize enhancements where they matter most.

A well-orchestrated multi-channel strategy ensures the chatbot delivers consistent, high-quality experiences whether users type, tap, or talk.
