Developer Documentation

Integrate TalkAlly as a voice/text frontend for your backend system in under an hour

Overview

TalkAlly is a universal voice and text frontend that connects to any backend system via a simple JSON-based API. Your backend provides configuration and function endpoints — TalkAlly handles voice recognition, text-to-speech, and the mobile interface.

Key Benefit: Add a voice interface to your existing system without building a mobile app. TalkAlly users simply create a connection profile pointing to your backend URL.

What TalkAlly Provides

| Feature | Description |
| --- | --- |
| Voice Interface | Real-time speech-to-text and text-to-speech via the OpenAI Realtime API |
| Text Interface | Chat-style text input with file attachments |
| Function Calling | AI-driven execution of your backend functions via webhooks |
| Multi-Profile | Users can connect to multiple backends (yours + others) |
| Local Functions | Optional app-based functions (notes, contacts, navigation) |

What You Provide

| Component | Description |
| --- | --- |
| Config Endpoint | HTTP GET returning JSON with system prompt and function definitions |
| Function Endpoints | HTTP POST webhooks that execute actions and return results |
| OpenAI API Key | Your key (billed to you), or let users provide their own |

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                          TalkAlly App                           │
│                     (Voice + Text Frontend)                     │
└─────────────────────────────┬───────────────────────────────────┘
                              │ 1. GET /config
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                          YOUR BACKEND                           │
│              (n8n, LangFlow, Flask, Lambda, etc.)               │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   Config Endpoint                                               │
│   GET /webhook/config                                           │
│                                                                 │
│   Returns: {                                                    │
│     "system_prompt": "You are...",                              │
│     "functions": [...],                                         │
│     "openai_api_key": "sk-..."                                  │
│   }                                                             │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   Function Endpoints (called when the AI decides)               │
│   POST /webhook/weather                                         │
│   POST /webhook/send_email                                      │
│   POST /webhook/search_database                                 │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Request Flow

  1. Connect: User creates a connection profile in TalkAlly with your config URL
  2. Load Config: TalkAlly fetches your config endpoint (GET request)
  3. Start Session: TalkAlly connects to OpenAI with your system prompt and function definitions
  4. User Speaks/Types: Input is processed by OpenAI's AI model
  5. Function Call: When AI decides to call a function, TalkAlly POSTs to your webhook
  6. Response: Your webhook returns data, AI formulates response, TalkAlly speaks/displays it
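Steps 2 and 5 of this flow can be sketched as plain functions, with the HTTP transport injected so the control flow is easy to follow. This is an illustration only (the URLs and function names here are placeholders, and the stub lambdas stand in for real HTTP calls):

```python
import json

def load_config(http_get, config_url):
    """Step 2: TalkAlly fetches your config endpoint (GET)."""
    return json.loads(http_get(config_url))

def dispatch_function_call(http_post, config, name, arguments):
    """Step 5: when the AI requests a function, TalkAlly looks it up
    by name in the config and POSTs the arguments to its webhook_url."""
    fn = next(f for f in config["functions"] if f["name"] == name)
    return json.loads(http_post(fn["webhook_url"], json.dumps(arguments)))

# Stub transport standing in for real HTTP:
config = load_config(
    lambda url: json.dumps({
        "system_prompt": "You are...",
        "functions": [{"name": "get_weather",
                       "webhook_url": "https://example.com/webhook/weather"}],
    }),
    "https://example.com/webhook/config",
)
result = dispatch_function_call(
    lambda url, body: json.dumps({"temperature": 25}),
    config, "get_weather", {"location": "Sydney"},
)
print(result)  # {'temperature': 25}
```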

API Contract

TalkAlly expects your backend to implement a simple HTTP-based protocol:

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /config | GET | Returns JSON configuration (system prompt, functions, settings) |
| /function-name | POST | Executes a function and returns its result |

Protocol: Standard HTTP/HTTPS with JSON request/response bodies. Any platform that can serve HTTP endpoints can integrate with TalkAlly.

Config Endpoint

The config endpoint is the core of TalkAlly integration. It tells the app how to behave and what functions are available.

Request

GET https://your-backend.com/webhook/config?tenant=sales&passkey=secret123

Query Parameters (Optional)

| Parameter | Description |
| --- | --- |
| tenant | Tenant/profile identifier for multi-tenant setups |
| passkey | Optional authentication token |
| custom_greeting | Override the default greeting message |
| display_name | User's display name for personalization |
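Since these are ordinary query parameters, clients can build the config URL with standard URL encoding. A quick sketch (the base URL and parameter values are placeholders):

```python
from urllib.parse import urlencode

base = "https://your-backend.com/webhook/config"
params = {"tenant": "sales", "passkey": "secret123", "display_name": "Ada"}
config_url = f"{base}?{urlencode(params)}"
print(config_url)
# https://your-backend.com/webhook/config?tenant=sales&passkey=secret123&display_name=Ada
```

`urlencode` also takes care of escaping, which matters for values like display names containing spaces.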

Response Schema

{
  "system_prompt": "You are a helpful assistant for Acme Corp...",
  "greeting": "Hello! How can I help you today?",
  "functions": [
    {
      "name": "function_name",
      "description": "What this function does",
      "webhook_url": "https://your-backend.com/webhook/function_name",
      "http_method": "POST",
      "acknowledgment": "Processing your request...",
      "parameters": {
        "type": "object",
        "properties": {
          "param1": {
            "type": "string",
            "description": "Description of param1"
          }
        },
        "required": ["param1"]
      }
    }
  ],
  "openai_api_key": "sk-...",
  "voice": "alloy",
  "voice_speed": 1.0,
  "enable_local_functions": true
}

Field Reference

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| system_prompt | string | Required | Instructions for the AI assistant's behavior |
| openai_api_key | string | Required | OpenAI API key for voice/text processing |
| greeting | string | Optional | Initial greeting spoken when the session starts |
| functions | array | Optional | List of callable functions |
| voice | string | Optional | OpenAI voice: alloy, ash, ballad, coral, echo, sage, shimmer, verse |
| voice_speed | number | Optional | Playback speed (0.5 to 2.0; default 1.0) |
| enable_local_functions | boolean | Optional | Enable app-based functions (notes, contacts, navigation) |
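Your backend can sanity-check its own config against this field reference before going live. A minimal validator sketch (not part of TalkAlly itself; the checks simply mirror the table above):

```python
def validate_config(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks valid."""
    problems = []
    # system_prompt and openai_api_key are the only required fields.
    for field in ("system_prompt", "openai_api_key"):
        if not isinstance(config.get(field), str) or not config[field]:
            problems.append(f"missing required string field: {field}")
    voices = {"alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"}
    if "voice" in config and config["voice"] not in voices:
        problems.append(f"unknown voice: {config['voice']}")
    speed = config.get("voice_speed", 1.0)
    if not (0.5 <= speed <= 2.0):
        problems.append(f"voice_speed out of range (0.5 to 2.0): {speed}")
    return problems

print(validate_config({"system_prompt": "You are...", "openai_api_key": "sk-x"}))  # []
print(validate_config({"voice": "nova"}))  # reports three problems
```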

Function Schema

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Required | Unique function identifier (snake_case) |
| description | string | Required | What the function does (the AI uses this to decide when to call it) |
| webhook_url | string | Required | Full URL to POST to when the function is called |
| acknowledgment | string | Optional | Message spoken while the function executes |
| parameters | object | Optional | JSON Schema defining the function's parameters |
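Because function definitions all follow this shape, a small helper keeps them consistent; the `parameters` object is standard JSON Schema. A sketch (the helper, function name, and URL are illustrative, not part of the TalkAlly API):

```python
def make_function(name, description, webhook_url,
                  params=None, required=None, acknowledgment=None):
    """Build a function definition matching the schema above."""
    fn = {"name": name, "description": description, "webhook_url": webhook_url}
    if acknowledgment:
        fn["acknowledgment"] = acknowledgment
    if params:
        # parameters is a JSON Schema object describing the arguments.
        fn["parameters"] = {"type": "object", "properties": params,
                            "required": required or []}
    return fn

weather = make_function(
    "get_weather", "Get current weather for a city",
    "https://your-backend.com/webhook/get_weather",
    params={"location": {"type": "string", "description": "City name"}},
    required=["location"],
    acknowledgment="Checking the weather...",
)
```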

Function Endpoints

When the AI decides to call a function, TalkAlly sends a POST request to the function's webhook_url.

Request

POST https://your-backend.com/webhook/get_weather
Content-Type: application/json

{
  "location": "Sydney"
}

Response

Return JSON with the function result. The AI will use this to formulate a response to the user.

{
  "temperature": 25,
  "conditions": "Sunny",
  "humidity": 65,
  "wind": "15 km/h NE"
}

Tip: Keep responses concise. The AI will speak the result, so avoid returning large data structures. Return human-readable summaries when possible.

Error Handling

If a function fails, return an error object; the AI will use it to explain the problem to the user:

{
  "error": "Location not found",
  "message": "Could not find weather data for the specified location."
}
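One pattern is to wrap each function handler so any failure is converted into this error shape, keeping the response speakable. A sketch under that assumption (the decorator and handler names are illustrative):

```python
def safe_handler(fn):
    """Wrap a function handler so failures come back as speakable JSON."""
    def wrapper(payload: dict) -> dict:
        try:
            return fn(payload)
        except KeyError as e:
            return {"error": "missing parameter",
                    "message": f"Required parameter {e} was not provided."}
        except Exception as e:
            return {"error": type(e).__name__, "message": str(e)}
    return wrapper

@safe_handler
def get_weather(payload):
    location = payload["location"]          # raises KeyError if absent
    if location != "Sydney":
        raise LookupError(f"no weather data for {location}")
    return {"temperature": 25, "conditions": "Sunny"}

print(get_weather({"location": "Sydney"}))  # {'temperature': 25, 'conditions': 'Sunny'}
print(get_weather({}))                      # reports the missing parameter
```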

Integration Examples

Minimal Config (Voice Only)

The simplest possible integration — just voice chat with no functions:

{
  "system_prompt": "You are a helpful assistant for Acme Corporation.",
  "greeting": "Hello! Welcome to Acme support.",
  "openai_api_key": "sk-your-api-key-here"
}

With Custom Functions

{
  "system_prompt": "You are a sales assistant. Help customers find products.",
  "greeting": "Hi! I can help you find products.",
  "openai_api_key": "sk-your-api-key-here",
  "voice": "sage",
  "voice_speed": 1.1,
  "functions": [
    {
      "name": "search_products",
      "description": "Search for products by name or category",
      "webhook_url": "https://api.example.com/search",
      "acknowledgment": "Searching our catalog...",
      "parameters": {
        "type": "object",
        "properties": {
          "query": {
            "type": "string",
            "description": "Product name or category"
          }
        },
        "required": ["query"]
      }
    }
  ]
}

Multi-Tenant Example

// Your backend pseudo-code
GET /webhook/config?tenant=XXX

if (tenant === "sales") {
  return {
    system_prompt: "You are a sales assistant...",
    functions: [/* sales functions */],
    openai_api_key: "sk-sales-key"
  }
}

if (tenant === "support") {
  return {
    system_prompt: "You are a support agent...",
    functions: [/* support functions */],
    openai_api_key: "sk-support-key"
  }
}
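In Python, the same branching collapses into a dictionary lookup keyed by tenant. A sketch (tenant names, prompts, and keys are placeholders, as in the pseudo-code above):

```python
TENANT_CONFIGS = {
    "sales": {
        "system_prompt": "You are a sales assistant...",
        "openai_api_key": "sk-sales-key",
        "functions": [],  # sales functions
    },
    "support": {
        "system_prompt": "You are a support agent...",
        "openai_api_key": "sk-support-key",
        "functions": [],  # support functions
    },
}

def config_for(tenant: str) -> dict:
    # Fall back to the sales profile for unknown tenants; adjust to taste.
    return TENANT_CONFIGS.get(tenant, TENANT_CONFIGS["sales"])

print(config_for("support")["system_prompt"])  # You are a support agent...
```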

n8n Code Node

// n8n Code node ("Run Once for All Items" mode, downstream of a Webhook node)
// Webhook query parameters arrive on the incoming item's json.query object.
const query = $input.first().json.query || {};
const tenant = query.tenant || 'default';
const displayName = query.display_name || 'User';

// A Code node must return an array of items.
return [{
  json: {
    system_prompt: `You are a helpful assistant for ${displayName}.`,
    greeting: `Hello ${displayName}! How can I help?`,
    openai_api_key: "sk-your-key",
    functions: [
      {
        name: "get_weather",
        description: "Get weather for a location",
        webhook_url: "https://your-n8n.com/webhook/weather",
        parameters: {
          type: "object",
          properties: {
            location: { type: "string", description: "City name" }
          },
          required: ["location"]
        }
      }
    ]
  }
}];

Python Flask

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/config', methods=['GET'])
def get_config():
    tenant = request.args.get('tenant', 'default')
    
    return jsonify({
        "system_prompt": f"You are an assistant for the {tenant} team.",
        "greeting": "Hello! How can I help?",
        "openai_api_key": "sk-your-key",
        "functions": [
            {
                "name": "get_data",
                "description": "Retrieve data from database",
                "webhook_url": "https://your-server.com/api/data",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string"}
                    }
                }
            }
        ]
    })

@app.route('/api/data', methods=['POST'])
def get_data():
    query = request.json.get('query', '')
    # Your logic here
    return jsonify({"result": f"Data for: {query}"})

if __name__ == '__main__':
    app.run(port=5000)

Compatible Platforms

TalkAlly works with any platform that can serve HTTP endpoints returning JSON:

Workflow Automation

  - n8n (open source workflow automation)
  - LangFlow (visual LLM workflow builder)
  - Flowise (LangChain visual builder)
  - Node-RED (flow-based programming)
  - Activepieces (open source automation)
  - Dify (AI application platform)

Low-Code Platforms

  - Make (visual automation platform)
  - Zapier (Webhooks by Zapier)
  - Power Automate (Microsoft HTTP triggers)
  - Pipedream (developer workflows)

Serverless & Custom

  - AWS Lambda (+ API Gateway)
  - Azure Functions (Microsoft serverless)
  - Google Cloud Functions (GCP serverless)
  - Flask / FastAPI (Python frameworks)
  - Express / Node.js (JavaScript backend)
  - Cloudflare Workers (edge computing)

Integration Time: Most platforms require only 30-60 minutes to set up a working TalkAlly integration.

Service Integrations

TalkAlly connects to popular services, giving you hands-free access to email, calendar, contacts, and files. Each integration has its own setup guide.

Google Workspace

Gmail, Calendar, Contacts, and Drive. Multi-account support with automatic token refresh.

Microsoft 365 (Coming Soon)

Outlook email, Calendar, and OneDrive integration for personal and work accounts.

Getting Started

  1. Create a config endpoint on your platform that returns the JSON schema above
  2. Test the endpoint by visiting it in a browser — you should see JSON
  3. Install TalkAlly from Google Play Store
  4. Add a Connection Profile → Choose "Business" mode
  5. Enter your config URL as the "Orchestrator URL"
  6. Test the connection — TalkAlly will fetch your config
  7. Connect and talk!

Need Help? Contact us at [email protected] for integration support.