Arch Howto
Arch is an intelligent Layer 7 gateway built on Envoy Proxy that provides secure handling, robust observability, and seamless integration of prompts with APIs for building fast and personalized AI agents.
How to Use Arch
Install Prerequisites: Ensure you have Docker (v24), Docker Compose (v2.29), Python (v3.10), and Poetry (v1.8.3) installed on your system. Poetry is needed only for local development.
Create Python Virtual Environment: Create and activate a new Python virtual environment using: python -m venv venv && source venv/bin/activate (or venv\Scripts\activate on Windows)
Install Arch CLI: Install the Arch gateway CLI tool using pip: pip install archgw
Create Configuration File: Create a configuration file (e.g., arch_config.yaml) defining your LLM providers, prompt targets, endpoints, and other settings like system prompts and parameters
Configure LLM Providers: In the config file, set up your LLM providers (e.g., OpenAI) with appropriate access keys and model settings
Define Prompt Targets: Configure prompt targets in the config file, specifying endpoints, parameters, and descriptions for each target function
Set Up Endpoints: Define your application endpoints in the config file, including connection settings and timeouts
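The three configuration steps above can be sketched as a single arch_config.yaml. The field names and values below are illustrative only and may not match the current archgw schema exactly; the access key, model, target, and endpoint names are placeholders you would replace with your own, so consult the Arch documentation for the authoritative format.

```yaml
version: v0.1

# LLM providers: which upstream models Arch can route prompts to.
llm_providers:
  - name: OpenAI
    provider: openai
    access_key: $OPENAI_API_KEY   # read from an environment variable
    model: gpt-4o-mini            # placeholder model name
    default: true

# Prompt targets: functions Arch can invoke, with their parameters.
prompt_targets:
  - name: get_weather             # hypothetical example target
    description: Get the current weather for a location
    parameters:
      - name: location
        description: City and state, e.g. Seattle, WA
        required: true
    endpoint:
      name: api_server
      path: /weather

# Application endpoints: where prompt targets are served, with timeouts.
endpoints:
  api_server:
    endpoint: 127.0.0.1:18083     # placeholder host and port
    connect_timeout: 0.005s
```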
Initialize Client: Create an OpenAI client instance in your application code that points to the Arch gateway (e.g., base_url='http://127.0.0.1:12000/v1')
Make API Calls: Use the configured client to make API calls through Arch, which will handle routing, security, and observability
Monitor Performance: Use Arch's built-in observability features to monitor metrics, traces, and logs for your LLM interactions
Arch FAQs
Arch is an intelligent Layer 7 gateway designed to protect, observe, and personalize LLM applications with APIs. It's built on Envoy Proxy and engineered with purpose-built LLMs for secure handling, robust observability, and seamless integration of prompts with APIs.