The verifiable LLM
gateway for AI

Relay requests to verified LLM model endpoints with built-in privacy protections

All leading models in one LLM API

No more model switching: get full access to models from OpenAI, DeepSeek, Anthropic, Google, Meta, and more.

GPT-5.2

GPT-5.2 is OpenAI's capable and refined model, built to excel in professional knowledge work, complex reasoning, multimodal understanding, and agentic workflows. It sets new state-of-the-art scores across many frontier benchmarks, including GDPval (74.1%), SWE-Bench Verified (80.0%), GPQA Diamond (92.4%), and ARC-AGI-2 (52.9%), often outperforming industry experts at real-world tasks.

GPT-5.1

GPT-5.1 is an upgrade to the GPT-5 family, designed to be more intelligent, more conversational, and easier to personalize. It improves on GPT-5 in both capability and communication style. GPT-5.1 adapts its reasoning effort dynamically, responding quickly to simple requests while thinking more deeply on complex ones. It also integrates refined steering controls so users can customize ChatGPT's tone and personality with ease.

GPT-5

GPT-5 is OpenAI's intelligent and capable model, delivering frontier performance in coding, math, writing, health, visual understanding, and real-world reasoning. Designed as a unified system, GPT-5 adapts its depth of reasoning to each task, providing fast, lightweight responses when appropriate and invoking extended "thinking" for complex, high-stakes problems.

GPT-5 Mini

GPT-5 Mini delivers a strong balance between cost, speed, and capability. Optimized for low-latency interactions and mid-range reasoning, it's well suited for chatbots, assistants, or applications that demand smart responses without the compute overhead of GPT-5. While it trades off some depth and general knowledge, it retains core reasoning skills and instruction-following, making it a cost-effective option for many production deployments.

GPT-5 Nano

GPT-5 Nano is the lightest and fastest model in the GPT-5 family, built for high-throughput use cases like classification, simple instruction-following, and structured outputs. It prioritizes speed and low cost, making it ideal for routing, pre-processing, or embedded AI where latency and efficiency matter more than nuanced reasoning.

o4-mini

o4-mini is a compact, high-efficiency language model optimized for fast, low-cost performance across reasoning, math, coding, and visual understanding tasks. Despite its smaller size, it delivers strong results, making it well suited for lightweight deployments that demand intelligent task execution. o4-mini is ideal for developers building responsive AI systems, cost-aware applications, and tool-augmented workflows requiring reliable analytical capabilities.

GPT-4.1

GPT-4.1 is the flagship model in OpenAI's GPT series. Accessible via the developer API, it demonstrates significant improvements in coding, instruction-following, and multimodal comprehension. GPT-4.1 is suitable for advanced problem-solving scenarios, software engineering tasks, extensive document analysis, and other tasks requiring extended reasoning and analytical depth.

GPT-4.1 Mini

GPT-4.1 Mini is a midsized variant of the GPT-4.1 family, providing a balanced combination of intelligence, speed, and cost efficiency. It demonstrates strong performance across multiple benchmarks, particularly in coding accuracy, instruction-following, and moderate-to-complex conversational tasks. GPT-4.1 Mini is well-suited for scalable deployments that prioritize high-quality outputs with moderate latency and controlled costs.

GPT-4.1 Nano

GPT-4.1 Nano is the smallest model in the GPT-4.1 series, optimized specifically for rapid and efficient task execution. It excels in quick classification tasks, short-form text completions, and streamlined coding scenarios, maintaining robust performance despite its compact design. GPT-4.1 Nano efficiently handles tasks requiring low latency, affordability, and efficient multimodal comprehension within a large context window.

GPT-4.5 Preview

GPT-4.5 Preview is a general-purpose OpenAI model optimized for sophisticated conversational interactions and contextually rich problem-solving. The model supports applications requiring interactive dialogues, detailed analytical reasoning, in-depth question answering, and context-sensitive content creation.

GPT-4o

GPT-4o is OpenAI's versatile multimodal model, offering comprehensive natural language processing with advanced reasoning and general-purpose capabilities. GPT-4o is suitable for a wide range of applications including detailed content generation, extensive linguistic tasks, complex analytics, and general automation of language-driven workflows.

GPT-4o Mini

GPT-4o Mini is a streamlined variant of GPT-4o optimized for efficiency, reduced latency, and cost-sensitive conversational tasks. It supports short-form dialogue interactions, concise information summarization, brief conversational exchanges, and tasks prioritizing speed and contextual accuracy.

o3-mini

o3-mini is OpenAI's compact AI model designed for structured logical reasoning and analytical tasks. It effectively handles structured inference, decision-making scenarios, logic-driven analytics, and data processing that demands precise logical clarity.

o1

o1 is an AI model series specifically trained for complex reasoning scenarios involving extended internal chains of thought. It supports detailed multi-step reasoning workflows, comprehensive logical inference, extended problem decomposition, rigorous analytical exploration, and long-context reasoning tasks.

o1-mini

o1-mini is a streamlined, efficient variant of the o1 model, optimized for quicker logical reasoning tasks at reduced cost and latency. The model is ideal for rapid logical inference, concise analytical problem-solving, efficient reasoning workflows, and structured reasoning tasks requiring fast but precise analytical performance.

Get more done with 1RPC.ai

All your favourite LLMs, all at once.

A multi-model chatbox with verified LLMs

Verifiable execution with secure enclaves

Verifiable responses

Chat with trusted AI models through their official endpoints to ensure authenticity and accuracy.

Secure execution

Run your requests through hardware-isolated relays that prevent tampering or unauthorized access.

Privacy by design

Protect your privacy with zero-tracking infrastructure that prevents metadata leakage.

Flexible pricing

Avoid provider lock-in with flexible access and pay only per prompt, with no hidden costs.

Multi-model comparison

Compare responses from multiple models in real time to quickly identify the best fit for your needs.

Unified interface

Access multiple AI providers through a single, consistent interface without switching tools or credentials.

ChatGPT

Official model endpoint

Reliable model routing

Built for developers

Our relay ensures every request goes directly to the intended model, with no redirection or spoofing. Verifying endpoint authenticity makes it easier to reproduce results, audit behavior, and build reliably in public.

Endpoint-level clarity

Every call is tied to the exact model and version that produced it, maintaining trust across user-facing workflows.

Plug into prompt pipelines

Integrate with eval stacks and CI workflows that depend on consistent model behavior.

Zero-tracking infra

Chat privately, no setup required

Requests are relayed in isolated environments that prevent external access or interference. This ensures that sensitive metadata, like prompts, user IDs, or session context, stays private and can't be inspected, logged, or leaked at any point during execution.

Zero relay visibility

Execution runs inside hardware-backed enclaves, so the relay can't access or log your request.

Third-party safe

Integrations never leak sensitive context or allow upstream data inspection.

1RPC.ai

TEE-attested relay

DeepSeek

Official model endpoint

1RPC.ai
OpenAI
Claude
DeepSeek
Gemini
Meta
Work across models

Everything (AI) in one place

Interact with different AI models from one unified interface, which means no context switching and no extra setup. See responses from multiple providers side by side to evaluate output quality, behaviour, and suitability for your use case. Make faster, smarter decisions without the overhead.

Model-agnostic by design

Connect to multiple AI providers through a consistent interface that requires no extra setup.

Built to plug into your stack

Easily integrate with your data sources, tools, and APIs to enrich prompts and responses.

FAQs

Questions? We've got answers

Fast help for setting up AI endpoints and making requests.

Sign up for a free account at 1RPC.ai and navigate to your dashboard. You can generate an API key from the API Keys section. The key can be used immediately to make requests to any supported AI model.

Requests can be sent to the endpoint using Python, TypeScript, or Shell.
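As a minimal sketch in Python, a request can be assembled with only the standard library. The endpoint URL, payload schema, and model name below are assumptions modeled on common OpenAI-compatible gateways, not confirmed 1RPC.ai documentation; substitute the values shown in your dashboard.

```python
import json
import urllib.request

# Hypothetical values -- replace with your real API key and the endpoint
# from the 1RPC.ai dashboard. The URL and request schema are assumptions.
API_KEY = "YOUR_1RPC_API_KEY"
ENDPOINT = "https://api.1rpc.ai/v1/chat/completions"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an authenticated chat-completion request for the relay."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("gpt-5", "Summarize TEE attestation in one sentence.")
# To send the request and print the reply (requires a valid key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works from TypeScript (`fetch`) or Shell (`curl`) by reproducing the `Authorization` header and JSON body.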

1RPC.ai has built-in privacy protections: only minimal metadata is used for routing, and it is wiped after relaying. All requests are signed by a Trusted Execution Environment (TEE) relay, and the signatures can be verified on the blockchain for accountability.

All models supported by 1RPC.ai can be accessed via the multi-model chatbox. Users can select multiple LLMs from the dropdown and receive responses from each of them with a single prompt.