Quick Start Guide
Overview
RouterLink is a unified large-model access network provided by WORLD3.
With a single API key, developers can call all mainstream AI models through one unified WORLD3 endpoint, including:
OpenAI Channel (GPT series)
Anthropic Channel (Claude series)
GCP Vertex (Gemini series)
World3 Router Asia Pacific
World3 Router North America
More models will be integrated over time.
RouterLink automatically provides:
Unified interface format
Automatic routing to the target provider
Multi-model invocation and transparent billing
Token-level credits tracking
Secure API key management
On the client side, your application only needs to communicate with RouterLink, rather than managing separate integrations with multiple vendor APIs.
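On the wire, that single integration is one OpenAI-compatible HTTP call. The following is a minimal sketch: the endpoint URL and model name are placeholders, so substitute the real values shown on the model's Integration Guide page along with your generated API key.

```python
import json
import urllib.request

# Placeholders -- substitute the endpoint and model name shown on the
# RouterLink model details page, plus your generated API key.
ROUTERLINK_ENDPOINT = "https://<routerlink-endpoint>/v1/chat/completions"
API_KEY = "<ROUTER_LINK_API_KEY>"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build one OpenAI-compatible chat completion request for RouterLink."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        ROUTERLINK_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),  # POST body
        headers={
            "Authorization": f"Bearer {API_KEY}",  # one key for every model
            "Content-Type": "application/json",
        },
    )

# To actually send the request (requires a valid key and endpoint):
#   with urllib.request.urlopen(build_chat_request("gpt-5.2", "Hello")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Switching providers is then just a matter of changing the `model` string; the request shape stays the same.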
API Key Management
1. Generate API Key
Step1: Go to the New WORLD3 → RouterLink page.
Step2: Click "Generate API Key" or the "Generate" link below to generate an API Key.

Step3: After clicking, the following pop-up window will appear. The API key is displayed only once, so copy it immediately and store it securely. A lost key cannot be recovered and must be regenerated, so careful handling and management are strongly recommended.

Step4: You can view detailed API Key information in the table below.

2. Regenerate and Delete API Key
This system stores only one active API key at a time. Regenerating or deleting it permanently invalidates the previous key, which cannot be recovered. Please proceed with caution.
Step1: Click the "Regenerate API Key" button on the page to regenerate the API Key, or click the "Delete" button below to delete the API Key.

Step2: Click the "Confirm" button to complete the regenerate or delete operation. If you need a new key later, simply repeat Steps 1-3 of the Generate API Key process.


Available Models
1. Select Model
Step1: Under Available Models, select the model you need, or filter the list by searching.

Step2: After identifying the model you need, click it to open its details page, where you can see the model's introduction, API Specification, and Integration Guide. (For detailed information, please visit the model's page.)

Integrating RouterLink
This section will introduce how to integrate RouterLink into different development environments.
RouterLink provides a unified API gateway that enables developers to access and switch between multiple Large Language Models (LLMs) using a single API key. It supports both OpenAI-compatible APIs and native provider APIs (such as Anthropic Messages API), offering flexible integration options for different use cases.
The OpenAI-compatible API is recommended for editors, plugins, and chat clients.
Native provider APIs are available for advanced usage and full feature access.
The examples below demonstrate several common integration scenarios.
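As a contrast with the OpenAI-compatible style above, a native-API call follows the provider's own request shape. The sketch below builds a request in the Anthropic Messages API format; the endpoint path, version header value, and model name are assumptions here, so check the model's Integration Guide for the exact values RouterLink expects.

```python
import json
import urllib.request

# Placeholder values -- take the real endpoint from the model's Integration
# Guide page. Whether RouterLink expects the native Anthropic headers exactly
# as shown is an assumption of this sketch.
NATIVE_ENDPOINT = "https://<routerlink-endpoint>/v1/messages"
API_KEY = "<ROUTER_LINK_API_KEY>"

def build_messages_request(model: str, prompt: str, max_tokens: int = 1024):
    """Build a request in the native Anthropic Messages API format."""
    payload = {
        "model": model,
        "max_tokens": max_tokens,  # required by the Messages API
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        NATIVE_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "x-api-key": API_KEY,
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json",
        },
    )

# To send it (requires a valid key and endpoint):
#   with urllib.request.urlopen(
#           build_messages_request("claude-opus-4.5", "Hello")) as resp:
#       print(json.load(resp)["content"][0]["text"])
```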
1. Integrating RouterLink in VS Code
This section demonstrates how to integrate RouterLink into Visual Studio Code using the Continue plugin as an example.
The Continue plugin is an AI coding assistant extension that supports OpenAI-compatible APIs, allowing RouterLink models to be connected directly inside the editor for code completion, chat, and coding assistance.
Step1: Open VS Code.
Step2: Click the Extensions button in the VS Code sidebar.

Step3: Search for the Continue plugin and click the "Install" button on the right to install it.

Step4: Click the "Continue" icon on the sidebar to launch.

Step5: Click the "settings" button in the upper right corner to navigate to the Configs list.
Step6: Click the "settings" icon to the right of Local Config to open the config.json file for configuration. (If a config.yaml file opens instead, please carefully review Step 7.)

Step7: Configure the corresponding model parameters, as shown in the image below. These parameters must match exactly; otherwise, the model will not take effect. To configure multiple models, simply add them to this configuration file. In the apiKey field, paste the API key you obtained in the previous step (press Ctrl + F to find ${API_KEY}, then replace all occurrences), making sure it is enclosed in double quotation marks. (The configuration entries for all models are displayed here.)

If the configuration file is a JSON file, please fill it in according to the following format.
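For illustration, a Continue config.json entry for a RouterLink model might look like the sketch below. The title, model name, and apiBase URL are placeholders; copy the exact values RouterLink displays for your model, and note that the apiKey value must stay inside double quotation marks.

```json
{
  "models": [
    {
      "title": "Gemini 3 Pro Preview (RouterLink)",
      "provider": "openai",
      "model": "gemini-3-pro-preview",
      "apiBase": "https://<routerlink-endpoint>/v1",
      "apiKey": "<ROUTER_LINK_API_KEY>"
    }
  ]
}
```

To add more models, append additional entries to the models array.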
Step8: Save the configuration file, return to the chat interface, and select the model in the chat box to start chatting (for example, Gemini 3 Pro Preview).

Step9: On the Billing page, you can view the tokens consumed by this model and the corresponding WAI Credits spent.

2. Integrating RouterLink in Cursor
The integration process in Cursor is similar to VS Code and also relies on the Continue plugin.
Step1: Open Cursor.
Step2: Search for "Continue" in the plugin search box, then click "Install" to install it.

Step3: Click the "drop-down" button in the top horizontal bar and select the Continue plugin.

Step4: Click the "settings" button in the upper right corner to navigate to the Configs list.
Step5: Click the "settings" icon to the right of Local Config to open the config.json file for configuration.

Step6: Similarly, add the model parameters you need to the configuration file and save it. (This example uses Claude Opus 4.5.)

Step7: Return to the chat interface to verify the model's usability.

3. Self-Deployment with RouterLink
Step1: Go to the model details page and find the Integration Guide field.
Step2: Click the "Copy" button on the right under Integration Guide to copy the Endpoint and Python code (This operation uses GPT-5.2 as an example).

Step3: Open a program on your computer that can run Python code (e.g., Cursor, VS Code, PyCharm). This example uses PyCharm.
Step4: Paste the copied Endpoint and Python code into PyCharm.
Step5: Replace the api_key="<ROUTER_LINK_API_KEY>" placeholder with the previously generated API key, as shown in the image below.

Step6: Right-click in the editor and select "Run filename" to run the Python code.

Step7: View the output on the console.

Note: This approach is relatively simple and suited to users with Python development experience. It is mainly used to quickly verify that the RouterLink API is configured correctly, or to integrate RouterLink into existing backend services, scripts, or automated tasks.
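The quick verification mentioned in the note above can be sketched as a small helper. The endpoint argument is the URL copied from the Integration Guide, and the default model name is only an example; both are assumptions of this sketch.

```python
import json
import urllib.error
import urllib.request

def check_routerlink(endpoint: str, api_key: str, model: str = "gpt-5.2") -> bool:
    """Send one minimal chat request and report whether it succeeded.

    `endpoint` is the URL copied from the Integration Guide; the default
    model name is only an example and should match one you have access to.
    """
    payload = {"model": model, "messages": [{"role": "user", "content": "ping"}]}
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        # Covers HTTP errors (bad key, unknown model) and network failures.
        return False

# Example (fill in your real endpoint and key):
#   check_routerlink("https://<routerlink-endpoint>/v1/chat/completions",
#                    "<ROUTER_LINK_API_KEY>")
```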
View API Usage
You can visit the RouterLink page to view the total Token Usage and WAI Credits consumed by calls made with this API key.

If you would like to view more detailed information, please visit the Billing page, which will show the cost breakdown for each model.

Alternatively, you can click the number below Token Usage to access the Token Details page for more detailed information.

Stay Connected with RouterLink
Follow @world3_ai to catch every announcement.
RouterLink is more than a product: it's a step toward an open, interoperable AI ecosystem where developers have choice, applications have resilience, and users have freedom.
Join us in building the future of AI access.
Links:
Official Website: routerlink.world3.ai
Litepaper: https://docs.world3.ai/world3/routerlink-litepaper
Quick Start Guide: https://docs.world3.ai/world3/how-to-guides/routerlink/quick-start-guide
Genesis Program (Alpha): https://world3.ai/routerlink?tab=alphaprogram
Communities
Twitter: @world3_ai
Discord: discord.gg/world3
Telegram: t.me/world3_ai