Workflow Overview

When a client sends a request to AIClient-2-API, the general workflow is as follows:

  1. Request Reception: api-server.js receives the client's request.
  2. Auth & Protocol Identification: The system validates the API key, and api-manager.js identifies the request protocol (OpenAI/Gemini/Claude).
  3. Request Conversion (Converter): ConverterFactory converts the request body into a unified internal format based on the identified protocol.
  4. Routing & Provider Scheduling: In auto mode, the system selects the optimal provider pool from ProviderPoolManager based on the requested model name.
  5. Adapter Execution (Adapter): src/providers/adapter.js calls the corresponding Core module (e.g., gemini-core.js) to communicate with the actual LLM API.
  6. Response Conversion (Converter): Once the LLM returns a native response, the converter transforms it back into the protocol format expected by the client.
  7. Plugin Hook Triggering: Before and after the response is sent, the system fires plugin hooks (e.g., onStreamChunk for streaming responses), allowing custom logic to intercept and transform the data.
  8. Response Sending: The final result is sent back to the client.
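Step 3 can be sketched as follows. This is a hypothetical illustration of what a protocol-aware converter might look like; the field names of the unified internal format and the exact shape of AIClient-2-API's real ConverterFactory are assumptions, not taken from the project's source.

```javascript
// Hypothetical sketch of request conversion (step 3): map a
// protocol-specific request body to one unified internal shape.
// The unified format below is assumed for illustration only.
function toUnifiedRequest(protocol, body) {
  switch (protocol) {
    case 'openai':
      return {
        model: body.model,
        messages: body.messages.map(m => ({ role: m.role, content: m.content })),
        stream: Boolean(body.stream),
      };
    case 'claude':
      // Claude-style requests carry the system prompt in a separate field;
      // fold it into the unified messages array.
      return {
        model: body.model,
        messages: [
          ...(body.system ? [{ role: 'system', content: body.system }] : []),
          ...body.messages,
        ],
        stream: Boolean(body.stream),
      };
    default:
      throw new Error(`Unsupported protocol: ${protocol}`);
  }
}

const unified = toUnifiedRequest('openai', {
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

Because every provider adapter consumes this single unified shape, adding a new client-facing protocol only requires one new converter, not changes to every adapter.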
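Step 4's provider scheduling could look roughly like the sketch below. The pool names, the model-prefix routing, and the round-robin strategy are assumptions for illustration; the actual ProviderPoolManager may use different selection logic (e.g., health checks or weighted scheduling).

```javascript
// Hypothetical sketch of provider scheduling (step 4): in auto mode,
// choose a pool from the requested model name, then rotate through
// the pool's credentials round-robin. Illustrative only.
class ProviderPoolManager {
  constructor(pools) {
    this.pools = pools;   // e.g. { gemini: [...], openai: [...] }
    this.cursor = {};     // per-pool round-robin position
  }

  selectProvider(modelName) {
    const poolName = modelName.startsWith('gemini') ? 'gemini'
                   : modelName.startsWith('claude') ? 'claude'
                   : 'openai';
    const pool = this.pools[poolName] || [];
    if (pool.length === 0) throw new Error(`No providers available for ${poolName}`);
    const i = (this.cursor[poolName] =
      ((this.cursor[poolName] ?? -1) + 1) % pool.length);
    return pool[i];
  }
}

const mgr = new ProviderPoolManager({
  gemini: ['gemini-key-1', 'gemini-key-2'],
  openai: ['openai-key-1'],
});
const first = mgr.selectProvider('gemini-1.5-pro');
const second = mgr.selectProvider('gemini-1.5-pro');
```

Rotating through a pool this way spreads traffic across multiple credentials, which is the usual motivation for pooling providers behind one endpoint.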
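The plugin-hook mechanism of step 7 can be sketched as a small registry that pipes each stream chunk through every registered handler. The hook name onStreamChunk comes from the text above; the registry API and chunk shape are assumptions for illustration.

```javascript
// Hypothetical sketch of plugin hooks (step 7). Plugins register
// handlers by hook name; each onStreamChunk handler may transform
// the chunk before it is forwarded to the client.
const hooks = { onStreamChunk: [] };

function registerPlugin(plugin) {
  for (const [name, fn] of Object.entries(plugin)) {
    if (hooks[name]) hooks[name].push(fn);
  }
}

// Run every registered handler in order, threading the chunk through.
function runStreamChunkHooks(chunk) {
  return hooks.onStreamChunk.reduce((c, fn) => fn(c), chunk);
}

// Example plugin: upper-case the streamed text.
registerPlugin({
  onStreamChunk: chunk => ({ ...chunk, text: chunk.text.toUpperCase() }),
});

const out = runStreamChunkHooks({ text: 'hello' });
```

Chaining handlers with `reduce` means plugins compose: each sees the output of the previous one, so independent plugins can filter, redact, or log chunks without knowing about each other.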

This layered, modular design keeps AIClient-2-API flexible and maintainable, making it straightforward to add support for new LLMs or API formats.
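One way that extensibility could look in practice is a small adapter registry: a new provider only has to implement a minimal interface and register itself. The interface shape and registry below are assumptions for illustration, not the project's actual adapter API.

```javascript
// Hypothetical sketch of extending the system with a new provider.
// Each adapter exposes invoke(unifiedRequest); the registry is assumed.
const adapters = new Map();

function registerAdapter(name, adapter) {
  if (typeof adapter.invoke !== 'function') {
    throw new Error('Adapter must implement invoke(unifiedRequest)');
  }
  adapters.set(name, adapter);
}

// A toy provider that echoes the last user message back.
registerAdapter('echo', {
  invoke: req => ({ text: req.messages.at(-1).content }),
});

const reply = adapters.get('echo').invoke({
  messages: [{ role: 'user', content: 'ping' }],
});
```

Because adapters only ever see the unified request format, the rest of the pipeline (auth, conversion, pooling, hooks) needs no changes when a provider is added.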