
Migrate from Portkey

The Ferro Labs AI Gateway is a self-hosted alternative to Portkey. Both expose an OpenAI-compatible endpoint, so migration comes down to a base-URL change and a config-file rewrite. As of v1.0.0 the gateway is a stable release with semver guarantees: the config format and API surface are locked in.

For a detailed comparison of Ferro Labs vs Portkey and other alternatives, see Why Ferro Labs.

What changes

| Concept | Portkey | Ferro Labs AI Gateway |
| --- | --- | --- |
| Provider credentials | Portkey dashboard virtual keys | `providers` block in `config.yaml` |
| Gateway request header | `x-portkey-api-key` + `x-portkey-config` | Standard `Authorization: Bearer` header |
| Configs (routing rules) | Portkey Config objects (JSON/dashboard) | `config.yaml` `strategy` block |
| Fallback | `strategy: {mode: fallback}` in Portkey Config | `strategy: {mode: fallback}` |
| Load balancing | `strategy: {mode: loadbalance}` + `weight` | `strategy: {mode: loadbalance}` + `weight` |
| Retries | `retry: {attempts: N}` per target | `retry: {attempts: N}` per target |
| Caching | Portkey semantic cache (cloud) | `response-cache` plugin (exact-match) |
| Observability | Portkey logs dashboard | `request-logger` plugin + admin API |
| Guardrails | Portkey Guardrails (cloud) | `word-filter`, `max-token`, `rate-limit` plugins (OSS) |
| Prompt templates | Portkey Prompt Library | `prompt_templates` in `config.yaml` |

Step 1 — Start the gateway

docker run -d -p 8080:8080 \
  -v $(pwd)/config.yaml:/config.yaml \
  ghcr.io/ferro-labs/ai-gateway:latest

Step 2 — Rewrite your config

Portkey config (before)

Portkey uses JSON config objects passed via header or dashboard:

{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "openai-key-abc", "weight": 1 },
    { "virtual_key": "anthropic-key-xyz", "weight": 1 }
  ],
  "retry": { "attempts": 3, "on_status_codes": [429, 502, 503] }
}

Ferro Labs AI Gateway config (after)

providers:
  - key: openai
    provider: openai
    api_key: "${OPENAI_API_KEY}"

  - key: anthropic
    provider: anthropic
    api_key: "${ANTHROPIC_API_KEY}"

strategy:
  mode: fallback

targets:
  - virtual_key: openai
    retry:
      attempts: 3
      retry_on_status: [429, 502, 503, 504]
  - virtual_key: anthropic
    retry:
      attempts: 2
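The `${OPENAI_API_KEY}` placeholders imply shell-style environment-variable expansion when the config is loaded. A minimal Python sketch of that substitution — the `expand_env` helper and the raise-on-unset behavior are our illustration, not the gateway's documented implementation:

```python
import os
import re

def expand_env(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment."""
    def repl(match: re.Match) -> str:
        name = match.group(1)
        value = os.environ.get(name)
        if value is None:
            # Assumption: failing loudly is safer than inserting an empty string.
            raise KeyError(f"environment variable {name} is not set")
        return value
    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}", repl, text)

os.environ["OPENAI_API_KEY"] = "sk-test"
print(expand_env('api_key: "${OPENAI_API_KEY}"'))  # api_key: "sk-test"
```

Keeping secrets in the environment rather than in `config.yaml` means the file can be committed without leaking keys.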

Step 3 — Update the base URL and headers

# Before (Portkey)
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="dummy",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="pcpk-your-key",
        virtual_key="openai-key-abc",
    ),
)

# After (Ferro Labs AI Gateway)
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key",
    base_url="http://localhost:8080",
)
# No extra headers needed — credentials live in config.yaml

Model names are identical — no changes to your messages or parameters are needed.

Step 4 — Migrate specific features

Weighted load balancing

# Portkey config:
# { "strategy": { "mode": "loadbalance" },
#   "targets": [{"virtual_key": "oai", "weight": 7},
#               {"virtual_key": "anth", "weight": 3}] }

# Ferro config.yaml:
strategy:
  mode: loadbalance

targets:
  - virtual_key: openai
    weight: 7
  - virtual_key: anthropic
    weight: 3
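Weights of 7 and 3 mean roughly 70% of requests land on `openai`. A quick Python sketch of weighted selection — this illustrates the concept and is not the gateway's actual implementation:

```python
import random

targets = [("openai", 7), ("anthropic", 3)]

def pick_target(rng: random.Random) -> str:
    """Pick a target name with probability proportional to its weight."""
    names = [name for name, _ in targets]
    weights = [weight for _, weight in targets]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
picks = [pick_target(rng) for _ in range(10_000)]
print(picks.count("openai") / len(picks))  # roughly 0.7
```

Note that weights are relative, not percentages: 7/3 routes the same way as 70/30.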

Conditional routing by model

# Route by model prefix (Portkey calls this "router" in their docs)
strategy:
  mode: conditional
  conditions:
    - key: model_prefix
      value: gpt
      target_key: openai
    - key: model_prefix
      value: claude
      target_key: anthropic

targets:
  - virtual_key: openai
  - virtual_key: anthropic
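The conditions above amount to a first-match prefix check on the requested model name. A hedged Python sketch of that logic — `route_model` and the raise-on-no-match behavior are our assumptions for illustration:

```python
# Mirrors the conditions list from the config above.
conditions = [
    ("gpt", "openai"),
    ("claude", "anthropic"),
]

def route_model(model: str) -> str:
    """Return the first target whose prefix matches the model name."""
    for prefix, target in conditions:
        if model.startswith(prefix):
            return target
    raise ValueError(f"no route for model {model!r}")

print(route_model("gpt-4o-mini"))        # openai
print(route_model("claude-3-5-sonnet"))  # anthropic
```

Order matters: the first matching condition wins, so place more specific prefixes first.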

Prompt templates

Portkey Prompt Library → Ferro Labs prompt_templates:

prompt_templates:
  - id: support-agent
    template: |
      You are a helpful support agent for {{company}}.
      Respond in {{language}}. Be concise.

Reference the template from your client via extra_body:

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How do I reset my password?"}],
    extra_body={
        "template_id": "support-agent",
        "template_vars": {"company": "Acme Corp", "language": "English"},
    },
)
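Server-side, rendering reduces to substituting each {{var}} placeholder with the supplied value. A minimal sketch of how the support-agent template above might expand — `render_template` is our helper, not a gateway API:

```python
import re

template = (
    "You are a helpful support agent for {{company}}.\n"
    "Respond in {{language}}. Be concise."
)

def render_template(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from variables."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

print(render_template(template, {"company": "Acme Corp", "language": "English"}))
```

A missing variable raises KeyError in this sketch; how the real gateway handles unset template_vars is not specified here.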

Response caching

plugins:
  - name: response-cache
    type: transform
    stage: before_request
    enabled: true
    config:
      max_age: 300        # seconds
      max_entries: 1000
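Exact-match caching keys on the model plus a hash of the messages, so any change to the prompt is a cache miss. A sketch of one way such a key could be derived — the `cache_key` function is our illustration; the plugin's actual key scheme may differ:

```python
import hashlib
import json

def cache_key(model: str, messages: list) -> str:
    """Hash the model name and messages into a deterministic cache key."""
    # sort_keys makes the serialization order-independent for dict fields.
    payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

msgs = [{"role": "user", "content": "How do I reset my password?"}]
k1 = cache_key("gpt-4o-mini", msgs)
k2 = cache_key("gpt-4o-mini", msgs)
k3 = cache_key("gpt-4o-mini", [{"role": "user", "content": "Hi"}])
print(k1 == k2, k1 == k3)  # True False
```

This is why exact-match caching helps with repeated identical requests but, unlike Portkey's semantic cache, never matches paraphrased prompts.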

Feature parity notes

  • Portkey Hosted: There is no hosted tier in the OSS gateway; you self-host. See getting started for Docker and binary options.
  • Semantic caching: Portkey's vector-based semantic cache is not in the OSS gateway. The response-cache plugin uses exact-match (same model + messages hash).
  • Portkey Guardrails (PII, prompt injection): Available in Ferro Labs Managed. See OSS vs Ferro Labs Managed for the full comparison.
  • Portkey Analytics dashboard: Available in Ferro Labs Managed. The OSS gateway exposes structured logs via the request-logger plugin and a queryable admin API.