
OpenAI Proxy Server

CLI tool to create an LLM proxy server that translates OpenAI API calls to any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.). Supports 100+ models; see the Provider List.

Quick start

Call Huggingface models through your OpenAI proxy.

Start Proxy

$ pip install litellm
$ litellm --model huggingface/bigcode/starcoder

#INFO: Uvicorn running on http://0.0.0.0:8000

This will host a local proxy API at: http://0.0.0.0:8000

Test Proxy

Make a test ChatCompletion request to your proxy:

$ litellm --test http://0.0.0.0:8000
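
You can also send a raw ChatCompletion request yourself. A minimal sketch in Python, assuming the proxy exposes an OpenAI-compatible /chat/completions route (the requests library and the model name shown are illustrative):

import requests

# Smoke test against the local proxy, assuming an OpenAI-compatible
# /chat/completions route. Routing is controlled by the --model flag the
# proxy was started with, so the model name here is a placeholder.
response = requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={
        "model": "gpt-3.5-turbo",  # placeholder
        "messages": [{"role": "user", "content": "Hey, how's it going?"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])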

Other supported models:

$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1


Deploy Proxy

Deploy the proxy to https://api.litellm.ai

$ export ANTHROPIC_API_KEY=sk-ant-api03-1..
$ litellm --model claude-instant-1 --deploy

#INFO: Uvicorn running on https://api.litellm.ai/44508ad4

This will host a ChatCompletions API at: https://api.litellm.ai/44508ad4

Other supported models:

$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1 --deploy

Test Deployed Proxy

Make a test ChatCompletion request to your deployed proxy:

$ litellm --test https://api.litellm.ai/44508ad4
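
You can also point the openai Python SDK at the deployed endpoint. A minimal sketch, assuming the pre-1.0 openai package (which reads api_base as a module-level setting):

import openai

# Point the pre-1.0 openai SDK at the deployed proxy. The api_key is a
# placeholder; the proxy itself holds the real provider key.
openai.api_base = "https://api.litellm.ai/44508ad4"
openai.api_key = "anything"

response = openai.ChatCompletion.create(
    model="claude-instant-1",
    messages=[{"role": "user", "content": "Hello from the deployed proxy"}],
)
print(response["choices"][0]["message"]["content"])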

Setting API base, temperature, and max tokens

litellm --model huggingface/bigcode/starcoder \
--api_base https://my-endpoint.huggingface.cloud \
--max_tokens 250 \
--temperature 0.5

Ollama example

$ litellm --model ollama/llama2 --api_base http://localhost:11434
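
You can then send requests to http://0.0.0.0:8000 as with any other backend. The sketch below streams tokens from the Ollama-backed proxy, assuming it mirrors OpenAI's server-sent-events streaming format ("data: {...}" lines ending in "data: [DONE]"):

import json
import requests

# Stream a completion from the Ollama-backed proxy, assuming OpenAI-style
# server-sent events. Each "data:" line carries a JSON chunk with a delta.
with requests.post(
    "http://0.0.0.0:8000/chat/completions",
    json={
        "model": "ollama/llama2",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "stream": True,
    },
    stream=True,
) as response:
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
print()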

Tutorial - using Huggingface LLMs with Aider

Aider is an AI pair programming tool that runs in your terminal.

However, it only accepts OpenAI API calls.

In this tutorial we'll use Aider with WizardCoder (hosted on HF Inference Endpoints).

[NOTE]: To learn how to deploy a model on Huggingface, see the Huggingface Inference Endpoints documentation.

Step 1: Install aider and litellm

$ pip install aider-chat litellm

Step 2: Spin up local proxy

Save your Huggingface API key in your local environment (you can also do this via a .env file):

$ export HUGGINGFACE_API_KEY=my-huggingface-api-key

Point your local proxy at your model endpoint:

$ litellm \
--model huggingface/WizardLM/WizardCoder-Python-34B-V1.0 \
--api_base https://my-endpoint.huggingface.cloud

This will host a local proxy API at: http://0.0.0.0:8000

Step 3: Replace the OpenAI API base in Aider

Aider lets you set the OpenAI API base, so let's point it at our proxy instead:

$ aider --openai-api-base http://0.0.0.0:8000

And that's it!