Providers

Table of contents

  1. Providers
    1. Built-in Providers
      1. DeepSeek
      2. GitHub AI
      3. Moonshot
      4. OpenRouter
      5. Qwen (Alibaba Cloud)
      6. SiliconFlow
      7. Tencent Hunyuan
      8. BigModel
      9. Volcengine
      10. OpenAI
      11. Anthropic Claude
      12. Google Gemini
      13. Ollama
      14. LongCat
      15. CherryIN
      16. Yuanjing
    2. Provider Selection
      1. Using Configuration
      2. Using Picker
      3. Using Model Picker
    3. Protocols
      1. OpenAI Protocol (Default)
      2. Anthropic Protocol
      3. Gemini Protocol
    4. Custom Providers
      1. Creating a Custom Provider
      2. Required Functions
      3. Optional Fields
      4. Using Custom Provider
    5. Custom Protocols
      1. Protocol Functions
    6. API Key Configuration
      1. Single Provider
      2. Multiple Providers
      3. Environment Variables
    7. Switching Providers
      1. Method 1: Configuration
      2. Method 2: Picker
      3. Method 3: Model Picker
    8. Provider-Specific Notes
      1. DeepSeek
      2. OpenAI
      3. Anthropic
      4. Google Gemini
      5. Ollama
    9. Troubleshooting
      1. API Key Issues
      2. Provider Not Found
      3. Protocol Errors
    10. Next Steps

chat.nvim uses a two-layer architecture for AI service integration:

  • Providers: Handle HTTP requests to specific AI services (DeepSeek, OpenAI, GitHub, etc.)
  • Protocols: Parse API responses from different AI services (OpenAI, Anthropic, etc.)

Most AI services use OpenAI-compatible APIs, so the default protocol is openai. Providers can specify a custom protocol via the protocol field if needed.
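The split can be pictured as a provider module skeleton; the layout mirrors the Custom Providers section below, and the body here is illustrative rather than the plugin's actual implementation:

```lua
-- Sketch of the provider layer: a module that sends the HTTP request
-- and names the protocol used to parse the response.
local M = {}

function M.available_models()
  return { 'example-model' }
end

function M.request(opt)
  -- start curl, stream response chunks to opt.on_stdout, return the job id
end

-- Optional: omit this field to fall back to the default 'openai' protocol
M.protocol = 'openai'

return M
```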


Built-in Providers

chat.nvim comes with built-in support for 16 AI providers:

1. DeepSeek

DeepSeek AI

provider = 'deepseek'
model = 'deepseek-chat'  -- or 'deepseek-coder'

Available Models:

  • deepseek-chat - General purpose chat model
  • deepseek-coder - Code-specialized model

2. GitHub AI

GitHub AI

provider = 'github'
model = 'gpt-4o'  -- or other GitHub models

Configuration:

api_key = {
  github = 'github_pat_xxxxxxxx',  -- GitHub Personal Access Token
}

3. Moonshot

Moonshot AI

provider = 'moonshot'
model = 'moonshot-v1-8k'

Available Models:

  • moonshot-v1-8k - 8K context window
  • moonshot-v1-32k - 32K context window
  • moonshot-v1-128k - 128K context window

4. OpenRouter

OpenRouter

provider = 'openrouter'
model = 'openai/gpt-4-turbo'  -- Access multiple models through OpenRouter

Configuration:

api_key = {
  openrouter = 'sk-or-xxxxxxxx',
}

5. Qwen (Alibaba Cloud)

Alibaba Cloud Qwen

provider = 'qwen'
model = 'qwen-turbo'

Available Models:

  • qwen-turbo - Fast model
  • qwen-plus - Balanced model
  • qwen-max - Most capable model

6. SiliconFlow

SiliconFlow

provider = 'siliconflow'
model = 'Qwen/Qwen2.5-7B-Instruct'

Configuration:

api_key = {
  siliconflow = 'xxxxxxxx-xxxx-xxxx',
}

7. Tencent Hunyuan

Tencent Hunyuan

provider = 'tencent'
model = 'hunyuan-lite'

Available Models:

  • hunyuan-lite - Lite version
  • hunyuan-standard - Standard version
  • hunyuan-pro - Pro version

8. BigModel

BigModel AI

provider = 'bigmodel'
model = 'glm-4'

Configuration:

api_key = {
  bigmodel = 'xxxxxxxx-xxxx-xxxx',
}

9. Volcengine

Volcengine AI

provider = 'volcengine'
model = 'doubao-pro-4k'

Configuration:

api_key = {
  volcengine = 'xxxxxxxx-xxxx-xxxx',
}

10. OpenAI

OpenAI

provider = 'openai'
model = 'gpt-4o'  -- or 'gpt-4-turbo', 'gpt-3.5-turbo'

Available Models:

  • gpt-4o - Latest GPT-4 Omni
  • gpt-4-turbo - GPT-4 Turbo
  • gpt-3.5-turbo - GPT-3.5 Turbo

11. Anthropic Claude

Anthropic Claude

provider = 'anthropic'
model = 'claude-3-5-sonnet-20241022'

Available Models:

  • claude-3-5-sonnet-20241022 - Latest Claude 3.5 Sonnet
  • claude-3-opus-20240229 - Claude 3 Opus
  • claude-3-haiku-20240307 - Claude 3 Haiku

Anthropic uses a different protocol (anthropic) instead of the default OpenAI protocol.

12. Google Gemini

Google Gemini

provider = 'gemini'
model = 'gemini-1.5-flash'

Available Models:

  • gemini-1.5-flash - Fast model
  • gemini-1.5-pro - Most capable model

Gemini uses a different protocol (gemini) instead of the default OpenAI protocol.

13. Ollama

Ollama

provider = 'ollama'
model = 'llama2'  -- or any locally installed model

Setup:

  1. Install Ollama: https://ollama.ai/
  2. Pull a model: ollama pull llama2
  3. Ollama runs locally; no API key is required
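The steps above reduce to a minimal configuration; since Ollama serves requests locally, there is no api_key entry (a sketch, assuming the defaults shown elsewhere in this document):

```lua
require('chat').setup({
  provider = 'ollama',
  model = 'llama2',
  -- no api_key entry: Ollama listens on http://localhost:11434 by default
})
```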

14. LongCat

LongCat AI

provider = 'longcat'
model = 'longcat-chat'

Configuration:

api_key = {
  longcat = 'lc-xxxxxxxxxxxx',
}

15. CherryIN

CherryIN AI

provider = 'cherryin'
model = 'cherryin-chat'

Configuration:

api_key = {
  cherryin = 'sk-xxxxxxxxxxxx',
}

16. Yuanjing

Yuanjing AI

provider = 'yuanjing'
model = 'yuanjing-chat'

Provider Selection

Using Configuration

Set default provider in your configuration:

require('chat').setup({
  provider = 'deepseek',
  model = 'deepseek-chat',
  api_key = {
    deepseek = 'sk-xxxxxxxxxxxx',
  },
})

Using Picker

Switch providers dynamically using the picker:

:Picker chat_provider
" or use the keybinding
<Leader>fp

Using Model Picker

Select a model for the current provider:

:Picker chat_model
" or use the keybinding
<Leader>fm

Protocols

Protocols handle parsing of API responses. chat.nvim supports multiple protocols:

OpenAI Protocol (Default)

Most AI services use OpenAI-compatible API format. This is the default protocol for all built-in providers.

Response Format:

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Response text"
      }
    }
  ]
}
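In streaming mode (the stream = true used by the provider example later in this document), the response arrives as Server-Sent Events rather than a single JSON object: each line is prefixed with `data: ` and carries a delta chunk, and the stream ends with `data: [DONE]`. A sketch of handling one such line — the function name is illustrative, not the plugin's actual API:

```lua
-- Illustrative handler for one SSE line in the OpenAI streaming format.
local function handle_sse_line(line)
  if not vim.startswith(line, 'data: ') then return end
  local payload = line:sub(#'data: ' + 1)
  if payload == '[DONE]' then
    return nil, true  -- stream finished
  end
  local ok, chunk = pcall(vim.json.decode, payload)
  if not ok then return end
  -- streaming chunks carry text under choices[1].delta.content
  local delta = chunk.choices and chunk.choices[1] and chunk.choices[1].delta
  return delta and delta.content
end
```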

Anthropic Protocol

Used by Anthropic Claude. The anthropic provider automatically uses this protocol.

Response Format:

{
  "content": [
    {
      "type": "text",
      "text": "Response text"
    }
  ]
}

Gemini Protocol

Used by Google Gemini. The gemini provider automatically uses this protocol.

Response Format:

{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "Response text"
          }
        ]
      }
    }
  ]
}

Custom Providers

You can create custom providers for AI services not in the built-in list.

Creating a Custom Provider

Create a file at ~/.config/nvim/lua/chat/providers/<provider_name>.lua:

-- ~/.config/nvim/lua/chat/providers/my_provider.lua
local M = {}
local job = require('job')
local sessions = require('chat.sessions')
local config = require('chat.config')

function M.available_models()
  return {
    'model-1',
    'model-2',
    'model-3',
  }
end

function M.request(opt)
  local cmd = {
    'curl',
    '-s',
    'https://api.example.com/v1/chat/completions',
    '-H',
    'Content-Type: application/json',
    '-H',
    'Authorization: Bearer ' .. config.config.api_key.my_provider,
    '-X',
    'POST',
    '-d',
    '@-',  -- read the request body from stdin
  }

  local body = vim.json.encode({
    model = sessions.get_session_model(opt.session),
    messages = opt.messages,
    stream = true,
    stream_options = { include_usage = true },
    tools = require('chat.tools').available_tools(),
  })

  local jobid = job.start(cmd, {
    on_stdout = opt.on_stdout,
    on_stderr = opt.on_stderr,
    on_exit = opt.on_exit,
  })
  job.send(jobid, body)
  job.send(jobid, nil)  -- close stdin so curl sends the request
  sessions.set_session_jobid(opt.session, jobid)

  return jobid
end

-- Optional: specify custom protocol (defaults to 'openai')
-- M.protocol = 'anthropic'

return M

Required Functions

A provider module must implement:

  1. available_models() - Return a list of available model names
  2. request(opt) - Send HTTP request and return job ID

Optional Fields

  • protocol - Specify which protocol to use (default: openai)

Using Custom Provider

After creating the provider file, configure it in your setup:

require('chat').setup({
  provider = 'my_provider',
  model = 'model-1',
  api_key = {
    my_provider = 'your-api-key-here',
  },
})

Custom Protocols

If you need a custom protocol, create a file at ~/.config/nvim/lua/chat/protocols/<protocol_name>.lua:

-- ~/.config/nvim/lua/chat/protocols/my_protocol.lua
local M = {}

function M.on_stdout(id, data)
  -- Parse stdout data from curl
  -- Call require('chat.sessions').append_stream(id, content)
end

function M.on_stderr(id, data)
  -- Handle stderr data
end

function M.on_exit(id, code, signal)
  -- Handle request completion
  -- Call require('chat.sessions').complete_stream(id)
end

return M

Protocol Functions

  • on_stdout(id, data) - Handle stdout data from curl
  • on_stderr(id, data) - Handle stderr data
  • on_exit(id, code, signal) - Handle request completion

See lua/chat/protocols/openai.lua for a reference implementation.


API Key Configuration

Single Provider

require('chat').setup({
  provider = 'deepseek',
  api_key = {
    deepseek = 'sk-xxxxxxxxxxxx',
  },
})

Multiple Providers

require('chat').setup({
  provider = 'deepseek',  -- Default provider
  api_key = {
    deepseek = 'sk-xxxxxxxxxxxx',
    github = 'github_pat_xxxxxxxx',
    openai = 'sk-xxxxxxxxxxxx',
    anthropic = 'sk-ant-xxxxxxxxxxxx',
  },
})

Environment Variables

You can also use environment variables:

require('chat').setup({
  api_key = {
    deepseek = os.getenv('DEEPSEEK_API_KEY'),
    openai = os.getenv('OPENAI_API_KEY'),
  },
})
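The corresponding variables would be exported from your shell startup file before launching Neovim (the key values below are placeholders):

```shell
# e.g. in ~/.bashrc or ~/.zshrc
export DEEPSEEK_API_KEY='sk-xxxxxxxxxxxx'
export OPENAI_API_KEY='sk-xxxxxxxxxxxx'
```

With the exports in place, os.getenv returns the keys at setup time.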

Switching Providers

Method 1: Configuration

Change the default provider in configuration:

require('chat').setup({
  provider = 'openai',
  model = 'gpt-4o',
})

Method 2: Picker

Use the picker to switch providers interactively:

:Picker chat_provider

Select the provider you want to use, and it will be applied to the current session.

Method 3: Model Picker

Use the model picker to select a model for the current provider:

:Picker chat_model

This will show all available models for the current provider.


Provider-Specific Notes

DeepSeek

  • Default model: deepseek-chat
  • API Base: https://api.deepseek.com
  • Supports: Streaming, function calling

OpenAI

  • Default model: gpt-4o
  • API Base: https://api.openai.com
  • Supports: Streaming, function calling, vision

Anthropic

  • Default model: claude-3-5-sonnet-20241022
  • API Base: https://api.anthropic.com
  • Protocol: Uses anthropic protocol (not OpenAI-compatible)
  • Supports: Streaming, function calling

Google Gemini

  • Default model: gemini-1.5-flash
  • API Base: https://generativelanguage.googleapis.com
  • Protocol: Uses gemini protocol (not OpenAI-compatible)
  • Supports: Streaming, function calling, vision

Ollama

  • Default model: llama2
  • API Base: http://localhost:11434
  • No API key required: Runs locally
  • Supports: Streaming, function calling

Troubleshooting

API Key Issues

Make sure your API key is correct and has the necessary permissions.

Test your API key:

# DeepSeek
curl https://api.deepseek.com/v1/models \
  -H "Authorization: Bearer sk-xxxxxxxxxxxx"

# OpenAI
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer sk-xxxxxxxxxxxx"

Provider Not Found

If you get a “Provider not found” error:

  1. Check the provider name is correct
  2. Ensure the provider file exists in lua/chat/providers/
  3. Verify the provider module returns the correct functions
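One way to check step 2 from inside Neovim is to ask whether the file resolves on the runtime path; nvim_get_runtime_file is a standard Neovim API, and the provider name below is illustrative:

```vim
" Prints the resolved path, or nothing if the file is not found
:lua print(vim.api.nvim_get_runtime_file('lua/chat/providers/my_provider.lua', false)[1])
```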

Protocol Errors

If you get protocol-related errors:

  1. Check if the provider uses a custom protocol
  2. Ensure the protocol file exists in lua/chat/protocols/
  3. Verify the protocol module implements all required functions

Next Steps