
Cursor

Cursor can call CompactifAI through the same OpenAI-compatible chat completions API you use elsewhere: set your API key, point the OpenAI client at our base URL, and register the model IDs your account can use.
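The request Cursor ends up sending can be sketched directly. This is a minimal, assumption-laden example: the API key is a placeholder, `hypernova-60b` is one of the example model IDs from this guide, and the payload is only built and printed (not sent), so you can inspect the shape before wiring in real credentials.

```python
import json

# Sketch of the OpenAI-compatible chat completions request Cursor issues.
# The key below is a placeholder; hypernova-60b is an example model ID.
base_url = "https://api.compactif.ai/v1"
headers = {
    "Authorization": "Bearer YOUR_COMPACTIFAI_API_KEY",
    "Content-Type": "application/json",
}
payload = {
    "model": "hypernova-60b",  # must be a model ID your account can use
    "messages": [{"role": "user", "content": "Say hello."}],
}

# POST this body to f"{base_url}/chat/completions" with any HTTP client.
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client works the same way: point it at `base_url` and pass the CompactifAI key where it expects the OpenAI key.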

You need a CompactifAI API key and the model ID for each model you plan to use (for example hypernova-60b or glm-5-1).

Click the Auto dropdown and turn off the Auto checkbox so you can manage which models are available.

Cursor model dropdown: disable Auto

Choose Add models to open the Cursor Settings tab.

Click Add models

Turn off the built-in GPT models if you want Cursor to rely on CompactifAI and your custom entries instead.

Disable GPT models in the list

Open Cursor settings and go to API Keys.

Cursor settings: API Keys

Enable the OpenAI API key section so you can override the default OpenAI key inside Cursor. You can turn this off later: Cursor restores its prior behavior on its own, so there is no need to keep a backup of the old value.

Enable OpenAI API key

Click Enable OpenAI API Key (or the equivalent control) to edit the key and endpoint settings.

Enable OpenAI API Key control

Paste your CompactifAI API key where the OpenAI key goes, and enable Override OpenAI Base URL. As with the API key, you do not need to keep a backup; if you turn the override off, Cursor restores the original value.

CompactifAI API key and Override OpenAI Base URL

Set OpenAI Base URL to your CompactifAI API root, for example:

  • CompactifAI Base URL: https://api.compactif.ai/v1

OpenAI Base URL set to CompactifAI

Click Add Custom Model so Cursor knows which model ID to send to the chat completions API.

Add Custom Model

Add a model you have access to, for example hypernova-60b, glm-5-1, or gpt-oss-120b. The ID must match the API model identifier from the models catalog.

Custom model id example
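If you want to double-check an ID against the catalog programmatically, a request to the models endpoint works; this sketch assumes the catalog is exposed at the standard OpenAI-style `/models` path under the same base URL and uses a placeholder key. It only constructs the request object; send it with `urllib.request.urlopen` (or any HTTP client) to get the live list.

```python
import urllib.request

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the models catalog (assumed OpenAI-style
    /models endpoint under the same /v1 root)."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# Placeholder key; substitute your real CompactifAI API key before sending.
req = build_models_request("https://api.compactif.ai/v1", "YOUR_COMPACTIFAI_API_KEY")
print(req.full_url)
```

The `id` fields in the response are the exact strings to enter as custom model IDs in Cursor.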

Close settings. Under Auto, you should see your CompactifAI models. You can leave Auto on or turn it off and select a CompactifAI model explicitly.

CompactifAI models under Auto

Start a chat using your configured model; completions should stream from CompactifAI through the same /v1/chat/completions flow described in Chat Completion.

Chat working with CompactifAI model
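Streamed completions arrive as server-sent events, one delta chunk per `data:` line. The sketch below parses a single sample event offline; the chunk shape is assumed from the standard OpenAI streaming format, and the sample line is illustrative, not a captured CompactifAI response.

```python
import json

# One SSE line as emitted by an OpenAI-style /v1/chat/completions stream
# (illustrative sample, not a real captured response).
sample_event = 'data: {"choices":[{"delta":{"content":"Hello"},"index":0}]}'

def delta_text(event_line: str) -> str:
    """Extract the text delta from one SSE data line ('' for [DONE])."""
    body = event_line.removeprefix("data: ").strip()
    if body == "[DONE]":
        return ""
    chunk = json.loads(body)
    return chunk["choices"][0]["delta"].get("content", "")

print(delta_text(sample_event))  # -> Hello
```

Concatenating the deltas in order reconstructs the full assistant message, which is what Cursor does as the response streams in.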

For request and response fields, see Chat Completion and the API reference.