
AI Setup

NuxtBase includes an AI chat feature backed by the Vercel AI SDK.

At the template level, AI setup is intentionally small:

  • choose a provider
  • choose a model
  • provide the matching API key

The provider abstraction currently supports:

  • openai
  • openrouter

The example env file starts with:

NUXT_AI_PROVIDER=openai
NUXT_AI_MODEL=gpt-4o-mini
NUXT_OPENAI_API_KEY=sk-...

In other words, the default is OpenAI unless you explicitly switch to OpenRouter.
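Under Nuxt's convention, `NUXT_`-prefixed environment variables override `runtimeConfig` keys of the matching camelCase name. A minimal sketch of how the template's config could be declared (the key names here are assumed for illustration, not taken from the template source):

```typescript
// nuxt.config.ts (sketch; key names are assumed, not the template's actual code)
export default defineNuxtConfig({
  runtimeConfig: {
    // Server-side values; each is overridden by the matching NUXT_* env var:
    aiProvider: 'openai',    // NUXT_AI_PROVIDER
    aiModel: 'gpt-4o-mini',  // NUXT_AI_MODEL
    openaiApiKey: '',        // NUXT_OPENAI_API_KEY
    openrouterApiKey: '',    // NUXT_OPENROUTER_API_KEY
  },
})
```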

If you are not enabling AI yet, clear the placeholder API keys after copying .env.example:

# NUXT_OPENAI_API_KEY=
# NUXT_OPENROUTER_API_KEY=

You can leave the provider and model defaults in place for now, but do not leave fake non-empty keys such as sk-... in the file. Those values look disabled to a human, but they are still non-empty strings in runtime config.
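This matters because a typical guard only checks that a key is non-empty. A tiny illustration (hypothetical helper, not the template's code):

```typescript
// Hypothetical check mirroring what a config guard might do: any non-empty
// string counts as "configured", including an obvious placeholder.
function isKeyConfigured(key: string | undefined): boolean {
  return typeof key === 'string' && key.trim().length > 0
}

console.log(isKeyConfigured('sk-...')) // placeholder still passes: true
console.log(isKeyConfigured(''))       // cleared key correctly fails: false
```

This is why clearing the value (rather than leaving `sk-...` in place) is the safe way to mark AI as disabled.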

Use this when you want the simplest path (the OpenAI default):

NUXT_AI_PROVIDER=openai
NUXT_AI_MODEL=gpt-4o-mini
NUXT_OPENAI_API_KEY=sk-...

If NUXT_AI_PROVIDER is openai, the template builds the model through createOpenAI() and uses NUXT_OPENAI_API_KEY.
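The branch can be pictured as a small switch over the provider value. The sketch below is illustrative only (the names are assumed; in the real template the `openai` branch calls `createOpenAI()` from the AI SDK's OpenAI provider package and the `openrouter` branch calls `createOpenRouter()`):

```typescript
// Illustrative provider switch (assumed shape, not the template's source).
interface AiSettings {
  provider: string
  model: string
  openaiApiKey: string
  openrouterApiKey: string
}

function resolveAiKey(s: AiSettings): string {
  switch (s.provider) {
    case 'openai':
      return s.openaiApiKey      // would be passed to createOpenAI({ apiKey })
    case 'openrouter':
      return s.openrouterApiKey  // would be passed to createOpenRouter({ apiKey })
    default:
      throw new Error(`Unsupported NUXT_AI_PROVIDER: ${s.provider}`)
  }
}
```

The point to take away: each provider reads its own key variable, so setting `NUXT_OPENROUTER_API_KEY` has no effect while the provider is `openai`, and vice versa.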

Practical setup steps:

  1. Sign in to the OpenAI API platform.
  2. Open your API project and go to the API keys page.
  3. Create a new secret key and copy it immediately.
  4. Paste that value into NUXT_OPENAI_API_KEY in .env.
  5. Restart your local dev server.

For a first key on a new OpenAI API account, OpenAI may require phone verification before the key can be created.
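Before restarting, a quick sanity check can confirm the pasted value looks like a real secret key rather than the placeholder. The shape test below is a heuristic assumption (OpenAI secret keys begin with `sk-` and are fairly long), not an official format guarantee:

```typescript
// Heuristic sanity check for a pasted OpenAI key (assumed shape, not an
// official key format specification).
function looksLikeRealOpenAIKey(key: string): boolean {
  return key.startsWith('sk-') && key.length > 20 && !key.includes('...')
}

console.log(looksLikeRealOpenAIKey('sk-...')) // the placeholder fails: false
```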

Use this if you want model routing or access to models through OpenRouter:

NUXT_AI_PROVIDER=openrouter
NUXT_AI_MODEL=openai/gpt-4o-mini
NUXT_OPENROUTER_API_KEY=sk-or-...

If NUXT_AI_PROVIDER is openrouter, the template builds the model through createOpenRouter() and uses NUXT_OPENROUTER_API_KEY.
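One practical gotcha when switching providers: OpenRouter model ids carry a vendor prefix (`openai/gpt-4o-mini`), while plain OpenAI expects the bare name (`gpt-4o-mini`). A quick guard (hypothetical helper, not part of the template):

```typescript
// Hypothetical mismatch check: OpenRouter model ids include a vendor prefix
// ("openai/gpt-4o-mini"); bare OpenAI model names ("gpt-4o-mini") do not.
function modelMatchesProvider(provider: string, model: string): boolean {
  if (provider === 'openrouter') return model.includes('/')
  if (provider === 'openai') return !model.includes('/')
  return false
}

console.log(modelMatchesProvider('openrouter', 'openai/gpt-4o-mini')) // true
console.log(modelMatchesProvider('openai', 'openai/gpt-4o-mini'))     // false
```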

Practical setup steps:

  1. Sign in to OpenRouter.
  2. If you plan to use paid models, add credits to the account you will test with.
  3. Open the API key settings page and create a new API key.
  4. Give the key a clear label and optionally set a credit limit.
  5. Paste that value into NUXT_OPENROUTER_API_KEY in .env.
  6. Restart your local dev server.

If you only want to test the wiring first, start with one inexpensive model and verify the app boots cleanly before you experiment with more expensive routing options.

If you are postponing AI, remove the placeholder API keys and stop here.

AI setup is optional for initial app boot, but if you do enable it, use a real provider key and a model name that matches that provider.

  1. get the app working without AI first
  2. choose openai or openrouter
  3. set the provider, model, and matching key
  4. start the app and confirm configuration errors are gone
  5. move on to the AI feature docs when you want to test the full in-app behavior

For the setup layer, the first useful check is small:

  1. set the provider, model, and matching API key
  2. restart the app
  3. confirm startup does not fail because of missing or malformed AI config

If you want to verify the actual chat flow, credits behavior, wallet state, or in-app usage rules, cover that in the later features/ai-chat documentation rather than here.

For provider setup errors, check these in order:

  1. wrong provider key
  2. wrong model name
  3. provider and model do not match each other
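The checks above can be sketched as one ordered diagnostic. The key-prefix rules below are heuristics taken from the example values in this page (`sk-...`, `sk-or-...`), and the helper itself is hypothetical, not part of the template:

```typescript
// Sketch of the troubleshooting order as code (hypothetical helper; the
// key-prefix rules are heuristics, not official key formats).
function diagnoseAiConfig(provider: string, model: string, key: string): string {
  // 1. wrong provider key
  if (provider === 'openai' && !key.startsWith('sk-')) return 'wrong provider key'
  if (provider === 'openrouter' && !key.startsWith('sk-or-')) return 'wrong provider key'
  // 2. wrong model name (empty or still a placeholder)
  if (!model || model.includes('...')) return 'wrong model name'
  // 3. provider and model do not match (OpenRouter ids carry a vendor prefix)
  if (provider === 'openrouter' && !model.includes('/')) return 'provider/model mismatch'
  if (provider === 'openai' && model.includes('/')) return 'provider/model mismatch'
  return 'ok'
}

console.log(diagnoseAiConfig('openrouter', 'gpt-4o-mini', 'sk-or-abc')) // 'provider/model mismatch'
```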