
Storage Setup

NuxtBase supports three storage providers:

  • local
  • s3
  • r2

For most buyers, the correct first choice is local. It removes infrastructure friction while you validate the product, auth, billing, and admin flows.

The default .env.example already uses:

NUXT_STORAGE_PROVIDER=local

When storage is local, uploaded files are written under:

public/uploads/

and public URLs are returned like:

/uploads/{key}

This is enough for local development and single-instance staging.
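The local provider's contract can be sketched in a few lines. This is a minimal illustration, not the template's actual implementation; `saveLocal` is a hypothetical helper name, and the key is assumed to be a flat file name:

```typescript
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical helper illustrating the local provider's contract:
// write the bytes under <publicRoot>/uploads/<key>, then return the
// public URL path the app hands back to the browser.
function saveLocal(publicRoot: string, key: string, data: Buffer): string {
  const dir = join(publicRoot, "uploads");
  mkdirSync(dir, { recursive: true });
  writeFileSync(join(dir, key), data);
  return `/uploads/${key}`;
}
```

With `publicRoot` set to `public`, calling `saveLocal("public", "avatar.png", buf)` writes `public/uploads/avatar.png` and returns `/uploads/avatar.png`.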

If you deploy NuxtBase to Vercel and keep NUXT_STORAGE_PROVIDER=local, you should assume uploaded files can disappear.

Why:

  • local uploads are written to the function filesystem
  • Vercel runtime storage is not durable application storage
  • new deployments, new function instances, or scaling events do not guarantee that previously written local files are still available

So the practical rule is:

  • local is fine for local development
  • local can be acceptable for short-lived internal testing if you understand the risk
  • local is not the right choice for Vercel production if uploaded files must persist

If you want persistent uploads on Vercel, move to object storage such as s3 or r2.

Storage is not just a generic service layer. In the template, uploads also create fileAsset records in the database so the app can track:

  • provider
  • provider key
  • file name
  • MIME type
  • file size
  • public URL
  • upload status

That means storage setup affects both file delivery and database-backed product flows.
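A record tracking those attributes might look like the following. The field names here are illustrative assumptions, not the template's actual schema; they simply map one-to-one onto the list above:

```typescript
// Hypothetical shape of a fileAsset record; field names are
// illustrative and mirror the tracked attributes listed above.
interface FileAsset {
  provider: "local" | "s3" | "r2";
  providerKey: string; // object key in the bucket or uploads directory
  fileName: string;
  mimeType: string;
  sizeBytes: number;
  publicUrl: string;
  status: "pending" | "uploaded" | "failed";
}

const asset: FileAsset = {
  provider: "s3",
  providerKey: "uploads/avatar.png",
  fileName: "avatar.png",
  mimeType: "image/png",
  sizeBytes: 12345,
  publicUrl: "https://cdn.example.com/uploads/avatar.png",
  status: "uploaded",
};
```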

Use s3 or r2 when you need one or more of these:

  • multiple app instances
  • persistent shared uploads across deploys
  • CDN-backed public asset delivery
  • object storage outside the app server filesystem

Set the provider first:

NUXT_STORAGE_PROVIDER=s3

or:

NUXT_STORAGE_PROVIDER=r2

Then add all required variables:

S3_BUCKET=...
S3_REGION=...
S3_ENDPOINT=...
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
S3_PUBLIC_URL=https://cdn.example.com

Practical setup steps:

  1. Create an S3 bucket in the AWS region you want to use.
  2. Create an IAM user or IAM role with the minimum S3 permissions your app needs.
  3. Generate programmatic credentials for that IAM identity.
  4. Decide what public base URL your app should return for uploaded files.
  5. Copy the final values into .env.

Typical mapping:

NUXT_STORAGE_PROVIDER=s3
S3_BUCKET=your-bucket-name
S3_REGION=us-east-1
S3_ENDPOINT=https://s3.us-east-1.amazonaws.com
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
S3_PUBLIC_URL=https://your-public-file-domain.example.com

For S3_PUBLIC_URL, use the URL your browser should actually load:

  • a CloudFront domain
  • a custom CDN domain
  • or a direct public bucket URL if that matches your setup
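Deriving the browser-facing URL from `S3_PUBLIC_URL` plus the object key is a simple join. A sketch, assuming a hypothetical `publicFileUrl` helper that tolerates stray slashes on either side:

```typescript
// Hypothetical helper: join S3_PUBLIC_URL and an object key into the
// URL returned to the browser, normalizing slashes at the boundary.
function publicFileUrl(publicBase: string, key: string): string {
  return `${publicBase.replace(/\/+$/, "")}/${key.replace(/^\/+/, "")}`;
}
```

For example, `publicFileUrl("https://cdn.example.com/", "avatars/1.png")` yields `https://cdn.example.com/avatars/1.png`.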

NuxtBase supports r2 as an S3-compatible provider backed by Cloudflare R2. With this provider, uploaded files are stored in an R2 bucket.

Practical setup steps:

  1. Create an R2 bucket in Cloudflare.
  2. Create an R2 API token and copy the generated Access Key ID and Secret Access Key.
  3. Copy the bucket’s S3 API endpoint.
  4. Decide how uploaded files should be publicly served.
  5. Copy the final values into .env.

Typical mapping:

NUXT_STORAGE_PROVIDER=r2
S3_BUCKET=your-bucket-name
S3_REGION=auto
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
S3_PUBLIC_URL=https://assets.example.com

For production, Cloudflare recommends using a custom domain for public bucket access instead of the r2.dev development URL.

That means:

  • S3_ENDPOINT should be the bucket’s S3 API endpoint used by the app to talk to R2
  • S3_PUBLIC_URL should be the public custom domain that browsers use to load files

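The two URLs play different roles, which can be summarised as a configuration split. This is an illustrative sketch with hypothetical names, reusing the example values from the mapping above:

```typescript
// Hypothetical illustration of the two-URL split for R2.
const storageConfig = {
  // Server-side: the S3 API endpoint the upload client talks to.
  endpoint: "https://<account-id>.r2.cloudflarestorage.com",
  // Browser-side: the base used to build URLs stored on fileAsset records.
  publicBase: "https://assets.example.com",
};
```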
The environment validation enforces all six variables whenever the provider is s3 or r2.

That is deliberate:

  • the storage client needs bucket, endpoint, and credentials
  • public file URLs need S3_PUBLIC_URL
  • partial configuration is more dangerous than no configuration

If one of these is missing, startup fails early.
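That fail-fast behavior can be sketched as follows. This is an illustrative check, not the template's actual validation code; the variable names match the mappings shown earlier on this page:

```typescript
// Sketch of fail-fast storage env validation, assuming the variables
// live in process.env under the names used earlier on this page.
const REQUIRED_S3_VARS = [
  "S3_BUCKET",
  "S3_REGION",
  "S3_ENDPOINT",
  "S3_ACCESS_KEY_ID",
  "S3_SECRET_ACCESS_KEY",
  "S3_PUBLIC_URL",
] as const;

function validateStorageEnv(env: Record<string, string | undefined>): void {
  const provider = env.NUXT_STORAGE_PROVIDER ?? "local";
  if (provider === "local") return; // local needs no S3_* variables
  const missing = REQUIRED_S3_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(
      `Storage provider "${provider}" requires: ${missing.join(", ")}`,
    );
  }
}
```

Calling this during startup surfaces a partial configuration immediately instead of at the first upload.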

With local:

  • files are written to disk
  • public URLs come from /uploads/...
  • easiest for development

With s3 or r2:

  • files are uploaded through the AWS SDK compatible client
  • public URLs are derived from S3_PUBLIC_URL
  • deletes attempt to remove the object from the bucket
  • presigned upload URLs can be generated

Recommended rollout:

  1. stay on local until the core app is working
  2. test uploads from the UI
  3. confirm files appear under public/uploads/
  4. only then switch to s3 or r2
  5. verify that uploaded files resolve through your configured public URL

Using a Private Bucket URL as S3_PUBLIC_URL

If S3_PUBLIC_URL points at a private bucket endpoint, browsers will receive access-denied responses when loading files. S3_PUBLIC_URL should be the URL your browser can actually load, usually a CDN domain or a public bucket base URL.

Forgetting That Local Uploads Live on Disk

If you redeploy or replace the filesystem, local uploads may disappear. That is expected behavior for local storage, not a template bug.

If NUXT_STORAGE_PROVIDER=local, the app will not use S3_PUBLIC_URL.

If NUXT_STORAGE_PROVIDER=s3 or r2, the app expects object-storage style URLs, not /uploads/....
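The provider-dependent URL shapes described above can be sketched in one function. The names here are illustrative, not the template's API:

```typescript
// Hypothetical illustration of the URL shape each provider produces.
function expectedUrlShape(
  provider: "local" | "s3" | "r2",
  key: string,
  publicBase?: string,
): string {
  if (provider === "local") return `/uploads/${key}`; // served from disk
  if (!publicBase) throw new Error("S3_PUBLIC_URL is required for s3/r2");
  return `${publicBase}/${key}`; // served from object storage via the public base
}
```

So `expectedUrlShape("local", "a.png")` gives `/uploads/a.png`, while `expectedUrlShape("s3", "a.png", "https://cdn.example.com")` gives `https://cdn.example.com/a.png`.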