Quickstart
Start Nitejar in one command, then inspect receipts and logs.
Nitejar is a self-hosted AI agent runtime. It receives messages from channels (Telegram, GitHub, Slack, Discord, generic webhooks), routes them to configured agents, runs inference with tool access inside sandboxed environments, and delivers responses back to the source.
The app is organized around a few clear surfaces:
- Command Center for urgent attention
- Company for structure, staffing, and reusable role defaults
- Work for goals, tickets, and heartbeats
- Activity for receipts and run traces
- Costs for the spend ledger
One command
```shell
npx --yes @nitejar/cli@latest up
```

Use this command when @nitejar/cli is published to npm.
This does three things:
- Downloads the latest runtime bundle for your platform.
- Runs database migrations (and legacy plugin-instance cutover when needed).
- Starts the Nitejar daemon.
By default, local state is stored at ~/.nitejar:
- `~/.nitejar/data/nitejar.db` -- SQLite database
- `~/.nitejar/config/env` -- persisted runtime config
- `~/.nitejar/logs/server.log` -- server logs
- `~/.nitejar/receipts/migrations/*.json` -- migration receipts
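The layout above is easy to script against. A minimal sketch for pulling up the newest migration receipt; the `nitejar_latest_receipt` helper and the `NITEJAR_HOME` override are illustrative assumptions, while the paths come from the list above:

```shell
# Print the path of the newest migration receipt, if any.
# NITEJAR_HOME is a hypothetical override for non-default --data-dir setups.
nitejar_latest_receipt() {
  home="${NITEJAR_HOME:-$HOME/.nitejar}"
  # Newest first; empty output when no receipts exist yet.
  ls -t "$home"/receipts/migrations/*.json 2>/dev/null | head -n 1
}

nitejar_latest_receipt
```

Pipe the result through `cat` (or `jq`, if installed) to read the receipt itself.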
Open http://localhost:3000. You're running.
If port 3000 is already in use, run:
```shell
npx --yes @nitejar/cli@latest up --port auto
```

The CLI prints the selected URL and log file path.
Verify and operate
```shell
npx --yes @nitejar/cli@latest status
npx --yes @nitejar/cli@latest logs --follow
npx --yes @nitejar/cli@latest down
```

The first command shows runtime status, active version, database path, and last migration receipt.
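When scripting around `up` (for example in CI), it helps to wait until the daemon answers HTTP before running further commands. A minimal sketch; `wait_for_http` is a hypothetical helper, not part of the CLI:

```shell
# Poll a URL until it responds (curl succeeds) or attempts run out.
wait_for_http() {
  url="$1"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "ready: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Usage after `up` (substitute the URL the CLI printed):
# wait_for_http "http://localhost:3000"
```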
Testing locally before npm publish
If you're testing the CLI from this repo before publish, do not use `npx @nitejar/cli up`.
Local source run (fastest)
```shell
pnpm nitejar:local:up -- --data-dir /tmp/nitejar-local --port auto
```

This command starts a temporary local release server, runs `up`, then shuts the temporary server down.
By default it uses a fixture runtime for installer smoke testing.
To test with the real built web runtime:
```shell
pnpm nitejar:local:up -- --full-runtime --data-dir /tmp/nitejar-local --port auto
```

Package-level test (pack)
```shell
pnpm --filter @nitejar/cli build
cd packages/nitejar-cli
PKG_TGZ="$(npm pack --silent)"
cd ../..
npx --yes --package "./packages/nitejar-cli/${PKG_TGZ}" nitejar up --data-dir /tmp/nitejar-pack --port 3002
```

This validates the same packaged artifact shape users get from npm.
Prerequisites
- Node.js 20 or later
- A model API key configured either in Settings > Gateway or via `OPENROUTER_API_KEY`/`OPENAI_API_KEY`
Optional:
- A Sprites token for sandboxed agent execution
- A Telegram bot token from @BotFather for Telegram integration
- A GitHub App for GitHub integration
- Docker for running Postgres (SQLite is the default and works without Docker)
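The required items above can be checked before running `up`. A preflight sketch; the `nitejar_preflight` helper and its output format are assumptions, while the version floor and environment variable names come from the list above:

```shell
# Check Node >= 20 and a model API key in the environment.
nitejar_preflight() {
  # Falls back to 0 when node is not installed.
  major=$(node -p 'process.versions.node.split(".")[0]' 2>/dev/null || echo 0)
  if [ "${major:-0}" -ge 20 ]; then
    echo "node: ok"
  else
    echo "node: version 20 or later required"
  fi
  if [ -n "${OPENROUTER_API_KEY:-}${OPENAI_API_KEY:-}" ]; then
    echo "model key: set"
  else
    echo "model key: not set (or configure it in Settings > Gateway)"
  fi
}

nitejar_preflight
```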
From source (advanced)
If you're contributing to Nitejar or want direct source control, use the workspace flow:
1. Clone and install
```shell
git clone https://github.com/nitejar/nitejar.git
cd nitejar
pnpm install
```

2. Configure environment
Copy the example environment file and fill in the values you need:
```shell
cp .env.example apps/web/.env
```

Open apps/web/.env and set at minimum:
```shell
# Generate an encryption key for storing secrets
ENCRYPTION_KEY=$(node -e "console.log(require('crypto').randomBytes(32).toString('hex'))")
```

3. Run migrations and start
```shell
pnpm --filter @nitejar/database db:migrate
pnpm dev
```

For the full list of environment variables, see Configuration.
Docker
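One caveat before running the container below: generating `ENCRYPTION_KEY` inline with `openssl rand` produces a fresh key on every `docker run`, and secrets stored under an earlier key generally cannot be decrypted under a new one. A sketch for persisting the key across restarts; `ensure_key` and the key-file path are illustrative, not part of Nitejar:

```shell
# Generate the encryption key once and reuse it on subsequent runs.
ensure_key() {
  key_file="$1"
  if [ ! -f "$key_file" ]; then
    openssl rand -hex 32 > "$key_file"
  fi
  cat "$key_file"
}

# Then pass the persisted key to docker run instead of a fresh one:
# -e ENCRYPTION_KEY="$(ensure_key "$HOME/.nitejar-docker-key")"
```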
```shell
docker run -d \
  --name nitejar \
  -p 3000:3000 \
  -v nitejar-data:/app/data \
  -e ENCRYPTION_KEY="$(openssl rand -hex 32)" \
  ghcr.io/nitejar/nitejar:latest
```

In app
Create your first agent
- Open the dashboard at `/`.
- Go to Agents and click New Agent.
- Give it a name and a short soul description (this shapes the agent's personality).
- Save the agent.
Connect a channel
The simplest way to test is with the built-in Generic Webhook plugin:
- Go to Plugins in the sidebar.
- Find the Generic Webhook plugin and create a new instance.
- Assign your agent to the instance.
- Send a test webhook:
```shell
curl -X POST http://localhost:3000/api/webhooks/plugins/webhook/YOUR_INSTANCE_ID \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello, what can you do?", "sender_id": "test-user"}'
```

- Check Activity to see the agent process your message and produce a response.
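For repeated testing it can help to wrap the webhook call so the HTTP status is visible at a glance. A sketch reusing the endpoint shape above; `send_test_webhook` is a hypothetical helper, and YOUR_INSTANCE_ID still needs substituting:

```shell
# POST a text message to a Generic Webhook instance; print only the HTTP status.
send_test_webhook() {
  base="$1"      # e.g. http://localhost:3000
  instance="$2"  # your webhook instance ID
  text="$3"
  curl -s -o /dev/null -w '%{http_code}\n' \
    -X POST "$base/api/webhooks/plugins/webhook/$instance" \
    -H "Content-Type: application/json" \
    -d "{\"text\": \"$text\", \"sender_id\": \"test-user\"}"
}

# Usage: send_test_webhook http://localhost:3000 YOUR_INSTANCE_ID "Hello!"
```

Note that curl prints `000` when the server is unreachable, which makes a failed daemon easy to spot in scripts.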
Learn the operating surfaces
After your first message lands, walk the app in this order:
- Command Center -- confirm whether the system thinks anything needs attention.
- Company -- see how teams, roles, and ownership are modeled.
- Work -- inspect goals, tickets, and heartbeat-driven execution state.
- Activity -- inspect the exact receipts behind a single incoming event.
- Costs -- confirm spend and budget posture.
Follow the receipts
Every work item has a detail page showing:
- The inbound message and parsed payload
- The agent run timeline (inference calls, tool executions)
- Token counts and cost tracking
- The delivered response and its delivery status
This is the receipt trail. It is the source of truth for what the agent did and why.
Where to verify
Open Activity and click any work item. The timeline shows every step: webhook received, work item created, run dispatched, inference calls made, tools executed, response delivered. Each step has timestamps and cost.
Next steps
- Configuration -- Full environment variable reference, model config, database options.
- Your First Agent -- Step-by-step guided walkthrough with receipts.
- Command Center -- Learn the live attention surface.
- Company -- Learn the structure and staffing surface.
- Work -- Learn the goals, tickets, and heartbeat model.
- How Messages Flow -- Understand the journey from webhook to receipts.
- Agents -- Soul prompts, models, tools, memory, roles, and network policy.
Verify Media Capabilities (In-App, No Telegram Required)
Media tools can be validated entirely from the dashboard and receipts.
- Open Settings > Capabilities (`/settings/capabilities`).
- Configure and enable:
- Image Generation (uses gateway key)
- Speech To Text (uses gateway key)
- Text To Speech (requires provider key, OpenAI at launch)
- Create or reuse an agent.
- Trigger the agent through the Generic Webhook plugin with prompts that call:
  - `generate_image`
  - `transcribe_audio`
  - `synthesize_speech`
- Inspect receipts:
- Work item timeline for tool invocations and outputs
- Costs dashboard for external API spend
- `media_artifacts` rows in SQLite/Postgres for generated artifacts and metadata
This verifies capability wiring and receipts without testing Telegram transport.