Sim is an open-source platform for building AI agents and orchestrating agentic workflows. The platform supports over 1,000 integrations and multiple LLM providers, enabling visual workflow design on a canvas-based interface README.md:1-14.
The platform supports multiple deployment methods, each with specific requirements:
| Deployment Method | Core Requirements | Optional Dependencies |
|---|---|---|
| Cloud-Hosted | Modern web browser | None |
| NPM Package | Docker (installed and running) | None |
| Docker Compose | Docker, Docker Compose | Ollama (for local models) |
| Dev Containers | VS Code, Remote-Containers extension | None |
| Manual Setup | Bun, Node.js v20+, PostgreSQL 12+ with pgvector | Redis, Ollama |
Memory Requirements: The production Docker deployment allocates 8GB RAM for the main application and 1GB for the realtime socket server docker-compose.prod.yml:7-10 docker-compose.prod.yml:45-48. For local AI models with Ollama, systems should have 12GB+ RAM available README.md:17.
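As a quick pre-flight check, total system memory can be compared against these figures. The snippet below is a sketch that reads the total from `free` on Linux, falling back to `sysctl` on macOS:

```shell
# Rough RAM pre-flight check (sketch; thresholds from the figures above).
if command -v free >/dev/null 2>&1; then
  total_gb=$(free -g | awk '/^Mem:/ {print $2}')        # Linux
else
  bytes=$(sysctl -n hw.memsize 2>/dev/null || echo 0)   # macOS fallback
  total_gb=$((bytes / 1024 / 1024 / 1024))
fi
echo "Total RAM: ${total_gb} GB (12+ recommended when running Ollama locally)"
```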
The following ports are used by default:
| Service | Port | Protocol |
|---|---|---|
| Sim Studio Web | 3000 | HTTP |
| Realtime Socket Server | 3002 | HTTP/WebSocket |
| PostgreSQL | 5432 | TCP |
| Ollama (optional) | 11434 | HTTP |
Port 3000 is exposed for the main application docker-compose.prod.yml:5-6, and port 3002 for the realtime socket server docker-compose.prod.yml:43-44.
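Before starting the stack, it can help to confirm that none of these ports already has a listener. A minimal sketch using bash's built-in `/dev/tcp` (no `lsof` required):

```shell
# Report which of Sim's default ports already have something listening.
busy=""
for port in 3000 3002 5432 11434; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    busy="$busy $port"
  fi
done
echo "Ports already in use:${busy:- none}"
```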
The fastest way to use Sim is through the cloud-hosted platform at sim.ai. This requires no local installation or configuration README.md:41-45.
No verification steps are required for the cloud-hosted version—simply navigate to the URL and create an account.
The NPM package provides the quickest local setup option, wrapping Docker deployment in a single command.
Execute the following command to start Sim locally:
```bash
npx simstudio
```
The application will be available at http://localhost:3000 after startup README.md:47-52.
Docker must be installed and running on the machine before executing the NPM command README.md:54-55. The NPM package internally uses Docker images configured for production deployment docker-compose.prod.yml:1-10.
| Flag | Description |
|---|---|
| `-p, --port <port>` | Port to run Sim on (default: 3000) |
| `--no-pull` | Skip pulling the latest Docker images |
After running the command, verify the application is accessible by opening http://localhost:3000 in a web browser. The container health check verifies the service is running via `wget --spider --quiet http://127.0.0.1:3000` docker-compose.prod.yml:33-34.
Docker Compose deployment provides full control over the production environment with all services containerized.
Clone the repository and start services:
```bash
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d
```
Access the application at http://localhost:3000 README.md:64-71.
The production deployment includes four services:
| Service | Image | Function |
|---|---|---|
| simstudio | ghcr.io/simstudioai/simstudio:latest | Main web application |
| realtime | ghcr.io/simstudioai/realtime:latest | WebSocket server for real-time updates |
| migrations | ghcr.io/simstudioai/migrations:latest | Database migration runner |
| db | pgvector/pgvector:pg17 | PostgreSQL with vector extension |
For offline AI capabilities, deploy with Ollama integration:
```bash
# Start with GPU support (automatically downloads the gemma3:4b model)
docker compose -f docker-compose.ollama.yml --profile setup up -d

# For CPU-only systems:
docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d
```
After the model downloads, access the application at http://localhost:3000 README.md:73-85.
The Ollama configuration sets OLLAMA_URL=http://ollama:11434 to connect to the Ollama container docker-compose.ollama.yml:23.
To add additional models:
```bash
docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b
```
If Ollama is already running on the host machine:
```bash
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d
```
On Linux, use the host's IP address or configure extra_hosts in the compose file README.md:90-98.
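A hypothetical override sketch for the `extra_hosts` approach (the `host-gateway` value requires Docker 20.10+; the file name and service key are assumptions based on the production compose file):

```yaml
# docker-compose.override.yml (hypothetical) — make host.docker.internal
# resolve to the host on Linux.
services:
  simstudio:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```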
Sim supports vLLM for self-hosted models. Set the following environment variables:
- `VLLM_BASE_URL`: Base URL for the vLLM server
- `VLLM_API_KEY`: API key (optional)

Check container status:
```bash
docker compose -f docker-compose.prod.yml ps
```
All containers should show "healthy" status. The health checks verify:
- Main application: `http://127.0.0.1:3000` docker-compose.prod.yml:33-38
- Realtime socket server: `http://127.0.0.1:3002/health` docker-compose.prod.yml:60-65
- PostgreSQL: `pg_isready -U postgres` docker-compose.prod.yml:89-93

Dev Containers provide an isolated development environment with all dependencies pre-configured.
Inside the dev container, execute:
```bash
bun run dev:full
```
Alternatively, use the sim-start alias. This command starts both the main Next.js application and the realtime socket server README.md:108-109.
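The alias is presumably shorthand for the same command; a sketch of the equivalent shell definition (the actual definition baked into the dev container image may differ):

```shell
# Hypothetical equivalent of the dev container's sim-start alias.
alias sim-start='bun run dev:full'
alias sim-start   # print the definition to confirm it was registered
```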
The realtime server configuration matches the production setup, running on port 3002 docker-compose.prod.yml:40-65.
Manual setup provides maximum control over the development environment but requires more configuration.
```bash
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
```
Using Docker:
```bash
docker run --name simstudio-db \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=simstudio \
  -p 5432:5432 -d pgvector/pgvector:pg17
```
Or install manually via the pgvector guide README.md:123-129.
The production Docker setup uses pgvector/pgvector:pg17 as the database image docker-compose.prod.yml:78-79.
```bash
cp apps/sim/.env.example apps/sim/.env
cp packages/db/.env.example packages/db/.env
```
Edit both .env files to set DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio" README.md:131-137.
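To avoid typos, the connection string can be assembled from its parts first; a sketch (the password is a placeholder, matching the example above):

```shell
# Assemble DATABASE_URL from its components before pasting into both .env files.
DB_USER=postgres
DB_PASSWORD=your_password   # placeholder; use the password set for the container
DB_HOST=localhost
DB_PORT=5432
DB_NAME=simstudio
DATABASE_URL="postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "DATABASE_URL=\"$DATABASE_URL\""
```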
| Variable | Required | Description |
|---|---|---|
| `DATABASE_URL` | Yes | PostgreSQL connection string with pgvector |
| `BETTER_AUTH_SECRET` | Yes | Auth secret (generate with `openssl rand -hex 32`) |
| `BETTER_AUTH_URL` | Yes | App URL (e.g., `http://localhost:3000`) |
| `NEXT_PUBLIC_APP_URL` | Yes | Public app URL (same as above) |
| `ENCRYPTION_KEY` | Yes | Encrypts environment variables |
| `INTERNAL_API_SECRET` | Yes | Secures internal API routes |
| `API_ENCRYPTION_KEY` | Yes | Encrypts API keys |
| `COPILOT_API_KEY` | No | API key from sim.ai for Copilot features |
These variables are also referenced in the Docker Compose production configuration docker-compose.prod.yml:11-25.
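The secret-valued variables can be generated in one pass. This sketch applies the `openssl rand -hex 32` recipe from the table above to every key variable; using the same format for all of them is an assumption, not documented behavior:

```shell
# Generate secrets with the recipe from the table above (32 random bytes, hex-encoded).
# Applying the same recipe to every key variable is an assumption.
BETTER_AUTH_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
INTERNAL_API_SECRET=$(openssl rand -hex 32)
API_ENCRYPTION_KEY=$(openssl rand -hex 32)
printf 'BETTER_AUTH_SECRET=%s\n' "$BETTER_AUTH_SECRET"
```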
```bash
cd packages/db && bunx drizzle-kit migrate --config=./drizzle.config.ts
```
```bash
bun run dev:full
```
This starts both the Next.js app and realtime socket server. To run separately:
- Next.js app only: `bun run dev`
- Realtime socket server: `cd apps/sim && bun run dev:sockets` README.md:145-151

The following diagram illustrates the decision flow for selecting a deployment method:
*(Diagram: deployment method decision flow)*
Each service in the Docker Compose deployment includes health checks:
- Main application (simstudio): `http://127.0.0.1:3000`
- Realtime socket server: `http://127.0.0.1:3002/health`
- PostgreSQL database: `pg_isready -U postgres`

Check all container statuses:
```bash
docker compose -f docker-compose.prod.yml ps
```
Expected output should show all services as "healthy" or "running".
View logs for troubleshooting:
```bash
docker compose -f docker-compose.prod.yml logs -f simstudio
docker compose -f docker-compose.prod.yml logs -f realtime
```
Symptom: Container fails to start with "port is already allocated" error.
Cause: Ports 3000, 3002, or 5432 are already in use by other services README.md:17.
Solution:
- Identify the process using the port: `lsof -i :3000` (macOS/Linux) or `netstat -ano | findstr :3000` (Windows)
- Change the port mapping in docker-compose.prod.yml
- Alternatively, use the `-p` flag with the NPM package: `npx simstudio -p 3001`

Symptom: "Cannot connect to the Docker daemon" error.
Cause: Docker is not installed or the Docker daemon is not running README.md:17.
Solution:
- Verify Docker is installed: `docker --version`
- Verify the Docker daemon is running: `docker info`

Symptom: Containers are killed or become unresponsive.
Cause: System has insufficient RAM. The main application requires 8GB and realtime server requires 1GB docker-compose.prod.yml:7-10 docker-compose.prod.yml:45-48. With Ollama, 12GB+ total RAM is recommended README.md:17.
Solution:
- Monitor container memory usage with `docker stats`

Symptom: Application fails with "connection refused" or "database does not exist" errors.
Cause: Database service is not healthy or DATABASE_URL is misconfigured.
Solution:
- Check the database container status: `docker compose -f docker-compose.prod.yml ps db`
- The db service uses `pg_isready -U postgres` for health checks docker-compose.prod.yml:89-90
- Verify the `DATABASE_URL` format matches: `postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@db:5432/${POSTGRES_DB:-simstudio}` docker-compose.prod.yml:13

Symptom: Migrations container exits with an error and the application fails to start.
Cause: Database schema is not initialized or migration files are corrupted.
Solution:
- Inspect the migration logs: `docker compose -f docker-compose.prod.yml logs migrations`
- The migrations container runs `bun run db:migrate` docker-compose.prod.yml:75
- For manual setups, re-run migrations: `cd packages/db && bunx drizzle-kit migrate --config=./drizzle.config.ts` README.md:141-142

Symptom: Ollama container starts but AI features don't work.
Cause: Model download is incomplete or failed.
Solution:
- Ensure the setup profile was used (`--profile setup`) README.md:78-79
- Check download progress: `docker compose -f docker-compose.ollama.yml logs ollama`
- Pull the model manually: `docker compose -f docker-compose.ollama.yml exec ollama ollama pull gemma3:4b`

After successfully deploying Sim, consider the following:
- Set `COPILOT_API_KEY` for AI-assisted workflow building README.md:153-158

For detailed usage instructions, refer to the official documentation README.md:13.