Setup Options
Choose the installation method that best suits your needs:

Option A: Using the pyspur Python Package
This is the quickest way to get started. Python 3.12 or higher is required.
1. Install PySpur
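PySpur is distributed as a Python package; assuming the PyPI package name matches the project, installation is:

```shell
pip install pyspur
```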
2. Initialize a new project
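A sketch of initializing a project (the directory name `my-project` is a placeholder):

```shell
pyspur init my-project
cd my-project
```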
This creates a new project directory containing a `.env` file.

3. Start the server
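The server is started with the `pyspur serve` command referenced later in this guide:

```shell
pyspur serve
```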
The server starts at http://localhost:6080 using a SQLite database. We recommend configuring a Postgres instance URL in the `.env` file for a more stable experience.

4. Customize Your Deployment
You can customize your PySpur deployment in two ways:

a. Through the app (Recommended):
- Navigate to the API Keys tab in the app
- Add your API keys for various providers (OpenAI, Anthropic, etc.)
- Changes take effect immediately
b. Through the `.env` file:
- Edit the `.env` file in your project directory. Configuring a Postgres database in `.env` is recommended for more reliability.
- Restart the app with `pyspur serve`. Add `--sqlite` if you are not using Postgres.
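For example (the `DATABASE_URL` key name is an assumption; check your generated `.env` for the exact variable):

```shell
# In .env (key name is an assumption; see your generated .env):
# DATABASE_URL=postgresql://user:password@localhost:5432/pyspur

# Restart with Postgres configured:
pyspur serve

# Or, if you are not using Postgres:
pyspur serve --sqlite
```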
Option B: Using Docker (Recommended for Scalable, In-Production Systems)
This is the recommended way for production deployments:

1. Install Docker
First, install Docker by following the official installation guide for your operating system.
2. Create a PySpur Project
Once Docker is installed, create a new PySpur project with the start script from the official PySpur repository. This will:
- Start a new PySpur project in a new directory called pyspur-project
- Set up the necessary configuration files
- Start the PySpur app automatically, backed by a local Postgres Docker instance
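A hedged sketch of what the start script does (the compose layout is an assumption; see the PySpur repository README for the exact command and script URL):

```shell
mkdir pyspur-project && cd pyspur-project
# Fetch the compose file and .env template from the PySpur repository
# (exact URLs omitted; see the repo README), then bring the stack up:
docker compose up -d
```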
3. Access PySpur
Go to http://localhost:6080 in your browser.

4. Customize Your Deployment
You can customize your PySpur deployment in two ways:

a. Through the app (Recommended):
- Navigate to the API Keys tab in the app
- Add your API keys for various providers (OpenAI, Anthropic, etc.)
- Changes take effect immediately
b. Through the `.env` file:
- Edit the `.env` file in your project directory
- Restart the services so the new configuration is picked up
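Assuming the generated project uses Docker Compose, a restart might look like:

```shell
# Rebuild and restart so changes to .env take effect
docker compose down
docker compose up -d --build
```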
Using Local Models with Ollama
Configure Ollama
- Start the Ollama service locally
- Update your `.env` file with the Ollama endpoint URL
- Download models using `ollama pull <model-name>`
- Select Ollama models from the sidebar for LLM nodes
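The steps above might look like the following (the `OLLAMA_BASE_URL` key is an assumption; check the PySpur docs for the exact variable name, and `llama3` is only an example model):

```shell
# 1. Start Ollama, bound to all interfaces so Docker containers can reach it
OLLAMA_HOST="0.0.0.0" ollama serve

# 2. In .env (key name is an assumption; host.docker.internal reaches the
#    host from inside a Docker container):
# OLLAMA_BASE_URL=http://host.docker.internal:11434

# 3. Pull a model to make it available to LLM nodes
ollama pull llama3
```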
Next Steps
After installation, you can:

- 🪄 Create New Workflow: Click “New Spur” to create a workflow from scratch
- 📋 Use Templates: Start with one of our pre-built templates
- 💾 Import Spur JSONs: Import spurs shared by other users
- 🌐 Deploy as API: Single click using the “Deploy” button in the top bar