AI-powered feedback system for educational institutions using Django and Large Language Models.
Try COFFEE instantly with a single command! Download the docker-compose.demo.yml file and run:
```shell
docker compose -f docker-compose.demo.yml up
```

Or use this one-liner (macOS/Linux/Windows):

```shell
curl -O https://raw.githubusercontent.com/hansesm/coffee/main/docker-compose.demo.yml && docker compose -f docker-compose.demo.yml up
```

Windows (PowerShell):

```powershell
Invoke-WebRequest -Uri https://raw.githubusercontent.com/hansesm/coffee/main/docker-compose.demo.yml -OutFile docker-compose.demo.yml; docker compose -f docker-compose.demo.yml up
```

This spins up PostgreSQL, Ollama (with the phi4 model), and the app itself using the pre-built image `ghcr.io/hansesm/coffee:latest`. On startup, migrations run automatically, default users are created, and demo data is imported.
Note: The phi4 model download can take a while. Ollama may run slowly or time out when running in Docker. You can adjust the request_timeout setting in the Admin Panel to prevent timeouts.
Access the app at http://localhost:8000.
To tear everything down:

```shell
docker compose -f docker-compose.demo.yml down -v
```

The demo environment is restart-safe. If you stop and restart the containers, existing data will be preserved and the startup commands will detect existing users and demo data automatically.
1. Prerequisites

   Install uv.

2. Clone and set up

   ```shell
   git clone <repository-url>
   cd COFFEE
   uv venv --python 3.13
   uv sync
   ```

3. Configure environment

   ```shell
   cp .env.example .env
   # Edit .env with your settings
   ```

4. Set up the database

   Without the env variable `DATABASE_URL`, Django creates a SQLite database:

   ```shell
   uv run task migrate
   uv run task create-groups
   ```

   If you want to use a PostgreSQL database, you can spin it up with Docker Compose:

   ```shell
   docker compose up -d
   uv run task migrate
   uv run task create-groups
   ```

5. Run

   ```shell
   uv run task server
   ```
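The database step accepts either a single `DATABASE_URL` or the discrete `DB_*` variables from `.env` (listed in the configuration section below). As a purely hypothetical illustration of how such parts map onto one connection URL — COFFEE's actual settings logic may differ:

```shell
# Hypothetical illustration only: how the discrete DB_* variables from .env
# could combine into one Postgres connection URL. The values here are
# made-up placeholders, not defaults shipped with COFFEE.
DB_PROTOCOL=postgres DB_USERNAME=coffee DB_PASSWORD=secret
DB_HOST=localhost DB_PORT=5432 DB_NAME=coffee
DB_URL="${DB_PROTOCOL}://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DB_URL"
```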
-
1. Install Ollama

   Follow the official instructions at ollama.com/download for your platform.

2. Start the Ollama service

   After installation the daemon normally starts automatically. You can verify with:

   ```shell
   ollama serve
   ```

   (Press `Ctrl+C` to stop if it is already running in the background.)

3. Download a model

   ```shell
   ollama pull phi4
   ```

4. Test the model locally

   ```shell
   ollama run phi4
   ```

   The default API endpoint is available at `http://localhost:11434`.

5. Register Ollama in Django Admin

   - Sign in at `<BASE_URL>/admin`.
   - Go to LLM Providers → Add, pick Ollama, set the host (e.g. `http://localhost:11434`), and save.
   - Go to LLM Models → Add, select the newly created Ollama provider, enter the model name (e.g. `phi4`), choose a display name, and save.
   - The provider and model can now be assigned to tasks and criteria inside the app.
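Before registering the provider, you can sanity-check Ollama's REST API directly. The snippet below only builds the JSON request body for a non-streaming call to `/api/generate` (the model name matches the `ollama pull phi4` step above); actually sending it is left as the commented `curl` line, since that requires the daemon to be running:

```shell
# JSON body for a non-streaming completion via Ollama's /api/generate route.
BODY='{"model": "phi4", "prompt": "Say hello", "stream": false}'
echo "$BODY"
# With the daemon running, send it like this:
# curl http://localhost:11434/api/generate -d "$BODY"
```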
To load the demo data:

```shell
uv run task import-demo-data
```

All configuration is environment-based. Copy `.env.example` to `.env` and customize:
```shell
# Django (REQUIRED)
SECRET_KEY=your-secret-key-here
DEBUG=True
DB_PASSWORD=<YOUR_DB_PASSWORD>
DB_USERNAME=<user>
DB_HOST=<host>
DB_PORT=<port>
DB_NAME=<db>
DB_PROTOCOL=<postgres|sqlite>
```

You can add your own LLM Providers and LLM Models in the Django Admin Panel (`<BASE_URL>/admin`).
Currently supported LLM Providers:
- Ollama – see `ollama_api.py`
- Azure – see `azure_ai_api.py`
- Azure OpenAI – see `azure_openai_api.py`
Contributions for additional providers such as LLM Lite, AWS Bedrock, Hugging Face, and others are very welcome! 🚀
Add providers and models in the Django admin under LLM Providers / LLM Models. Each backend needs different connection details:
- Ollama – Set `Endpoint` to your Ollama host (e.g. `http://ollama.local:11434` or `http://localhost:11434`). Leave the API key empty unless you enabled token auth; optional TLS settings live in the JSON `config`.
- Azure AI – Use the Inference endpoint that already includes the deployment segment, for example `https://<azure-resource>/openai/deployments/<deployment>`. Add the matching API key.
- Azure OpenAI – Point `Endpoint` to the service base URL, like `https://<azure-resource>.cognitiveservices.azure.com/`. Add the matching API key.
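As a quick side-by-side of the three endpoint shapes described above — the resource and deployment names here are placeholders, not real services:

```shell
# Placeholder names; substitute your own Azure resource and deployment.
resource="my-azure-resource"
deployment="my-deployment"
OLLAMA_EP="http://localhost:11434"
AZURE_AI_EP="https://${resource}/openai/deployments/${deployment}"
AZURE_OPENAI_EP="https://${resource}.cognitiveservices.azure.com/"
printf '%s\n' "$OLLAMA_EP" "$AZURE_AI_EP" "$AZURE_OPENAI_EP"
```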
After running `python manage.py create_users_and_groups`, use these credentials:

- Admin: username `admin`, password `reverence-referee-lunchbox`
- Manager: username `manager`, password `expediter-saline-untapped`
- Admin: Create courses, tasks, and criteria at `/admin/`
- Students: Submit work and receive AI feedback
- Analysis: View feedback analytics and export data
```shell
docker build -t coffee .
docker run -p 8000:8000 --env-file .env coffee  # On Windows add '--network host'
```

For Red Hat Enterprise Linux systems using Podman:
```shell
# Install podman if not already installed
sudo dnf install podman
# To see if you already have podman
podman --version

# Install git if not already installed
sudo dnf install git
# To see if you already have git
git --version

# Install python3 if not already installed
sudo dnf install python3
# To see if you already have python3
python3 --version

# Optional: Install nano if not already installed
sudo dnf install nano
# To see if you already have nano
nano --version

# Clone the COFFEE repository
git clone <REPOSITORY-URL>
# Enter the COFFEE directory
cd COFFEE

# Copy and configure the environment
cp .env.example .env
# Edit .env with your actual configuration values (e.g. with nano)
nano .env

# Make the startup script executable (only required once)
chmod +x run-podman.sh
# Start COFFEE (runs without Ollama, perfect for servers)
./run-podman.sh

# Verify that all containers are running
podman pod ps
podman ps -a

# Test if the application is reachable
curl -I http://localhost:8000
# Now open the browser:
# http://<Your IP or localhost>:8000
```

To release a new version of COFFEE, follow these steps. The GitHub Actions CI/CD pipeline will automatically build the Docker image and push it to the GitHub Container Registry (ghcr.io) when a new version tag is pushed.
1. Update the version number in `pyproject.toml`:

   ```toml
   [project]
   name = "coffee"
   version = "X.Y.Z"  # Increment this to your new version
   ```

2. Commit your changes:

   ```shell
   git add pyproject.toml uv.lock  # Include any other files you modified
   git commit -m "chore: bump version to X.Y.Z"
   git push origin main
   ```

3. Create and push a Git tag. The tag must start with `v` (e.g., `vX.Y.Z`) to trigger the Docker build workflow.

   ```shell
   git tag vX.Y.Z
   git push origin vX.Y.Z
   ```
Once the tag is pushed, you can monitor the GitHub Actions tab to watch the Docker image being built and pushed to the registry.
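To keep the tag consistent with `pyproject.toml`, the version can be read straight from the file instead of typed twice. A small sketch — the `printf` line writes a stand-in `pyproject.toml` so the snippet is self-contained; inside the repository the real file is already there:

```shell
# Stand-in pyproject.toml so this snippet runs anywhere; omit in the repo.
printf '[project]\nname = "coffee"\nversion = "1.2.3"\n' > pyproject.toml
# Extract the version and prefix it with "v", the form the CI workflow expects.
VERSION=$(sed -n 's/^version = "\(.*\)"$/\1/p' pyproject.toml)
TAG="v${VERSION}"
echo "$TAG"
# Then: git tag "$TAG" && git push origin "$TAG"
```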
This project was developed with assistance from Claude Code, Anthropic's AI coding assistant.
See LICENSE.md for details.