
COFFEE - Corrective Formative Feedback

AI-powered feedback system for educational institutions using Django and Large Language Models.

Screenshot: the COFFEE start page.

Quick Demo with Docker Compose

Try COFFEE instantly with a single command! Download the docker-compose.demo.yml file and run:

docker compose -f docker-compose.demo.yml up

Or use this one-liner (macOS/Linux/Windows):

curl -O https://raw.githubusercontent.com/hansesm/coffee/main/docker-compose.demo.yml && docker compose -f docker-compose.demo.yml up

Windows (PowerShell):

Invoke-WebRequest -Uri https://raw.githubusercontent.com/hansesm/coffee/main/docker-compose.demo.yml -OutFile docker-compose.demo.yml; docker compose -f docker-compose.demo.yml up

This spins up PostgreSQL, Ollama (with the phi4 model), and the app itself using the pre-built image ghcr.io/hansesm/coffee:latest. On startup, migrations run automatically, default users are created, and demo data is imported.

Note: The phi4 model download can take a while. Ollama may run slowly or time out when running in Docker. You can adjust the request_timeout setting in the Admin Panel to prevent timeouts.

Access the app at http://localhost:8000.
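To verify that all services came up, check the container status and tail the logs:

docker compose -f docker-compose.demo.yml ps
docker compose -f docker-compose.demo.yml logs -f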

To tear everything down:

docker compose -f docker-compose.demo.yml down -v

The demo environment is restart-safe. If you stop and restart the containers, existing data will be preserved and the startup commands will detect existing users and demo data automatically.

Getting Started

  1. Prerequisites
  • Install uv
  2. Clone and setup

    git clone <repository-url>
    cd COFFEE
    uv venv --python 3.13
    uv sync
  3. Configure environment

    cp .env.example .env
    # Edit .env with your settings
  4. Set up the database. Without the environment variable DATABASE_URL, Django creates a SQLite database (see the example connection string after this list):

    uv run task migrate
    uv run task create-groups

    If you want to use a PostgreSQL database, you can spin it up with Docker Compose:

    docker compose up -d 
    uv run task migrate
    uv run task create-groups
  5. Run

    uv run task server
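If you opt for PostgreSQL, COFFEE picks the database up via DATABASE_URL (see step 4). A minimal sketch for your .env; the credentials and database name below are placeholders, not defaults shipped with the project:

# Hypothetical PostgreSQL connection string; match it to your database
DATABASE_URL=postgres://coffee:coffee@localhost:5432/coffee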

Optional: Local Ollama Setup for Development

  1. Install Ollama

  2. Start the Ollama service

    • After installation the daemon normally starts automatically. If it is not running yet, start it with:
      ollama serve
      (If this reports that the address is already in use, the daemon is already running in the background.)
  3. Download a model

    ollama pull phi4
  4. Test the model locally

    ollama run phi4

    The default API endpoint is available at http://localhost:11434 (see the quick check after this list).

  5. Register Ollama in Django Admin

    • Sign in at <BASE_URL>/admin.
    • Go to LLM Providers → Add, pick Ollama, set the host (e.g. http://localhost:11434), and save.
    • Go to LLM Models → Add, select the newly created Ollama provider, enter the model name (e.g. phi4), choose a display name, and save.
    • The provider and model can now be assigned to tasks and criteria inside the app.
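As a quick check that the endpoint from step 4 responds, Ollama's REST API can list the installed models:

# Should return a JSON document whose models list includes phi4
curl http://localhost:11434/api/tags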

Optional: Populate Database with Demo Data

uv run task import-demo-data

Configuration

All configuration is environment-based. Copy .env.example to .env and customize:

Required Settings

# Django (REQUIRED)
SECRET_KEY=your-secret-key-here
DEBUG=True  # Set to False in production
DB_PASSWORD=<YOUR_DB_PASSWORD>
DB_USERNAME=<user>
DB_HOST=<host>
DB_PORT=<port>
DB_NAME=<db>
DB_PROTOCOL=<postgres|sqlite>
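If you still need a value for SECRET_KEY, Django ships a generator you can run from the project environment:

# Prints a random secret suitable for SECRET_KEY
uv run python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"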

Custom LLM Providers

You can add your own LLM Providers and LLM Models in the Django Admin Panel (<BASE_URL>/admin).

Currently supported LLM providers: Ollama, Azure AI, and Azure OpenAI (see LLM Backends below).

Contributions for additional providers such as LiteLLM, AWS Bedrock, Hugging Face, and others are very welcome! 🚀

LLM Backends

Add providers and models in the Django admin under LLM Providers / LLM Models. Each backend needs different connection details:

  • Ollama – Set Endpoint to your Ollama host (e.g. http://ollama.local:11434 or http://localhost:11434). Leave the API key empty unless you enabled token auth; optional TLS settings live in the JSON config.
  • Azure AI – Use the Inference endpoint that already includes the deployment segment, for example https://<azure-resource>/openai/deployments/<deployment>. Add the matching API key.
  • Azure OpenAI – Point Endpoint to the service base URL like https://<azure-resource>.cognitiveservices.azure.com/. Add the matching API key.
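Before wiring an Azure OpenAI deployment into the admin, it can help to probe the endpoint directly. A curl sketch, not the app's own mechanism: the api-version shown is one published GA version and may differ from what your resource expects, and $AZURE_OPENAI_API_KEY is a placeholder for your key:

curl "https://<azure-resource>.cognitiveservices.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'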

Default Login Credentials

After running python manage.py create_users_and_groups, use these credentials:

  • Admin: username admin, password reverence-referee-lunchbox
  • Manager: username manager, password expediter-saline-untapped

Usage

  1. Admin: Create courses, tasks, and criteria at /admin/
  2. Students: Submit work and receive AI feedback
  3. Analysis: View feedback analytics and export data

Docker Deployment

docker build -t coffee .
docker run -p 8000:8000 --env-file .env coffee  # On Windows, add '--network host'
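For anything longer-lived than a smoke test, the usual docker run flags apply. A sketch; the container name and restart policy here are choices, not requirements:

docker run -d --name coffee --restart unless-stopped -p 8000:8000 --env-file .env coffee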

Podman Deployment (RedHat/RHEL)

For RedHat Enterprise Linux systems using Podman:

# Install podman if not already installed
sudo dnf install podman
# To see if you already have podman
podman --version

# Install git if not already installed
sudo dnf install git
# To see if you already have git
git --version

# Install python3 if not already installed 
sudo dnf install python3
# To see if you already have python3
python3 --version

# Optional: Install nano if not already installed
sudo dnf install nano
# To see if you already have nano
nano --version

# Clone the COFFEE repository
git clone <REPOSITORY-URL>

# Enter the COFFEE directory
cd COFFEE

# Copy and configure the environment
cp .env.example .env

# Edit .env with your actual configuration values (e.g. with nano)
nano .env

# Make the startup script executable (only required once)
chmod +x run-podman.sh

# Start COFFEE (runs without Ollama, perfect for servers)
./run-podman.sh

# Verify that all containers are running
podman pod ps
podman ps -a

# Test if the application is reachable
curl -I http://localhost:8000

# Now open the browser:
# http://<Your IP or localhost>:8000
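To keep the pod alive across reboots, Podman can emit systemd units for it. A sketch assuming run-podman.sh created a pod; <pod-name> is a placeholder, so look up the actual name with podman pod ps first:

# Generates *.service files for the pod and its containers
podman generate systemd --new --files --name <pod-name>
# Install them as user units, then reload systemd and enable as needed
mv *.service ~/.config/systemd/user/
systemctl --user daemon-reload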

Creating a New Release

To release a new version of COFFEE, follow these steps. The GitHub Actions CI/CD pipeline will automatically build the Docker image and push it to the GitHub Container Registry (ghcr.io) upon pushing a new version tag.

  1. Update the version number in pyproject.toml:

    [project]
    name = "coffee"
    version = "X.Y.Z"  # Increment this to your new version
  2. Commit your changes:

    git add pyproject.toml uv.lock  # Include any other files you modified
    git commit -m "chore: bump version to X.Y.Z"
    git push origin main
  3. Create and push a Git tag: The tag must start with v (e.g., vX.Y.Z) to trigger the Docker build workflow.

    git tag vX.Y.Z
    git push origin vX.Y.Z

Once the tag is pushed, you can monitor the GitHub Actions tab to watch the Docker image being built and pushed to the registry.
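If you use the GitHub CLI, the same run can also be followed from a terminal:

# Shows recent workflow runs, including the tag-triggered image build
gh run list --limit 5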

Credits

This project was developed with assistance from Claude Code, Anthropic's AI coding assistant.

License

See LICENSE.md for details.

About

COFFEE provides students with feedback on their answers to free-text questions. The feedback is based on criteria specified by teachers, so COFFEE knows what matters in the exam.
