A powerful, AI-driven Telegram bot for code generation, debugging, and intelligent conversations
- AI-Powered Conversations - Natural, context-aware conversations powered by Google's Gemma 2
- Code Generation & Debugging - Generate, analyze, and debug code in any programming language
- Channel Membership Verification - Require users to join a specific channel before using the bot
- Conversation Memory - Maintains context across messages with configurable history limit
- Smart Message Splitting - Automatically handles long responses within Telegram's limits
- Health Check Endpoints - Built-in HTTP endpoints for monitoring and deployment platforms
- Cloud Ready - Dockerized for easy deployment on any platform
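The smart message splitting feature above can be sketched as follows. This is a minimal illustration, not the bot's actual implementation; `MAX_LEN` mirrors Telegram's 4096-character per-message limit, and the function name is hypothetical:

```python
MAX_LEN = 4096  # Telegram's hard limit per message


def split_message(text: str, limit: int = MAX_LEN) -> list[str]:
    """Split a long reply into chunks, preferring newline boundaries."""
    chunks = []
    while len(text) > limit:
        # Break at the last newline inside the limit, if there is one
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

Each chunk can then be sent as a separate Telegram message in order.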
| Command | Description |
|---|---|
| `/start` | Start the bot and verify channel membership |
| `/help` | Show available commands and features |
| `/clear` | Clear conversation history and start fresh |
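The conversation memory that `/clear` resets can be sketched with a per-user rolling buffer. This is an illustrative sketch, not the bot's actual code; the `history`, `remember`, and `clear` names are assumptions, and the limit matches the `MAX_HISTORY_PER_USER` setting described below:

```python
from collections import defaultdict, deque

MAX_HISTORY_PER_USER = 20  # configurable conversation memory limit

# Per-user rolling history; deque drops the oldest turns automatically
history: dict[int, deque] = defaultdict(
    lambda: deque(maxlen=MAX_HISTORY_PER_USER)
)


def remember(user_id: int, role: str, content: str) -> None:
    """Record one conversation turn for a user."""
    history[user_id].append({"role": role, "content": content})


def clear(user_id: int) -> None:
    """What /clear does conceptually: start a fresh conversation."""
    history.pop(user_id, None)
```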
- Python 3.10+ - Core language
- aiogram 3.x - Modern Telegram Bot API framework
- aiohttp - Async HTTP client for API calls
- OpenRouter API - AI model inference
- Docker - Containerization
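Model inference goes through OpenRouter's OpenAI-compatible chat-completions endpoint, with aiohttp handling the async POST. A minimal sketch of the request shape (the `build_request` helper is illustrative, not part of main.py):

```python
import json

# OpenRouter's OpenAI-compatible chat-completions endpoint
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(api_key: str, model: str, messages: list[dict]) -> tuple[dict, str]:
    """Headers and JSON body for an OpenRouter chat-completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return headers, body
```

The headers and body would then be passed to an `aiohttp.ClientSession.post()` call against `OPENROUTER_URL`.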
- Python 3.10 or higher
- Telegram Bot Token (from @BotFather)
- OpenRouter API Key (from openrouter.ai)
1. Clone the repository

   ```bash
   git clone https://github.com/shafayat83/Scriptify.git
   cd Scriptify
   ```

2. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

3. Set up environment variables

   ```bash
   cp .env.example .env
   # Edit .env with your tokens
   ```

4. Run the bot

   ```bash
   python main.py
   ```
```bash
# Build and run
docker-compose up -d

# View logs
docker-compose logs -f
```

The bot is configured for easy deployment on:
- cloud.tranger.xyz - See `deploy.yaml`
- Railway - See `railway.json`
- Choreo - See `DEPLOY.md`
- Any Docker-compatible platform
For detailed deployment instructions, see DEPLOY.md.
| Variable | Required | Description |
|---|---|---|
| `TELEGRAM_BOT_TOKEN` | Yes | Bot token from @BotFather |
| `OPENROUTER_API_KEY` | Yes | API key from openrouter.ai |
| `PORT` | No | HTTP port for health checks (default: 8000) |
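The required variables above should be validated at startup so the bot fails fast with a clear error. A minimal sketch (the `load_config` helper is illustrative, not part of main.py):

```python
def load_config(env: dict) -> dict:
    """Validate required variables and apply the PORT default."""
    for name in ("TELEGRAM_BOT_TOKEN", "OPENROUTER_API_KEY"):
        if not env.get(name):
            raise ValueError(f"{name} is required")
    return {
        "token": env["TELEGRAM_BOT_TOKEN"],
        "api_key": env["OPENROUTER_API_KEY"],
        "port": int(env.get("PORT", "8000")),  # default health-check port
    }
```

At startup this would be called as `load_config(os.environ)`.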
Edit these in main.py:

```python
CHANNEL_USERNAME = "@YourChannel"          # Channel users must join
OPENROUTER_MODEL = "google/gemma-2-9b-it"  # AI model to use
MAX_HISTORY_PER_USER = 20                  # Conversation memory limit
```

When deployed, these endpoints are available on the configured port:
| Endpoint | Description |
|---|---|
| `GET /health` | Health status and bot connection state |
| `GET /ready` | Readiness probe for load balancers |
| `GET /` | Service confirmation |
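The three endpoints above can be served with aiohttp's built-in web server. A minimal sketch under assumptions: the handler names and the `bot_connected` flag are illustrative, not the bot's actual code:

```python
from aiohttp import web

bot_connected = True  # illustrative flag; the real bot tracks its connection state


async def health(request) -> web.Response:
    # GET /health: overall status plus the bot connection state
    return web.json_response({"status": "ok", "bot_connected": bot_connected})


async def ready(request) -> web.Response:
    # GET /ready: lets load balancers gate traffic until the bot is up
    if not bot_connected:
        return web.json_response({"ready": False}, status=503)
    return web.json_response({"ready": True})


async def index(request) -> web.Response:
    # GET /: simple service confirmation
    return web.Response(text="Scriptify bot is running")


def make_app() -> web.Application:
    app = web.Application()
    app.add_routes([
        web.get("/health", health),
        web.get("/ready", ready),
        web.get("/", index),
    ])
    return app
```

The app would run alongside the bot's polling loop, listening on the configured `PORT`.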
```
scriptify/
├── main.py             # Bot application code
├── requirements.txt    # Python dependencies
├── Dockerfile          # Container configuration
├── docker-compose.yml  # Docker Compose setup
├── deploy.yaml         # Cloud deployment config
├── railway.json        # Railway.app config
├── .env.example        # Environment template
├── .dockerignore       # Docker ignore rules
├── .gitignore          # Git ignore rules
└── DEPLOY.md           # Deployment guide
```
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- aiogram - Telegram Bot framework
- OpenRouter - AI model aggregation platform
- Google Gemma - Open AI models
Made with ❤️ by Shafayat