You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Test your prompts, agents, and RAGs. Red teaming/pentesting/vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration. Used by OpenAI and Anthropic.
Prompture is an API-first library for requesting structured JSON output from LLMs (or any structure), validating it against a schema, and running comparative tests between models.
prompt-evaluator is an open-source toolkit for evaluating, testing, and comparing LLM prompts. It provides a GUI-driven workflow for running prompt tests, tracking token usage, visualizing results, and ensuring reliability across models like OpenAI, Claude, and Gemini.
Test Claude Projects without copy-pasting. Local workbench for prompt engineering, agent testing, and workflow iteration. Direct Claude.ai access via cookie auth, 20+ prompt templates, web fetch/search tools, file uploads. Stop switching tabs to test your prompts.
curl for prompts. Run .prompt files against any LLM (Anthropic, OpenAI, Ollama) from the terminal. Treat prompts as code — version them, review them in PRs, and test them in CI.
OWASP LLM Top 10 vulnerability scanner CLI — test your AI endpoints for prompt injection, jailbreaks, data leakage & more. Fast red-teaming tool with pass/fail reports + fix recommendations. 🛡️
AI agent that helps you create, test, and iterate on LLM prompts. Saves versioned artifacts, generates test samples, runs evaluations, and provides detailed performance analysis.
PromptGuard is a pragmatic, opinionated framework for establishing continuous integration for LLM behavior. It operates on a simple, verifiable principle: run the same prompts across multiple model configurations, compare outputs against defined expectations, and flag semantic regressions.