# Optimize

Engineer your prompts for better agent trajectories.
## Prerequisites

- Node.js 18+
- uv (Python package/runtime manager)
- OpenAI API key and AI Gateway API key
## Quick start

```sh
# Install dependencies
npm install

# Start the web app and the Python optimizer together
npm run dev:all
```

The web app runs at http://localhost:3000. The optimizer service runs locally and is started for you.
## Usage

- Start with a base prompt by updating `data/prompt.md`.
- Enter Teach Mode and provide a scenario (what the AI should do).
- Chat, then collect ideal samples by selecting your question and the AI's answer.
- Collect as many samples as you like to cover different cases.
- Go to the Optimize tab and click Optimize.
- Watch live optimization in the History tab.
- The final prompt is saved to `data/prompt.md` and shown in the Prompt tab.
- Optimization versions are saved in `data/versions` and listed in the History tab.
- Try the Chat again using the updated prompt from the latest run.
## Tools

Edit `src/lib/tools.ts` to add or replace tools. It contains placeholder tools; anything you define and export there (using `tool(...)`) is automatically available to the agent in chat and optimization, with no extra wiring needed.
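For illustration, here is a minimal sketch of what a custom tool could look like. The exact shape of the `tool(...)` helper depends on the SDK this project wires in, so the snippet below defines a local stand-in, and the example tool name (`getUtcTime`) is hypothetical rather than something shipped in the repo.

```typescript
// Sketch only: a local stand-in for the project's `tool(...)` helper so this
// snippet runs on its own. In src/lib/tools.ts you would use the real helper.
type Tool<I, O> = {
  description: string;
  execute: (input: I) => Promise<O>;
};

function tool<I, O>(definition: Tool<I, O>): Tool<I, O> {
  return definition;
}

// Hypothetical example tool: returns the current UTC time as an ISO string.
export const getUtcTime = tool({
  description: "Return the current UTC time as an ISO 8601 string",
  execute: async (_input: Record<string, never>) => new Date().toISOString(),
});
```

Because exported tools are picked up automatically, adding one more `tool(...)` export alongside the placeholders is all it should take to expose it to the agent.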
## How it works

- DSPy provides the optimization framework used by this project.
- GEPA (Genetic-Pareto) is a reflective optimizer that evolves prompts using textual feedback and Pareto-based selection.
- A lightweight Python service exposes the optimizer; the web app calls it via `/api/optimize`.
- Artifacts written by optimization:
  - `data/prompt.md` — current optimized prompt
  - `data/complete-optimization.json` — full optimization results and metadata
  - `data/versions/` — versioned optimization runs and histories
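To make the Pareto-based selection idea concrete, here is an illustrative sketch (in TypeScript, not GEPA's actual Python/DSPy implementation): rather than keeping only the prompt with the best average score, GEPA-style selection keeps every candidate that is best on at least one training example, preserving diverse "specialists" for further evolution.

```typescript
// Illustration of Pareto-based candidate selection, not GEPA's real code.
// Each candidate prompt has one score per training example.
type Candidate = { prompt: string; scores: number[] };

// Keep every candidate that achieves the top score on at least one example.
function paretoFront(candidates: Candidate[]): Candidate[] {
  return candidates.filter((c) =>
    c.scores.some((score, i) =>
      candidates.every((other) => other.scores[i] <= score)
    )
  );
}
```

A candidate that is mediocre on average but uniquely strong on one hard example survives this filter, which is what lets the optimizer explore beyond a single "best so far" prompt.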
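As a sketch of that round trip, a call from the web app to the optimizer service might look roughly like the following. The request shape (`prompt` plus collected `samples`) is an assumption for illustration, not the service's documented contract.

```typescript
// Hypothetical request shape; the actual /api/optimize payload may differ.
interface OptimizeRequest {
  prompt: string;
  samples: { question: string; answer: string }[];
}

function buildOptimizeRequest(
  prompt: string,
  samples: OptimizeRequest["samples"]
): OptimizeRequest {
  return { prompt, samples };
}

// POST the current prompt plus collected samples to the optimizer service.
async function optimize(req: OptimizeRequest): Promise<unknown> {
  const res = await fetch("/api/optimize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`optimize failed: ${res.status}`);
  return res.json();
}
```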
Learn more: DSPy Documentation · GEPA Optimizer · GEPA Tweet · GEPA Paper
Built by the team that built Langtrace AI and Zest AI.
## License

Apache-2.0. See LICENSE.