A comprehensive environment for testing and comparing different LLM providers and tools.
- Support for multiple LLM providers (OpenAI, Anthropic, etc.)
- Vector store integration (Chroma, Pinecone, etc.)
- GUI interface for easy testing
- Comprehensive testing suite
- Docker support
- Automated setup and configuration
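Supporting multiple providers usually comes down to a thin adapter layer that maps provider names to call functions. A minimal sketch of the idea (the registry, `register`, and `compare` names are illustrative, not this repo's actual API; real adapters would wrap the OpenAI and Anthropic SDKs):

```python
from typing import Callable, Dict

# Hypothetical registry: provider name -> function that takes a prompt
# and returns the model's reply.
ProviderFn = Callable[[str], str]
PROVIDERS: Dict[str, ProviderFn] = {}

def register(name: str):
    """Decorator that adds a provider adapter to the registry."""
    def wrap(fn: ProviderFn) -> ProviderFn:
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("echo")  # stand-in provider so the sketch runs without API keys
def echo_provider(prompt: str) -> str:
    return f"echo: {prompt}"

def compare(prompt: str) -> Dict[str, str]:
    """Send the same prompt to every registered provider and collect replies."""
    return {name: fn(prompt) for name, fn in PROVIDERS.items()}

print(compare("hello"))  # one entry per registered provider
```

Comparing responses across providers is then just a matter of registering one adapter per SDK and iterating over the registry.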
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/llm-dev-env.git
  cd llm-dev-env
  ```

- Install the environment:

  ```bash
  make install
  ```
- Copy `.env.template` to `.env` and add your API keys:

  ```bash
  cp .env.template .env
  # Edit .env with your API keys
  ```
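The template typically holds one key per provider. An illustrative `.env` (the variable names here are assumptions — check `.env.template` for the actual ones):

```env
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
PINECONE_API_KEY=your-pinecone-key
```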
- Set up the environment:

  ```bash
  make setup
  ```

- Start the GUI:

  ```bash
  make gui
  ```
- `make install`: Install the environment
- `make setup`: Initialize the environment
- `make test`: Run tests
- `make docker`: Build and start Docker container
- `make clean`: Clean up temporary files
- `make format`: Format code
- `make lint`: Run linters
- `make jupyter`: Start Jupyter Lab
- `make gui`: Start Streamlit GUI
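Targets like these are usually thin wrappers around pip, pytest, and Streamlit commands. A sketch of what a few of the recipes might look like (illustrative only, not the repo's actual Makefile):

```makefile
.PHONY: install test gui

install:
	# create a virtualenv and install dependencies
	python -m venv venv && venv/bin/pip install -r requirements.txt

test:
	# run the test suite
	venv/bin/pytest tests/

gui:
	# launch the Streamlit interface
	venv/bin/streamlit run src/gui.py
```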
```
llm-dev-env/
├── config/             # Configuration files
├── data/               # Data storage
├── output/             # Output files
├── logs/               # Log files
├── prompts/            # Saved prompts
├── src/                # Source code
├── tests/              # Test files
├── .env                # Environment variables
├── Dockerfile          # Docker configuration
├── docker-compose.yml
├── Makefile
└── README.md
```
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
MIT License
5. Finally, add a `.gitignore`:
```gitignore
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Virtual Environment
venv/
ENV/
# IDE
.idea/
.vscode/
*.swp
*.swo
# Project specific
.env
logs/
output/
data/
.cache/
# Jupyter
.ipynb_checkpoints
*.ipynb
# Testing
.coverage
htmlcov/
.pytest_cache/
.mypy_cache/
# Docker
.docker/
```
To use everything:
- Clone/download all files to your project directory
- Run:

  ```bash
  make install
  # Edit .env with your API keys
  make setup
  make gui  # For the Streamlit interface
  ```
The Streamlit GUI provides an easy way to:
- Test different LLM providers
- Save and load prompts
- Adjust parameters
- Compare responses
- View and export results
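Saving and loading prompts can be as simple as JSON files in the `prompts/` directory. A sketch of the idea (the function names and file layout are assumptions, not the GUI's actual implementation):

```python
import json
from pathlib import Path

PROMPTS_DIR = Path("prompts")  # matches the prompts/ directory in the project tree

def save_prompt(name: str, text: str, params: dict) -> Path:
    """Persist a prompt plus its generation parameters as JSON."""
    PROMPTS_DIR.mkdir(exist_ok=True)
    path = PROMPTS_DIR / f"{name}.json"
    path.write_text(json.dumps({"text": text, "params": params}, indent=2))
    return path

def load_prompt(name: str) -> dict:
    """Load a previously saved prompt by name."""
    return json.loads((PROMPTS_DIR / f"{name}.json").read_text())

save_prompt("greeting", "Say hello.", {"temperature": 0.7})
print(load_prompt("greeting"))
```

Keeping the parameters alongside the prompt text makes saved runs reproducible when comparing responses later.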