# SSH Honeypot with LLM Backend
A high-interaction SSH honeypot that uses an LLM to simulate a realistic shell environment. All attacker interactions are logged and can be viewed in real-time through a web dashboard with a world map visualization.
## Features

- 🔐 **SSH Honeypot** - Accepts all authentication attempts (optional strict mode)
- 🤖 **LLM-Powered Responses** - Uses OpenAI or a local Ollama model to generate realistic shell output
- 🌍 **World Map Visualization** - See attack origins in real time on a zoomable map
- 📊 **Real-time Dashboard** - Monitor attacks as they happen via WebSocket
- 📈 **Statistics Tab** - Top commands and country rankings, with a timeframe selector
- 📜 **Session Replay** - Review complete attack sessions
- 🐳 **Fully Dockerized** - Easy deployment with Docker Compose
## Quick Start

### 1. Clone and Configure

```bash
git clone <repo>
cd ssh-honeypot
cp .env.example .env
```
Edit `.env` and set your LLM provider.

For OpenAI:

```env
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
```

For local Ollama:

```env
LLM_PROVIDER=ollama
OLLAMA_MODEL=llama3.2
```
### 2. Start the Services

Using OpenAI:

```bash
docker-compose up -d
```

Using Ollama:

```bash
docker-compose --profile ollama up -d
```
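If something doesn't come up, check the container status and logs (the `backend` service name is assumed from this repo's layout; `honeypot-ollama` is the Ollama container name used later in this README):

```bash
docker-compose ps                # all services should show "Up"
docker-compose logs -f backend   # follow the honeypot/API logs

# Ollama profile only: make sure the configured model is pulled
docker exec -it honeypot-ollama ollama pull llama3.2
```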
### 3. Access

| Service | Endpoint |
|---|---|
| Dashboard | http://localhost:5173 |
| SSH Honeypot | `ssh root@<host-ip> -p 2222` |
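For a quick smoke test, connect and type a few commands; unless `ALLOWED_USERS` is set, any password is accepted, and each command should appear on the dashboard in real time (the commands below are arbitrary examples):

```bash
ssh root@localhost -p 2222   # any password works in the default (non-strict) mode

# Inside the simulated shell, try e.g.:
#   uname -a
#   ls -la /root
#   cat /etc/passwd
```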
## Configuration

All settings live in `.env`:
| Variable | Default | Description |
|---|---|---|
| `LLM_PROVIDER` | `openai` | `openai` or `ollama` |
| `OPENAI_API_KEY` | - | Your OpenAI API key |
| `OPENAI_MODEL` | `gpt-4o-mini` | OpenAI model to use |
| `OLLAMA_MODEL` | `llama3.2` | Ollama model to use |
| `SSH_PORT` | `2222` | External SSH port |
| `SSH_HOSTNAME` | `ubuntu-server` | Hostname in shell prompt |
| `ALLOWED_USERS` | - | Restrict to specific users (`user:pass,user2:pass2`) |
| `LLM_SYSTEM_PROMPT` | - | Custom LLM prompt (use `{cwd}` and `{hostname}`) |
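Putting these together, a complete `.env` for a locked-down OpenAI setup might look like the sketch below; every value here is illustrative.

```env
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini

SSH_PORT=2222
SSH_HOSTNAME=web-prod-01

# Strict mode: only these credentials are accepted
ALLOWED_USERS=root:toor,admin:admin123

# {cwd} and {hostname} are substituted at runtime
LLM_SYSTEM_PROMPT=You are a Linux shell on {hostname} (cwd: {cwd}). Output raw command results only.
```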
## Architecture

```
┌─────────────────────────────────────────────────────────┐
│             Docker Network (172.28.0.0/16)              │
│                                                         │
│  ┌────────────┐   ┌────────────┐   ┌────────────┐       │
│  │  Frontend  │   │  Backend   │   │   Ollama   │       │
│  │  (React)   │◄──│  (FastAPI) │──►│ (optional) │       │
│  │   :5173    │   │ :8000/2222 │   │   :11434   │       │
│  └────────────┘   └────────────┘   └────────────┘       │
│                          │                              │
│                   ┌──────┴──────┐                       │
│                   │   SQLite    │                       │
│                   │    + Geo    │                       │
│                   └─────────────┘                       │
└─────────────────────────────────────────────────────────┘
```
**Key Points:**

- SSH exposed on all interfaces (`0.0.0.0`)
- Frontend accessible to other Docker containers
- Backend API is read-only - no POST/PUT/DELETE (see the check below)
- Ollama only starts with `--profile ollama`
- Automatic database migration for schema updates
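A quick way to sanity-check the read-only guarantee from the host (the endpoint path here is illustrative, not a documented route):

```bash
# Reads succeed...
curl -s http://localhost:8000/api/sessions | head

# ...writes are rejected - expect 404/405, never a successful mutation
curl -s -o /dev/null -w "%{http_code}\n" \
  -X DELETE http://localhost:8000/api/sessions/1
```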
## Security Notes

⚠️ **Warning**: This is a honeypot designed to be attacked.
- Deploy in an isolated environment (dedicated VM/VPS)
- The API is read-only, but the dashboard has no authentication - avoid exposing port 5173 to the internet (see the tunnel example below)
- Monitor LLM API costs
- Review logs regularly
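Because the dashboard has no login, one simple hardening step is to firewall port 5173 and reach it over an SSH tunnel instead (host and user below are placeholders):

```bash
# Forward the remote dashboard to your local machine...
ssh -L 5173:localhost:5173 you@your-honeypot-vps

# ...then browse to http://localhost:5173 locally
```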
## Development

Backend:

```bash
cd backend
pip install -r requirements.txt
python -m uvicorn app.main:app --reload
```
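While the dev server is running, FastAPI's auto-generated interactive docs (served at `/docs` by default, unless disabled in the app) are a convenient way to browse the read-only endpoints:

```bash
curl -I http://localhost:8000/docs   # expect HTTP 200 once the server is up
```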
Frontend:

```bash
cd frontend
npm install
npm run dev
```
## Troubleshooting

- **LLM outputting Markdown/code blocks**: The system prompt is tuned to prevent this, but some models (especially smaller ones) may still occasionally emit markdown. Adjust `LLM_SYSTEM_PROMPT` in `.env` (see the example below) or make `backend/app/llm_client.py` stricter.
- **Connection refused**: Ensure the backend container is running (`docker-compose ps`).
- **Ollama errors**: Verify Ollama is running and the model is pulled (`docker exec -it honeypot-ollama ollama list`).
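If a model keeps emitting fences despite the tuned prompt, a blunter `LLM_SYSTEM_PROMPT` sometimes helps; the wording below is just one possible starting point (`{cwd}` and `{hostname}` are the documented placeholders):

```env
LLM_SYSTEM_PROMPT=You are the raw stdout of a Linux shell on {hostname} (cwd: {cwd}). Never use markdown, backticks, code fences, or explanations - print only what the command itself would print.
```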
## License
MIT