SSH Honeypot with LLM Backend

A high-interaction SSH honeypot that uses an LLM to simulate a realistic shell environment. All attacker interactions are logged and can be viewed in real-time through a web dashboard with a world map visualization.

Features

  • 🔐 SSH Honeypot - Accepts all authentication attempts (optional strict mode)
  • 🤖 LLM-Powered Responses - Uses OpenAI or local Ollama to generate realistic shell output
  • 🌍 World Map Visualization - See attack origins in real-time with zoomable map
  • 📊 Real-time Dashboard - Monitor attacks as they happen via WebSocket
  • 📈 Statistics Tab - Top commands and country rankings, with a selectable timeframe
  • 📜 Session Replay - Review complete attack sessions
  • 🐳 Fully Dockerized - Easy deployment with Docker Compose

Quick Start

1. Clone and Configure

git clone <repo>
cd ssh-honeypot
cp .env.example .env

Edit .env and set your LLM provider:

For OpenAI:

LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here

For Local Ollama:

LLM_PROVIDER=ollama
OLLAMA_MODEL=llama3.2

2. Start the Services

Using OpenAI:

docker-compose up -d

Using Ollama:

docker-compose --profile ollama up -d
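
Before testing, you can confirm the containers came up cleanly with the standard Compose commands (the backend service name is assumed to match docker-compose.yml; adjust if yours differs):

docker-compose ps
docker-compose logs -f backend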

3. Access

Service        URL / Command
Dashboard      http://localhost:5173
SSH Honeypot   ssh root@<host-ip> -p 2222
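
For a quick smoke test from the host itself (any credentials are accepted unless ALLOWED_USERS is set):

ssh -p 2222 root@localhost

Commands you type (ls, whoami, uname -a, ...) are answered by the LLM, and the session appears on the dashboard in real time.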

Configuration

All settings in .env:

Variable            Default         Description
LLM_PROVIDER        openai          openai or ollama
OPENAI_API_KEY      -               Your OpenAI API key
OPENAI_MODEL        gpt-4o-mini     OpenAI model to use
OLLAMA_MODEL        llama3.2        Ollama model to use
SSH_PORT            2222            External SSH port
SSH_HOSTNAME        ubuntu-server   Hostname in the shell prompt
ALLOWED_USERS       -               Restrict to specific users (user:pass,user2:pass2)
LLM_SYSTEM_PROMPT   -               Custom LLM prompt (use {cwd} and {hostname})
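
A minimal example .env using only the variables above (all values are placeholders):

LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini
SSH_PORT=2222
SSH_HOSTNAME=web-prod-01
ALLOWED_USERS=root:toor,admin:admin123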

Architecture

┌─────────────────────────────────────────────────────────┐
│              Docker Network (172.28.0.0/16)             │
│                                                         │
│  ┌────────────┐   ┌────────────┐   ┌────────────┐      │
│  │  Frontend  │   │  Backend   │   │   Ollama   │      │
│  │  (React)   │◄──│ (FastAPI)  │──►│ (optional) │      │
│  │   :5173    │   │ :8000/2222 │   │   :11434   │      │
│  └────────────┘   └────────────┘   └────────────┘      │
│                          │                              │
│                   ┌──────┴──────┐                      │
│                   │   SQLite    │                      │
│                   │    + Geo    │                      │
│                   └─────────────┘                      │
└─────────────────────────────────────────────────────────┘

Key Points:

  • SSH exposed on all interfaces (0.0.0.0)
  • Frontend accessible to other Docker containers
  • Backend API is read-only (no POST/PUT/DELETE)
  • Ollama only starts with --profile ollama
  • Auto database migration for schema updates
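
As a rough sketch, the isolated network in the diagram corresponds to a Compose definition along these lines; the committed docker-compose.yml is authoritative and its service and network names may differ:

networks:
  honeypot-net:
    ipam:
      config:
        - subnet: 172.28.0.0/16

services:
  backend:
    networks:
      - honeypot-net
    ports:
      - "2222:2222"    # SSH honeypot reachable on all interfaces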

Security Notes

⚠️ Warning: This is a honeypot designed to be attacked.

  • Deploy in an isolated environment (dedicated VM/VPS)
  • The API is read-only, but the dashboard has no authentication
  • Monitor LLM API costs
  • Review logs regularly
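
One way to apply this on a dedicated VPS is a host firewall that exposes only the honeypot port and limits the dashboard to a trusted address (ufw shown as an example; 203.0.113.10 and the admin port 22 are placeholders for your own setup):

sudo ufw default deny incoming
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp     # your real admin SSH
sudo ufw allow from 203.0.113.10 to any port 5173 proto tcp   # dashboard
sudo ufw allow 2222/tcp                                       # honeypot, open to everyone
sudo ufw enable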

Development

Backend:

cd backend
pip install -r requirements.txt
python -m uvicorn app.main:app --reload
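
If the backend picks its settings up from the environment (as the Docker setup suggests), a local run can be pointed at an already-running Ollama instance like this:

LLM_PROVIDER=ollama OLLAMA_MODEL=llama3.2 python -m uvicorn app.main:app --reload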

Frontend:

cd frontend
npm install
npm run dev

Troubleshooting

  • LLM Outputting Markdown/Code Blocks: The system prompt is tuned to prevent this, but some models (especially smaller ones) may still occasionally emit markdown. You can adjust LLM_SYSTEM_PROMPT in .env (see the example after this list) or make backend/app/llm_client.py stricter.
  • Connection Refused: Ensure the backend container is running (docker-compose ps).
  • Ollama Errors: Verify Ollama is running and the model is pulled (docker exec -it honeypot-ollama ollama list).
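
For instance, a stricter prompt override in .env could look like this (the wording is only a suggestion; {cwd} and {hostname} are filled in by the backend):

LLM_SYSTEM_PROMPT="You are a Linux shell on {hostname}. The current working directory is {cwd}. Respond with raw terminal output only - never markdown, code fences, or explanations."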

License

MIT