MiroFish-Offline: The Offline AI Agent That Simulates What the World Will Think Before You Hit Publish
What Is MiroFish-Offline?
Imagine you are about to publish a press release. Before it goes live, you want to know how the internet is going to react. Will people trust the message? Will there be backlash? Which parts of the announcement will get twisted by social media? Until recently, answering those questions meant hiring PR consultants or running focus groups and waiting days for feedback.
MiroFish-Offline changes that entirely. It is an open-source multi-agent simulation engine that lets you upload any document — a press release, a policy draft, a financial report or even a novel — and then watch hundreds of uniquely programmed AI agents tear into it. These agents post, argue, shift opinions and influence each other in real time on a simulated social platform. You get a structured analytical report at the end of it all.
The “Offline” part is not just a detail. It is the entire point. This is a fully local fork of the original Chinese-language MiroFish project. That means no API keys. No cloud costs. No data leaving your machine. Everything runs on your own hardware using Ollama for the language model and Neo4j Community Edition for the knowledge graph.
“Feed it a press release and watch hundreds of AI personalities argue about it — hour by hour — entirely on your own machine.”
With 587 GitHub stars and 134 forks at the time of writing, MiroFish-Offline has quietly become one of the most interesting AI agent projects in the open-source ecosystem.
Original MiroFish vs MiroFish-Offline
The original MiroFish was built by 666ghj and backed by Shanda Group for the Chinese market. It is a powerful engine, but it has real barriers for English-speaking developers: a Chinese UI and mandatory dependencies on cloud services like Zep Cloud and DashScope.
Developer nikmcfly took the entire project and rebuilt it for local deployment. Here is exactly what changed:
| Feature | Original MiroFish | MiroFish-Offline |
|---|---|---|
| UI Language | Chinese | English — 1,000+ strings translated |
| Knowledge Graph | Zep Cloud (paid) | Neo4j CE 5.15 — free and local |
| Language Model | DashScope / OpenAI API | Ollama — runs qwen2.5, llama3 and others |
| Embeddings | Zep Cloud | nomic-embed-text via Ollama |
| Cloud Required? | Yes — API keys mandatory | Zero cloud dependencies |
Beyond the translation, the fork introduces a clean abstraction layer between the application and the graph database. This means you could theoretically swap Neo4j for another graph database by implementing a single class. That kind of architectural thinking is rare in open-source AI projects.
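The swap-a-backend-with-one-class idea can be sketched as a minimal interface. The class and method names below are illustrative assumptions, not the project's actual API; the point is that anything implementing the interface can stand in for the Neo4j-backed store:

```python
from abc import ABC, abstractmethod


class GraphStore(ABC):
    """Hypothetical abstraction over the knowledge-graph backend."""

    @abstractmethod
    def add_entity(self, name: str, label: str) -> None: ...

    @abstractmethod
    def add_relation(self, src: str, rel: str, dst: str) -> None: ...

    @abstractmethod
    def neighbors(self, name: str) -> list[str]: ...


class InMemoryGraphStore(GraphStore):
    """A dict-backed drop-in replacement: same interface, no Neo4j needed."""

    def __init__(self) -> None:
        self.entities: dict[str, str] = {}
        self.relations: list[tuple[str, str, str]] = []

    def add_entity(self, name: str, label: str) -> None:
        self.entities[name] = label

    def add_relation(self, src: str, rel: str, dst: str) -> None:
        self.relations.append((src, rel, dst))

    def neighbors(self, name: str) -> list[str]:
        return [dst for s, _, dst in self.relations if s == name]
```

Application code that talks only to `GraphStore` never needs to know which backend is wired in, which is what makes the single-class swap possible.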
How the Simulation Actually Works
MiroFish-Offline is not just a chatbot wrapper. It is a five-stage pipeline that goes from raw document to a structured social intelligence report. Here is how each stage plays out.
Stage 1 — Graph Build
The system reads your uploaded document and extracts entities: people, companies, events and the relationships between them. It then builds a knowledge graph using Neo4j with both individual and group memory for the agents that are about to come to life.
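The extraction-to-graph step boils down to turning (subject, relation, object) triples into graph writes. As a rough sketch of that shape (the real pipeline's schema and labels are unknown; this is not the project's code), extracted triples can be rendered as idempotent Cypher `MERGE` statements for Neo4j:

```python
def triples_to_cypher(triples):
    """Render extracted (subject, relation, object) triples as Cypher
    MERGE statements, so re-running the build never duplicates nodes.

    Note: real code should pass names as query parameters rather than
    interpolating strings, to avoid quoting and injection problems.
    """
    statements = []
    for subj, rel, obj in triples:
        statements.append(
            f"MERGE (a:Entity {{name: '{subj}'}}) "
            f"MERGE (b:Entity {{name: '{obj}'}}) "
            f"MERGE (a)-[:{rel}]->(b)"
        )
    return statements
```

Each statement is safe to replay: `MERGE` matches an existing node or relationship before creating a new one.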
Stage 2 — Environment Setup
Hundreds of agent personas are generated. Each one has a unique personality, opinion bias, reaction speed, influence level and memory of past events. Think of it as casting a social media audience before the curtain goes up.
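The attributes listed above map naturally onto a small data structure. This is a sketch under assumed names and ranges, not MiroFish-Offline's actual persona schema; seeding the generator makes a simulation run reproducible:

```python
import random
from dataclasses import dataclass, field


@dataclass
class AgentPersona:
    name: str
    opinion_bias: float    # -1.0 (hostile) .. +1.0 (supportive)
    reaction_speed: float  # 0.0 (slow to react) .. 1.0 (immediate)
    influence: float       # how strongly this agent sways others
    memory: list = field(default_factory=list)


def generate_personas(n, seed=42):
    """Cast a reproducible audience of n agents with varied traits."""
    rng = random.Random(seed)
    return [
        AgentPersona(
            name=f"agent_{i}",
            opinion_bias=rng.uniform(-1.0, 1.0),
            reaction_speed=rng.random(),
            influence=rng.random(),
        )
        for i in range(n)
    ]
```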
Stage 3 — Simulation
The agents start interacting on a simulated social platform. They post original thoughts, reply to each other, argue and shift their opinions based on what they encounter. The system tracks sentiment evolution, topic propagation and influence dynamics in real time. This is the part that genuinely surprises people when they see it for the first time.
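In MiroFish-Offline the opinion shifts are produced by the LLM agents themselves, but the dynamic being tracked resembles a classic bounded-confidence model from opinion-dynamics research. As a purely illustrative sketch (parameter names and values are assumptions, not the project's):

```python
def shift_opinion(agent_opinion, post_sentiment, author_influence,
                  tolerance=0.6, rate=0.2):
    """Nudge an agent's opinion toward a post it reads, but only if the
    post falls within the agent's tolerance band (bounded confidence)."""
    gap = post_sentiment - agent_opinion
    if abs(gap) > tolerance:
        return agent_opinion  # too far apart: the post fails to persuade
    return agent_opinion + rate * author_influence * gap
```

The tolerance band is what produces the polarization you see in real simulations: moderate agents drift together while extreme ones stop listening entirely.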
Stage 4 — Report Generation
A dedicated ReportAgent steps in after the simulation ends. It analyzes the environment, runs focus group interviews with a sample of the agents, queries the knowledge graph for evidence and produces a structured analytical report you can actually use.
Stage 5 — Direct Interaction
This is the feature that makes MiroFish-Offline feel like science fiction. You can open a chat window and talk directly to any agent that participated in the simulation. Ask them why they posted what they posted. Their full memory and personality persists so the answers are contextually grounded.
💡 Pro Insight: The hybrid search layer uses a 0.7 × vector similarity + 0.3 × BM25 keyword search blend for memory retrieval. This gives agents a far more realistic sense of “remembering” compared to simple vector databases.
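The blend is easy to sketch. The 0.7/0.3 weights come from the project; the BM25 normalization shown here (scaling against the best keyword hit in the candidate set) is an assumption about how the two score scales are reconciled:

```python
def hybrid_score(vec_sim, bm25, bm25_max, w_vec=0.7, w_kw=0.3):
    """Blend cosine similarity (already in 0..1) with a BM25 score
    normalized against the strongest keyword match in the batch."""
    kw = bm25 / bm25_max if bm25_max > 0 else 0.0
    return w_vec * vec_sim + w_kw * kw


def rank_memories(candidates):
    """candidates: list of (memory_text, vec_sim, bm25_score) tuples.
    Returns memory texts ordered by blended relevance."""
    best_bm25 = max((c[2] for c in candidates), default=0.0)
    scored = [(hybrid_score(v, b, best_bm25), m) for m, v, b in candidates]
    return [m for _, m in sorted(scored, reverse=True)]
```

The effect: a memory that matches the query's exact keywords can outrank a merely thematically similar one, which is closer to how human recall works.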
Architecture and Tech Stack
For the technically curious, the architecture is clean and well thought out. The full stack looks like this:
- Backend: Python with Flask — handles the graph API, simulation engine and report generation
- Frontend: Vue.js — translated from Chinese across 20 files
- Knowledge Graph: Neo4j Community Edition 5.15
- LLM: Ollama — supports qwen2.5:32b, qwen2.5:14b, llama3 and any other Ollama-compatible model
- Embeddings: nomic-embed-text (768 dimensions) served via Ollama
- Search: Hybrid (vector + BM25) for memory and entity retrieval
- Deployment: Docker Compose for a one-command setup
One architectural decision worth highlighting: dependency injection via Flask’s app.extensions keeps the codebase free of global singletons. That is a deliberate engineering choice, and it makes the project genuinely maintainable rather than merely functional.
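The pattern is simple to illustrate. Flask really does expose an `extensions` dict on the app object for exactly this purpose; the tiny `App` stand-in and the `"graph"` key below are assumptions used to keep the sketch dependency-free, not MiroFish-Offline's actual code:

```python
class App:
    """Minimal stand-in for a Flask app; the only part needed here is
    the `extensions` dict that Flask exposes on every app instance."""

    def __init__(self):
        self.extensions = {}


class GraphService:
    """Stand-in for the Neo4j driver wrapper."""

    def query(self, cypher):
        return f"ran: {cypher}"


def create_app():
    app = App()
    # Shared services are registered once, at app-creation time...
    app.extensions["graph"] = GraphService()
    return app


def handle_request(app):
    # ...and request handlers look them up on the app instead of
    # importing a module-level singleton, so tests can swap in fakes.
    return app.extensions["graph"].query("MATCH (n) RETURN count(n)")
```

Because nothing is stored at module level, two app instances with different backends can coexist in the same process, which is what makes the codebase straightforward to test.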
The project also works with any OpenAI-compatible API endpoint so you can point it at Claude, GPT-4 or any other provider by changing two lines in the .env file.
Real-World Use Cases
The team behind MiroFish-Offline lists four primary use cases, and every one of them has real commercial value.
📢 **PR Crisis Testing**: Simulate the full public reaction to a press release before it goes live. Identify which phrases will get weaponized and which audiences will push back hardest.
📈 **Trading Signal Generation**: Feed it a financial news release and watch simulated market participants react. Observe sentiment dynamics before the real market opens.
🏛️ **Policy Impact Analysis**: Test draft regulations against simulated public response. Understand which demographic groups will champion or resist a proposed policy.
🎭 **Creative Experiments**: One user fed it a classical Chinese novel with a missing ending. The agent swarm wrote a narratively consistent conclusion. The creative applications are genuinely open-ended.
What all four of these have in common is a need for speed, privacy and zero cloud costs. That is exactly what this tool delivers.
Getting Started in 5 Minutes
The fastest way to run MiroFish-Offline is through Docker. The entire setup is a handful of commands:
```bash
# Clone the repository
git clone https://github.com/nikmcfly/MiroFish-Offline.git
cd MiroFish-Offline
cp .env.example .env

# Start Neo4j, Ollama and MiroFish together
docker compose up -d

# Pull the required AI models into Ollama
docker exec mirofish-ollama ollama pull qwen2.5:32b
docker exec mirofish-ollama ollama pull nomic-embed-text
```
After that open http://localhost:3000 in your browser and the UI is waiting for you. No account creation. No API key entry. No credit card.
If you prefer to run things manually, the repository includes step-by-step instructions for starting Neo4j and Ollama separately before launching the Python backend and Vue frontend. The manual setup is slightly more involved but useful if you already have Neo4j or Ollama running on your machine.
Hardware Requirements
This is a demanding application. Running large language models locally is not lightweight work. Here is what you need:
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 16 GB | 32 GB |
| VRAM (GPU) | 10 GB — works with qwen2.5:14b | 24 GB — full qwen2.5:32b performance |
| Disk Space | 20 GB | 50 GB |
| CPU Cores | 4 cores | 8+ cores |
CPU-only mode does work, but inference will be significantly slower. If you are on a lighter machine, use qwen2.5:14b or qwen2.5:7b as your model. The smaller models still produce impressive simulation results; they are just less nuanced in their output.
⚡ Quick Tip: If you are on an Apple Silicon Mac with unified memory (M2 Pro or M3 with 36 GB+), you have enough memory bandwidth to run the 32b model smoothly without a discrete GPU. Ollama has excellent Metal support on macOS.
Frequently Asked Questions
**What is MiroFish-Offline exactly?** MiroFish-Offline is a fully local, open-source multi-agent simulation engine. You feed it a document and it generates hundreds of AI agents with distinct personalities that simulate public reaction on a social platform — entirely on your own hardware, with no cloud APIs required.
**Does MiroFish-Offline require an internet connection or API keys?** No. It has zero cloud dependencies. The language model runs through Ollama locally, and the knowledge graph uses Neo4j Community Edition, which is also local. After installation the tool works completely offline.
**What LLM models does MiroFish-Offline support?** It defaults to qwen2.5:32b through Ollama but supports any model compatible with the OpenAI API format. You can swap to qwen2.5:14b, llama3 or even a cloud provider like Claude or GPT-4 by editing the environment file.
**Is MiroFish-Offline free to use?** Yes. It is released under the AGPL-3.0 license, which means it is fully free for personal and research use. Commercial use under AGPL-3.0 requires that derivative works also be open-source.
**How is MiroFish-Offline different from other AI simulation tools?** Most AI simulation tools either rely on cloud APIs or are closed-source products. MiroFish-Offline is the only known open-source tool that combines multi-agent swarm simulation, a local knowledge graph and a structured report generation pipeline — all running offline on your own hardware.
**Who built MiroFish-Offline?** It is an English-language fork created by GitHub user nikmcfly, based on the original MiroFish project by 666ghj, which was originally supported by Shanda Group. The simulation engine is also powered by the OASIS framework from the CAMEL-AI team.
Final Verdict
MiroFish-Offline is one of those rare open-source projects that solves a genuinely hard problem in a genuinely clever way. The idea of simulating hundreds of AI personalities responding to a document — and then being able to interview them afterward — sounds like something out of a speculative fiction novel. The fact that it all runs locally on your hardware makes it even more remarkable.
Is it perfect? No. The hardware requirements are steep for most laptop users and the Docker setup can take a while the first time you pull the models. But for a PR professional, a policy analyst, a quantitative trader or a researcher studying social dynamics this tool offers something that simply does not exist anywhere else in the open-source world.
The architectural decisions — the clean abstraction layer, the hybrid search approach and the dependency injection pattern — show that this is a project built by someone who thinks about software quality as much as they think about clever ideas. That is a rare combination and it gives MiroFish-Offline a strong foundation for continued development.
“MiroFish-Offline is not just a demo. It is production-grade infrastructure for simulating the social world — and it costs you exactly zero dollars to run.”
If you are curious about what AI agents can do when they work together in a structured simulation, this is one of the best places to find out. Star the repo, spin up the Docker container and upload something that matters to you. The results have a way of being more unsettling and more insightful than you expect.
⭐ View MiroFish-Offline on GitHub → github.com/nikmcfly/MiroFish-Offline