A lightweight pipeline for extraction, chunking, embeddings, and search.
distill_rag is a small but powerful toolkit that helps you transform messy text sources (HTML pages, transcripts, articles, archives) into clean, structured data that AI models can learn from.
It's designed for people who want to build domain-specific models and the clean datasets they depend on.

If you've ever tried to fine-tune a model and realised the hardest part is actually preparing the dataset, this toolkit is for that problem.

Training or distilling a specialised AI model requires clean, coherent, well-structured data. But most text online is noisy, cluttered with markup and boilerplate, and inconsistently structured.

Before you can train a model, you need a pipeline that turns raw text into polished, training-ready data.

distill_rag gives you that pipeline. It helps you:

- extract clean, structured sessions from raw HTML
- break them into long, context-rich chunks
- embed and index the chunks
- search them lexically and semantically
This structure mirrors the data format most distillation workflows expect.
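For instance, each session converts almost directly into the chat-style JSONL records most fine-tuning stacks consume (a sketch; exact field names vary by trainer):

```json
{"messages": [{"role": "user", "content": "Q: What is service?"}, {"role": "assistant", "content": "Service begins with kindness."}]}
```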
Goal: Make it easy for researchers and builders to create high-quality domain-specific AI models.
Built entirely in Node.js, distill_rag leverages async I/O and lightweight concurrency to deliver fast performance, often 5–10× faster than equivalent Python-based tools like LlamaIndex, LangChain, or Haystack. This makes it ideal for local workflows on consumer hardware, where you can process thousands of chunks in minutes without heavy dependencies or complex setups.
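The core pattern is simple: fire embedding requests in bounded parallel batches instead of one at a time. A minimal sketch, assuming Node 18+ (global `fetch`) and an Ollama-style `/api/embeddings` endpoint; the toolkit's own implementation may differ:

```js
const EMBED_URL = process.env.EMBED_URL || "http://localhost:11434/api/embeddings";
const EMBED_MODEL = process.env.EMBED_MODEL || "mxbai-embed-large";

// Embed a single text via an Ollama-style endpoint that accepts
// { model, prompt } and returns { embedding }.
async function embedOne(text) {
  const res = await fetch(EMBED_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: EMBED_MODEL, prompt: text }),
  });
  const { embedding } = await res.json();
  return embedding;
}

// Embed many texts with at most `limit` requests in flight per batch.
async function embedAll(texts, limit = 8) {
  const out = [];
  for (let i = 0; i < texts.length; i += limit) {
    const batch = texts.slice(i, i + limit);
    out.push(...(await Promise.all(batch.map(embedOne))));
  }
  return out;
}
```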
Feed it a folder of HTML (scraped, archived, downloaded). It removes noise and produces structured JSON sessions.
Text is broken into long, context-rich chunks (ideal for distillation).
Each chunk is embedded using a local model like mxbai-embed-large.
Chunks are stored in Elasticsearch as vectors + metadata. You can then run semantic search to retrieve relevant material.
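A plausible shape for that index, sketched as the mapping the indexer might create (not copied from the repo; field names follow the metadata list later in this README, `content` is an assumed name for the chunk text, and 1024 is mxbai-embed-large's embedding dimension):

```js
// Sketch of a plausible Elasticsearch 8.x mapping for distill chunks.
const mapping = {
  mappings: {
    properties: {
      title: { type: "text" },
      session_date: { type: "date" },
      source: { type: "keyword" },
      chunk_index: { type: "integer" },
      content: { type: "text" }, // assumed name for the raw chunk text
      embedding: {
        type: "dense_vector", // enables kNN / cosine similarity search
        dims: 1024,
        index: true,
        similarity: "cosine",
      },
    },
  },
};
```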
distill_rag/
├── data_extraction/
│   ├── clean_html.js                # strip noise safely
│   ├── extractor.js                 # extract Q/A style turns
│   ├── convert_raw_to_sessions.js   # HTML → structured JSON
│   └── walk_and_extract.js          # CLI to batch-convert directories
│
├── indexing/
│   ├── index_distill_chunks.js      # long-chunk indexer
│   └── rebuild_distill_index.sh     # wipe + rebuild helper
│
├── search/
│   ├── search_distill_chunks.js     # BM25 / vector / hybrid search
│   └── search_cli.js                # CLI search tool
│
├── tests/                           # jest-based automated test suite
│
├── prompts/                         # optional prompt templates
├── shared/                          # shared utilities
├── cleanup.sh                       # remove build artefacts
├── jest.config.js
├── package.json
└── README.md
Requirements:

- Node.js
- a running Elasticsearch instance
- a local embedding API (e.g. Ollama serving `mxbai-embed-large`)

Install:
npm install
Convert a directory:
node data_extraction/walk_and_extract.js raw_html/ extracted_sessions/
Output example:
{
"title": "example.html",
"turns": [
{ "role": "user", "content": "Q: What is service?" },
{ "role": "assistant", "content": "Service begins with kindness." }
]
}
Behind the scenes it:

- strips noise and boilerplate from the HTML
- splits the cleaned text into Q/A-style turns (user = first block, rest assistant; see the sketch below)
- writes one structured JSON session per input file
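The role-assignment rule is simple enough to sketch in a few lines (the real logic lives in `data_extraction/extractor.js` and may differ):

```js
// First non-empty block becomes the user turn; every later block
// becomes an assistant turn, matching the session format above.
function blocksToTurns(blocks) {
  return blocks
    .filter((block) => block.trim().length > 0)
    .map((text, i) => ({
      role: i === 0 ? "user" : "assistant",
      content: text.trim(),
    }));
}

// blocksToTurns(["Q: What is service?", "Service begins with kindness."])
// → [{ role: "user", ... }, { role: "assistant", ... }]
```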
Rebuild the full index:
bash rebuild_distill_index.sh
Manual index build:
ES_DISTILL_INDEX=quo_distill_index \
JSON_DIR=./extracted_sessions \
ELASTICSEARCH_NODE=http://localhost:9200 \
node indexing/index_distill_chunks.js
The indexer:

- creates long semantic chunks of 5000–9000 characters (see the sketch below)
- calls your embedding API (`/api/embeddings`)
- indexes every chunk with its metadata: `title`, `session_date`, `source`, `chunk_index`, and the `embedding` vector
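The exact chunking rules live in `indexing/index_distill_chunks.js`; a simplified sketch of the greedy paragraph-packing idea, using the `CHUNK_MIN`/`CHUNK_MAX` defaults from the configuration table below:

```js
const CHUNK_MIN = Number(process.env.CHUNK_MIN || 5000);
const CHUNK_MAX = Number(process.env.CHUNK_MAX || 9000);

// Pack paragraphs into chunks: keep appending until adding the next
// paragraph would push past CHUNK_MAX, but only emit a chunk once it
// has at least CHUNK_MIN characters. (Simplified; oversized single
// paragraphs are not split here.)
function chunkText(text) {
  const paragraphs = text.split(/\n{2,}/);
  const chunks = [];
  let current = "";
  for (const para of paragraphs) {
    if (current.length >= CHUNK_MIN && current.length + para.length > CHUNK_MAX) {
      chunks.push(current);
      current = "";
    }
    current += (current ? "\n\n" : "") + para;
  }
  if (current) chunks.push(current);
  return chunks;
}
```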
distill_rag supports three complementary search strategies via `search/search_distill_chunks.js`:
BM25: classic lexical search. Good for names, citations, and exact phrases.
const { searchBM25 } = require("./search/search_distill_chunks");
const results = await searchBM25("service to others", 5);
console.log(results);
Vector: semantic similarity using your local embedding model.
const { searchVector } = require("./search/search_distill_chunks");
const results = await searchVector("how to grow spiritually", 5);
console.log(results);
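Internally, a vector search of this kind embeds the query and asks Elasticsearch for the nearest chunks. A sketch assuming ES 8.x kNN search and a query vector produced by the `embedOne()` helper above (the actual query in `search_distill_chunks.js` may differ):

```js
const { Client } = require("@elastic/elasticsearch");

const client = new Client({
  node: process.env.ELASTICSEARCH_NODE || "http://localhost:9200",
});

// Given an already-embedded query vector, fetch the k nearest chunks
// by similarity on the `embedding` field.
async function knnSearch(queryVector, k = 5) {
  const res = await client.search({
    index: process.env.ES_DISTILL_INDEX || "quo_distill_index",
    knn: {
      field: "embedding",
      query_vector: queryVector,
      k,
      num_candidates: k * 10, // wider candidate pool improves recall
    },
  });
  return res.hits.hits;
}
```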
Hybrid: a fusion of BM25 (lexical) and vector (semantic) rankings.
This gives robust results even on noisy or varied datasets.
const { searchHybrid } = require("./search/search_distill_chunks");
const results = await searchHybrid("balance love and wisdom", 5);
console.log(results);
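The fusion used by `searchHybrid` lives in `search_distill_chunks.js`; reciprocal rank fusion (RRF) is one common scheme it could resemble:

```js
// Reciprocal rank fusion: merge two rankings by summing 1 / (k + rank)
// per document. Illustrative only; the toolkit's actual fusion may differ.
// Expects Elasticsearch-style hits carrying an _id.
function rrfFuse(bm25Hits, vectorHits, k = 60) {
  const scores = new Map();
  for (const hits of [bm25Hits, vectorHits]) {
    hits.forEach((hit, rank) => {
      scores.set(hit._id, (scores.get(hit._id) || 0) + 1 / (k + rank + 1));
    });
  }
  // Highest fused score first.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id, score]) => ({ id, score }));
}
```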
You can run searches directly from the terminal:
node search/search_cli.js "service to others"
Specify mode (bm25, vector, hybrid) and k:
node search/search_cli.js "healing catalyst" hybrid 8
node search/search_cli.js "unity" bm25 5
node search/search_cli.js "wisdom" vector 10
This prints the ranked matches for the query.
Run all tests:
npm test
The jest suite covers the core pipeline modules.

To remove build artefacts:
npm run clean
or:
bash cleanup.sh
Config is handled via environment variables:
| Variable | Default | Purpose |
|---|---|---|
| `ELASTICSEARCH_NODE` | `http://localhost:9200` | ES cluster URL |
| `ES_DISTILL_INDEX` | `quo_distill_index` | Target index |
| `EMBED_URL` | `http://localhost:11434/api/embeddings` | Embedding API |
| `EMBED_MODEL` | `mxbai-embed-large` | Embedding model |
| `CHUNK_MIN` | `5000` | Minimum chunk size (characters) |
| `CHUNK_MAX` | `9000` | Maximum chunk size (characters) |
| `JSON_DIR` | (required) | Directory of session JSON |
Apache 2.0 (see LICENSE).
Contributions are welcome: bug fixes, new extractors, support for other embedding backends, indexing strategies, and documentation.

This project is meant to empower people building truth-aligned, service-oriented models. If it helps someone create a clearer dataset or a kinder AI, it's doing its job.