Seamlessly build your personal knowledge graph without compromising on privacy.
CognifyCL is a privacy-conscious, developer-friendly system for turning browser activity into a structured, queryable archive. It captures visited URLs via a Chrome extension, persists them through a local sync bridge, and surfaces them in a Next.js web interface that supports on-device summarization using llama.cpp.
Rather than routing your data through third-party APIs or cloud services, CognifyCL runs everything locally: data collection, storage, visualization, and LLM inference. A fixed category graph provides intuitive structure, while a split-pane layout enables seamless reading and summarization workflows. It's ideal for anyone who wants personal knowledge management, research traceability, or a local-first toolchain for information curation.
- Local-Only Architecture: No cloud services. All processing stays on your machine.
- Automatic Link Capture: Chrome extension records every page you visit.
- Fixed Category Graph: Links are displayed in a predefined graph of topics.
- Split-View Interface: Read content and view summaries side by side.
- On-Demand LLM Summarization: Summaries are generated using a local TinyLlama model.
- Chrome Extension: Captures visited URLs and sends them to a local sync server.
- Sync Bridge (Node.js/Express): Accepts and stores entries (`url`, `title`, `summary`, `category`) in `entries.json`.
- Web Application (Next.js): Displays links in a fixed graph layout with an interactive UI for summaries.
- Summarization Engine: Uses llama.cpp with TinyLlama to run local LLM inference.
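To make the capture flow concrete, the sketch below shows roughly how an extension background script could forward visited pages to the sync bridge. The `/entries` route, the listener shown, and the empty `summary`/`category` defaults are illustrative assumptions, not the repository's actual extension code.

```typescript
// background.ts: illustrative sketch only. Assumes the sync bridge accepts POST /entries on port 3001.
chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  // Record the page once it has finished loading and has a regular http(s) URL.
  if (changeInfo.status !== "complete" || !tab.url?.startsWith("http")) return;

  fetch("http://localhost:3001/entries", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url: tab.url,
      title: tab.title ?? "",
      summary: "",   // filled in later by the summarization engine
      category: "",  // assigned when the link is placed in the fixed category graph
    }),
  }).catch((err) => console.error("Sync bridge unreachable:", err));
});
```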
```bash
git clone https://github.com/PranavPipariya/CognifyCL.git
cd CognifyCL
cd webapp
npm install
cd ../sync-bridge
npm install
```

Inside `CognifyCL`, run:
```bash
mkdir llama-engine
cd llama-engine
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
```

Once you're in `/CognifyCL/llama-engine/llama.cpp`, run:

```bash
mkdir build
cd build
cmake .. -DLLAMA_METAL=ON
cmake --build . --config Release
```

Then go back to the `llama.cpp` directory (`/CognifyCL/llama-engine/llama.cpp`):

```bash
cd ..
```

Then, inside `llama.cpp`, run:

```bash
mkdir -p models/gemma
cd models/gemma
```

Place the model you downloaded in `models/gemma`, and don't forget to rename it to `tinyllama-chat.gguf`.
That's it: the summarization feature will now work.
- Download the TinyLlama model (a quantized GGUF build, for example from TheBloke on Hugging Face).
- Move the downloaded file to: `llama-engine/llama.cpp/models/gemma/tinyllama-chat.gguf`
```bash
cd sync-bridge
npm run start
```

Runs at: http://localhost:3001
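For orientation, a minimal version of the bridge could look like the sketch below: an Express app that appends incoming entries to `entries.json` and serves them back. The `/entries` route name and the file handling are assumptions for illustration; the actual server in `sync-bridge/` may differ.

```typescript
// Sync bridge sketch (assumed shape, not the shipped code): persist entries to entries.json.
import express from "express";
import { readFileSync, writeFileSync, existsSync } from "node:fs";

const FILE = "entries.json";
const app = express();
app.use(express.json());

// Accept one captured link: { url, title, summary, category }.
app.post("/entries", (req, res) => {
  const entries = existsSync(FILE) ? JSON.parse(readFileSync(FILE, "utf8")) : [];
  entries.push({
    url: req.body.url,
    title: req.body.title ?? "",
    summary: req.body.summary ?? "",
    category: req.body.category ?? "",
  });
  writeFileSync(FILE, JSON.stringify(entries, null, 2));
  res.status(201).json({ saved: entries.length });
});

// Serve the stored entries back to the web app.
app.get("/entries", (_req, res) => {
  res.json(existsSync(FILE) ? JSON.parse(readFileSync(FILE, "utf8")) : []);
});

app.listen(3001, () => console.log("Sync bridge listening on http://localhost:3001"));
```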
```bash
cd ../webapp
npm run dev
```

Runs at: http://localhost:3000
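On the dashboard side, the web app needs to read the stored entries back. One way to do that, assuming the bridge exposes them over a `GET /entries` route as sketched above (the repository may instead read `entries.json` directly), is a small fetch helper:

```typescript
// lib/entries.ts: hypothetical helper for the dashboard to load captured links.
export interface Entry {
  url: string;
  title: string;
  summary: string;
  category: string;
}

// Assumes the sync bridge serves the stored entries at GET /entries.
export async function loadEntries(): Promise<Entry[]> {
  const res = await fetch("http://localhost:3001/entries", { cache: "no-store" });
  if (!res.ok) throw new Error(`Sync bridge returned ${res.status}`);
  return res.json();
}
```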
- Open `chrome://extensions/`
- Enable Developer Mode
- Click "Load unpacked"
- Select the `extension/` directory
- As you browse, the extension captures each link.
- Visit http://localhost:3000/dashboard to explore your history.
- The dashboard presents a fixed category graph layout.
- Click a category to view the links under it.
- Click a link to open a split view:
  - Right: the full web page
  - Left: a panel of category cards
- Click a card to trigger a local LLM summary via TinyLlama.
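Behind that click, the summary has to come from the local llama.cpp build. A rough sketch of how the web app could shell out to it is shown below; the binary path (`build/bin/llama-cli`), the prompt wording, and the flags are assumptions based on a stock llama.cpp build and may not match the repository's actual integration.

```typescript
// summarize.ts: illustrative sketch of calling a local llama.cpp binary (paths and flags are assumptions).
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

const BIN = "llama-engine/llama.cpp/build/bin/llama-cli"; // produced by the cmake build above (name assumed)
const MODEL = "llama-engine/llama.cpp/models/gemma/tinyllama-chat.gguf";

export async function summarize(pageText: string): Promise<string> {
  const prompt = `Summarize the following page in a few sentences:\n\n${pageText}\n\nSummary:`;
  const { stdout } = await run(BIN, [
    "-m", MODEL,   // model file to load
    "-p", prompt,  // one-shot prompt
    "-n", "256",   // cap the number of generated tokens
    "--temp", "0.7",
  ]);
  return stdout.trim();
}
```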
- LLM not responding:
  - Verify the model is named `tinyllama-chat.gguf` and is located at: `llama-engine/llama.cpp/models/gemma/tinyllama-chat.gguf`
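A quick way to rule out a path problem is to check for the file directly. The snippet below is just a convenience check run from the repository root, not part of the project.

```typescript
// check-model.ts: run from the CognifyCL root to confirm the model file is where the app expects it.
import { existsSync, statSync } from "node:fs";

const MODEL = "llama-engine/llama.cpp/models/gemma/tinyllama-chat.gguf";

if (existsSync(MODEL)) {
  console.log(`Found model (${(statSync(MODEL).size / 1e9).toFixed(2)} GB) at ${MODEL}`);
} else {
  console.error(`Missing: ${MODEL}. Check the file name and directory.`);
}
```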
- llama.cpp — lightweight, high-performance local inference engine
- Next.js — full-stack React framework for the frontend
- TheBloke — provider of quantized TinyLlama GGUF models