Plan migration to local Ollama embedding model #4

Open
opened 2026-03-21 00:12:39 +01:00 by martin · 0 comments
Owner

Once the LLM server (RTX 3060, Ollama) is stable, plan migration from Copilot `text-embedding-3-small` to a locally-served model (e.g. `nomic-embed-text`). Must fit in 16GB VRAM alongside Qwen2.5-14B.
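A rough sketch of what the migration target could look like, assuming Ollama's default port (11434) and its `/api/embeddings` endpoint with `nomic-embed-text` already pulled. Note that embeddings from different models live in different vector spaces, so any stored `text-embedding-3-small` vectors would need to be re-embedded rather than mixed; the cosine helper below is for sanity-checking retrieval quality after the switch. Uses only the standard library:

```python
import json
import math
import urllib.request

# Assumed Ollama defaults; adjust host/port for the actual LLM server.
OLLAMA_URL = "http://localhost:11434/api/embeddings"


def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Request an embedding vector from a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors, for comparing retrieval results
    before and after the model swap (scores are not directly comparable
    across models, but relative rankings can be)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

The memory concern in the issue likely resolves itself: `nomic-embed-text` is a small model relative to a 14B-parameter LLM, though actual VRAM headroom should be verified on the 3060 with both loaded.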
martin added this to the Phase 0 — Infrastructure Bootstrap milestone 2026-03-21 00:12:39 +01:00
martin added the infra and embedding labels 2026-03-21 00:12:39 +01:00
martin added this to the Road to Pompeo project 2026-03-21 00:18:29 +01:00
martin self-assigned this 2026-03-21 19:37:01 +01:00
martin removed their assignment 2026-03-21 20:39:43 +01:00
martin self-assigned this 2026-03-21 20:40:02 +01:00

Reference: martin/Alpha#4