

Look into setting up the “Continue” plugin in VS Code. It supports an Ollama backend and can even do embeddings if set up correctly, which means it will try to select relevant files itself based on your question; that helps keep the prompt size down. Here is a link to get started; you might need to choose smaller models with your card.
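For reference, the Ollama-related parts of Continue’s config.json look roughly like this. Treat it as a sketch: the exact keys depend on your Continue version, and the model names here (llama3:8b, qwen2.5-coder:1.5b, nomic-embed-text) are just examples of models you could pull with Ollama, not requirements, so check the Continue docs for the current schema.

```json
{
  "models": [
    {
      "title": "Example local chat model",
      "provider": "ollama",
      "model": "llama3:8b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Example autocomplete model",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  },
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```

The embeddingsProvider block is the part that lets Continue index your workspace locally and pick files relevant to your question; with a smaller GPU you would swap in smaller chat and autocomplete models.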
Openly available traffic data that follows a reliable standard.