thickertoofan@lemm.ee to LocalLLaMA@sh.itjust.works · English · 17 days ago
Soon you will be able to run LLMs natively in Docker (www.docker.com) · 1 comment
thickertoofan@lemm.ee (OP) to LocalLLaMA@sh.itjust.works • "SpatialLM, a 1B model capable of spatial identification, using 3d point cloud data. The video demo is amazing." · English · 17 days ago
I think the bigger bottleneck is SLAM: running it is compute-intensive, and the model won't run directly on video. SLAM is the hard part, I guess, and reading the repo gives no clues that it can run with CPU inference.
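For context, the pipeline the comment describes would look roughly like this. A minimal sketch: `run_slam` is a hypothetical stand-in, not an API from the SpatialLM repo, and a real setup would use an actual SLAM system to produce the point cloud before the 1B model ever gets involved.

```python
# Sketch of the video -> SLAM -> point cloud -> SpatialLM pipeline the
# comment describes. run_slam() is hypothetical; a real setup would use
# a SLAM system, which is the compute-heavy bottleneck in question.
import numpy as np

def run_slam(video_path: str) -> np.ndarray:
    # A real implementation reconstructs camera poses and an (N, 3)
    # point cloud from the video frames. A random cloud stands in here
    # so the sketch runs end to end.
    rng = np.random.default_rng(0)
    return rng.uniform(-5.0, 5.0, size=(10_000, 3))

def main() -> None:
    points = run_slam("room_walkthrough.mp4")  # the expensive step
    # Only once the 3D points exist can the 1B model be prompted with
    # them for spatial identification; the model never sees raw video.
    print(f"Reconstructed {points.shape[0]} points; ready as SpatialLM input")

if __name__ == "__main__":
    main()
```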
thickertoofan@lemm.ee to LocalLLaMA@sh.itjust.works · English · 17 days ago
SpatialLM, a 1B model capable of spatial identification, using 3d point cloud data. The video demo is amazing. (manycore-research.github.io) · 5 comments
thickertoofan@lemm.ee (OP) to LocalLLaMA@sh.itjust.works • "Microsoft KBLAM" · English · 18 days ago
There is a repo they released.
thickertoofan@lemm.ee to LocalLLaMA@sh.itjust.worksEnglish · 19 days agoMicrosoft KBLAMplus-squarewww.microsoft.comexternal-linkmessage-square3fedilinkarrow-up11arrow-down10
arrow-up11arrow-down1external-linkMicrosoft KBLAMplus-squarewww.microsoft.comthickertoofan@lemm.ee to LocalLLaMA@sh.itjust.worksEnglish · 19 days agomessage-square3fedilink
thickertoofan@lemm.ee (OP) to LocalLLaMA@sh.itjust.works • "Loaded benchmark for 1-3-4-7b models?" · English · 20 days ago
I checked out most of the models from the list, but 1B models are generally unusable for RAG.
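To illustrate the kind of workload that trips up 1B models, here is a minimal RAG-style sketch, assuming a local Ollama server at its default address with the gemma3:1b tag pulled; the retrieved passages are placeholders for a real vector-store lookup.

```python
# Minimal RAG-style prompt against a local Ollama server (assumed at the
# default http://localhost:11434). The retrieved passages are placeholders;
# a real pipeline would fetch them from a vector store.
import json
import urllib.request

retrieved = [
    "Passage 1: placeholder retrieved context.",
    "Passage 2: more placeholder context.",
]
prompt = (
    "Answer using only the context below.\n\n"
    + "\n".join(retrieved)
    + "\n\nQuestion: What does the context say?"
)
payload = json.dumps(
    {"model": "gemma3:1b", "prompt": prompt, "stream": False}
).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

The failure mode the comment points at is that small models tend to ignore the "use only the context" instruction or garble the retrieved facts, even when retrieval itself worked.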
thickertoofan@lemm.ee (OP) to LocalLLaMA@sh.itjust.works • "Loaded benchmark for 1-3-4-7b models?" · English · 22 days ago
I use Page Assist with Ollama.
thickertoofan@lemm.ee to LocalLLaMA@sh.itjust.works · English · 23 days ago
Loaded benchmark for 1-3-4-7b models? (text post) · 4 comments
thickertoofan@lemm.ee to LocalLLaMA@sh.itjust.works · English · 25 days ago
Gemma 3 1B and 3B result on a "needle in a haystack"-like test run locally (text post) · 0 comments
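The post doesn't include its methodology, so here is a hedged sketch of what a "needle in a haystack"-style check could look like locally, again assuming Ollama with the gemma3:1b tag; the filler text, needle string, and context size are made up for illustration.

```python
# Sketch of a "needle in a haystack"-style retrieval check: a needle fact
# is buried in filler text and the model must recall it. All specifics
# here are assumed, not taken from the post.
import json
import urllib.request

filler = "The market was busy and the weather was mild that day. " * 100
needle = "The secret launch code is PELICAN-42."
haystack = filler + needle + " " + filler

prompt = haystack + "\n\nWhat is the secret launch code?"
payload = json.dumps(
    {
        "model": "gemma3:1b",
        "prompt": prompt,
        "stream": False,
        # Raise the context window so the haystack isn't truncated.
        "options": {"num_ctx": 8192},
    }
).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())["response"]
print("PASS" if "PELICAN-42" in answer else "FAIL", "-", answer[:120])
```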
thickertoofan@lemm.ee to Fediverse@lemmy.world • "Post promoting lemm.ee hits 67,000 views and 1000 upvotes in 3 hours." · English · 27 days ago
Welcome