• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: September 14th, 2023


  • ralakus@lemmy.world to 3DPrinting@lemmy.world · 3D Printing is Fun! (edited · 3 months ago)

    I had a massive blob like this one time when the nozzle clogged and the extruder built up enough pressure to push filament through the threads of the hotend block. It was on an Anet A8, and I ripped a lead off the thermistor trying to get the plastic off, so I ended up replacing the entire hotend.

    You can try heating the hotend to a fair bit under the filament’s melting point, to where the plastic is soft and somewhat pliable but not runny or sticky, and then peeling it off. Be careful, though: you risk damaging the leads to the thermistor or heater, or burning your hands.

    Good luck fixing the printer and getting back to printing. 3D printing is a really time-consuming hobby.


  • If you’re using an LLM, you should constrain the output via a grammar to something like JSON, JSONL, or CSV so you can load it into scripts and validate that the generated data matches the source data (see the sketch below). Though at that point you might as well just parse the raw data and do it yourself. If I were you, I’d honestly use something like pandas/polars or even Excel to get it done reliably, without people bashing you for using the forbidden technology, even if you can 100% confirm that the data is real and not hallucinated.
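
    That validation step is the whole point of constraining the output. A minimal sketch of it, assuming the model emits JSONL records sharing id/value fields with the source CSV (the file names and column names here are hypothetical):

    ```python
    # Cross-check LLM-extracted records against the source data with pandas.
    # "source.csv", "llm_output.jsonl", "id", and "value" are placeholders.
    import json
    import pandas as pd

    source = pd.read_csv("source.csv").set_index("id")

    mismatches = []
    with open("llm_output.jsonl") as f:
        for line in f:
            rec = json.loads(line)  # grammar-constrained output always parses
            if rec["id"] not in source.index:
                mismatches.append((rec["id"], "id not in source"))
            elif source.loc[rec["id"], "value"] != rec["value"]:
                mismatches.append((rec["id"], "value differs from source"))

    print(f"{len(mismatches)} suspect records")  # nonzero -> possible hallucination
    ```

    If every record checks out against the source, you’ve actually confirmed the data instead of just trusting the model.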

    I also wouldn’t use any cloud LLM solution like OpenAI, Gemini, Grok, etc., since those can change under you, are really hard to validate, and give you little to no control over the model. I’d recommend a local setup instead: run an open-weight model like Mistral Nemo 2407 Instruct locally using llama.cpp or vLLM, since the entire setup will not change unless you manually go in and change something. We use a custom finetuned version of Mixtral 8x7B Instruct at work in a research setting and it works very well for our purposes (translation and summarization), despite what critics think.
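
    For example, here’s roughly what that looks like with llama-cpp-python, which exposes llama.cpp’s GBNF grammars from Python. The model file, prompt, and output schema are placeholders, not a recommendation:

    ```python
    # Grammar-constrained extraction with a local model via llama-cpp-python.
    from llama_cpp import Llama, LlamaGrammar

    # Force the output to be a single flat JSON object so it always parses.
    GBNF = r'''
    root  ::= "{\"id\":" num ",\"value\":\"" chars "\"}"
    num   ::= [0-9]+
    chars ::= [^"]*
    '''

    llm = Llama(model_path="mistral-nemo-instruct.gguf", n_ctx=4096)  # hypothetical file
    grammar = LlamaGrammar.from_string(GBNF)

    out = llm.create_completion(
        "Extract the record id and value from: 'item 42 is labeled foo'.\n",
        grammar=grammar,
        max_tokens=64,
        temperature=0,  # same setup in, same output out
    )
    print(out["choices"][0]["text"])  # e.g. {"id":42,"value":"foo"}
    ```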

    Tl;dr: Use pandas/polars if you want 100% reliability (human error not accounted for). LLMs require a lot of work to get reliable output from.

    Edit: There’s a lot of misunderstanding around LLMs. You’re not supposed to use a bare LLM for any task except extremely basic ones that could be done by hand better. You need to build a system around it for your specific purpose. Using a raw LLM without a Retrieval Augmented Generation (RAG) system and complaining about hallucinations is like using the bare-ass Linux kernel and complaining that you can’t use it as a desktop OS. Of course an LLM will hallucinate like crazy if you give it no data. If someone told you to write a 3-page paper on the eating habits of 14th-century monarchs in Europe and locked you in a room with absolutely nothing except 3 pages of paper and a pencil, you’d probably write something not completely accurate. However, if you had access to the internet and a few databases, you could write something really good and accurate. LLMs are exceptionally good at summarization and translation, but you have to give them data to work with first. A rough sketch of the idea is below.
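
    A minimal sketch of the RAG idea, using plain TF-IDF retrieval for simplicity. The documents are toy data, and the assembled prompt stands in for whatever you’d actually send to the model:

    ```python
    # Retrieve relevant text first, then hand it to the model as context.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "14th-century European monarchs dined on game, bread, and spiced wine.",
        "The printing press was invented in the 15th century.",
        "Medieval banquets served multiple courses to the high table first.",
    ]
    question = "What did 14th century monarchs in Europe eat?"

    vec = TfidfVectorizer()
    doc_vecs = vec.fit_transform(docs)
    scores = cosine_similarity(vec.transform([question]), doc_vecs)[0]
    top = [docs[i] for i in scores.argsort()[::-1][:2]]  # two most relevant chunks

    prompt = "Answer using only this context:\n" + "\n".join(top) + f"\n\nQ: {question}\nA:"
    print(prompt)  # this prompt, not the bare question, is what goes to the LLM
    ```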