What can I use for an offline, self-hosted LLM client, pref with images, charts, Python code execution?
from catty@lemmy.world to selfhosted@lemmy.world on 13 Jun 06:44
https://lemmy.world/post/31307176

I was looking back at some old Lemmy posts and came across GPT4All. Didn’t get much sleep last night as it’s awesome, even on my old (10yo) laptop with a Compute 5.0 NVIDIA card.

Still, I’m after more. I’d like to get image generation and view it in the conversation, and if it generates Python code, to be able to run it (I’m using Debian, and have a default Python env set up). Local file analysis would also be useful. CUDA Compute 5.0 / Vulkan compatibility is needed too, with the option to use some of the smaller models (1-3B, for example). A local API would also be nice for my own Python experiments.

Is there anything that can tick the boxes? Even if I have to scoot across models for some of the features? I’d prefer more of a desktop client application than a docker container running in the background.

#selfhosted


just_another_person@lemmy.world on 13 Jun 06:55 next collapse

Would you like fries or a jetpack with that?

bjoern_tantau@swg-empire.de on 13 Jun 07:09 next collapse

!localllama@sh.itjust.works

andrew0@lemmy.dbzer0.com on 13 Jun 07:20 next collapse

Ollama for the API, which you can integrate into Open WebUI. You can also integrate image generation with ComfyUI, I believe.

It’s less of a hassle to use Docker for Open WebUI, but Ollama works as a regular CLI tool.
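
Since you mentioned wanting a local API for your own Python experiments: Ollama listens on localhost:11434 by default, so something like this should work (a rough sketch; the model name is just an example and needs to be pulled first):

```python
import requests

# Ollama's HTTP API listens on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:1b",  # example; use whatever you've fetched with `ollama pull`
        "prompt": "Write a one-line list comprehension that squares 1..10",
        "stream": False,  # single JSON response instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```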

O_R_I_O_N@lemm.ee on 13 Jun 07:29 next collapse

Chainlit is a super easy UI too. Ollama works well with Semantic Kernel (for integration with existing code) and LangChain (for agent orchestration). I’m working on building MCP interaction with ComfyUI’s API; it’s a pain in the ass.
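
A Chainlit front-end talking to a local Ollama model is only a few lines; a minimal sketch (the model name is just an example; run it with `chainlit run app.py`):

```python
import chainlit as cl
import ollama  # Ollama's Python client, talking to the local daemon

@cl.on_message
async def on_message(message: cl.Message):
    # Forward the user's message to a locally pulled model and echo the reply.
    reply = ollama.chat(
        model="llama3.2:1b",  # example; any model you've pulled
        messages=[{"role": "user", "content": message.content}],
    )
    await cl.Message(content=reply["message"]["content"]).send()
```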

muntedcrocodile@hilariouschaos.com on 14 Jun 01:23 next collapse

This is what I do; it’s excellent.

catty@lemmy.world on 14 Jun 07:43 collapse

But won’t this be a mish-mash of different Docker containers and projects, creating an installation, dependency, and upgrade nightmare?

andrew0@lemmy.dbzer0.com on 14 Jun 15:22 collapse

All the ones I mentioned can be installed with pip or uv if I am not mistaken. It would probably be more finicky than containers that you can put behind a reverse proxy, but it is possible if you wish to go that route. Ollama will also run system-wide, so any project will be able to use its API without you having to create a separate environment and download the same model twice in order to use it.
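
For instance, any script can talk to the system-wide daemon through the pip-installable client, without its own environment or its own copy of the model (a sketch; the model name is just an example):

```python
import ollama  # pip install ollama (or: uv pip install ollama)

# The daemon runs system-wide, so every project talks to the same instance
# and reuses the models that have already been pulled -- no duplicate downloads.
print(ollama.list())  # shows whatever `ollama pull` has already fetched

reply = ollama.generate(
    model="llama3.2:1b",  # example; any model you've already pulled
    prompt="Say hi in one word",
)
print(reply["response"])
```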

breadsmasher@lemmy.world on 13 Jun 07:24 next collapse

AUTOMATIC1111?

github.com/AUTOMATIC1111/stable-diffusion-webui
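
It’s more of a web UI than a desktop app, but if you launch it with the --api flag it also exposes a local HTTP endpoint you can call from Python; a rough sketch (fields from memory, double-check against the repo’s API docs):

```python
import base64
import requests

# Assumes stable-diffusion-webui was started with --api (default address 127.0.0.1:7860).
resp = requests.post(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    json={"prompt": "a watercolor fox", "steps": 20},
    timeout=600,
)
# Images come back base64-encoded; save the first one.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```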

catty@lemmy.world on 13 Jun 08:28 next collapse

I’ve discovered jan.ai, which is far faster than GPT4All and visually a little nicer.

EDIT: After using it for an hour or so, it seems to crash all the time. I keep having to reset it, and it’s currently freezing for no reason.

otacon239@lemmy.world on 13 Jun 08:35 next collapse

I also started using this recently and it’s very plug and play. Just open and run. It’s the only client so far that feels like something I could recommend to non-geeks.

catty@lemmy.world on 13 Jun 08:52 collapse

I agree. It looks nice, explains the models fairly well, hides the model settings away nicely, and even recommends some low-requirement models to get started with. I like the concept of plugins, but I haven’t yet found a way to e.g. run the Python code it creates and display the output in the window.
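
In the meantime, a crude way to get that “run the code it writes and show the output” loop is a small wrapper script (just a sketch - obviously only run code you’ve read, or sandbox it):

```python
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: int = 30) -> str:
    """Write model-generated Python to a temp file, run it, and return its output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout + result.stderr

# Example: feed in whatever snippet the model produced.
print(run_generated_code("print(sum(range(10)))"))
```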

voidspace@lemmy.world on 13 Jun 13:42 collapse

It took ages to produce an answer, only worked once on one model, and has crashed ever since.

catty@lemmy.world on 13 Jun 14:15 collapse

Try the beta on the GitHub repo, and use a smaller model!

ViatorOmnium@piefed.social on 13 Jun 08:38 next collapse

The main limitation is the VRAM, but I doubt any model is going to be particularly fast.

I think phi3:mini on Ollama might be an okay-ish fit for Python, since it's a small model but was trained on Python codebases.

catty@lemmy.world on 13 Jun 08:54 collapse

I’m getting very near real-time on my old laptop. Maybe a delay of 1-2 s while it creates the response.

hendrik@palaver.p3x.de on 13 Jun 11:13 next collapse

Maybe LocalAI? It doesn't do Python code execution, but pretty much all of the rest.
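
Its main draw is that it exposes an OpenAI-compatible API locally, so the standard openai Python client works against it; roughly like this (default port 8080, and the model name depends on what you've configured):

```python
from openai import OpenAI

# LocalAI serves an OpenAI-compatible API, by default on port 8080.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="my-local-model",  # whatever model name you configured in LocalAI
    messages=[{"role": "user", "content": "Hello from my old laptop!"}],
)
print(resp.choices[0].message.content)
```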

catty@lemmy.world on 13 Jun 14:44 collapse

This looks interesting - do you have experience with it? How reliable / efficient is it?

hendrik@palaver.p3x.de on 13 Jun 17:16 next collapse

I think many people use it and it works. But sorry - no, I don't have any first-hand experience. I've tested it for a bit and it looked fine. It has a lot of features, and it should be as efficient as any other ggml/llama.cpp-based inference solution, at least for text. I myself use KoboldCPP for the few things I do with AI, and my computer lacks a GPU, so I don't really do a lot of images with software like this. Image generation will likely take you less than the 15 minutes it takes me on my ill-suited machine.

mitexleo@buddyverse.one on 14 Jun 00:10 collapse

LocalAI is pretty good but resource-intensive. I ran it on a VPS in the past.

TMP_NKcYUEoM7kXg4qYe@lemmy.world on 13 Jun 11:53 next collapse

You can tell Open Interpreter to run commands based on your human-language input. If you want a local-only LLM, you can pair it with Ollama. It works for “interactive” use, where you’re asked for confirmation before a command is run.
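
Pointing it at a local Ollama model from Python looks roughly like this (a sketch; the model name is just an example):

```python
from interpreter import interpreter

# Use a local Ollama model instead of a hosted API.
interpreter.offline = True
interpreter.llm.model = "ollama/llama3.2:1b"  # example; any model you've pulled
interpreter.llm.api_base = "http://localhost:11434"  # Ollama's default address
interpreter.auto_run = False  # keep the confirmation prompt before commands run

interpreter.chat("List the five largest files in my home directory")
```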

I set this up in a VM because I wanted a fully automatic coding “agent” which can run commands without my intervention, and I did not want it to blow up my main system. It did not really work, though, because as far as I know Open Interpreter does not have a way to “pipe” a command’s output back into the LLM so that it could create a feedback loop with linters and stuff.

Another issue was that Starcoder2, the only LLM I could find that was trained on permissively licensed code, only has a 15B “human-like” model. The smaller models only speak code, so I don’t know how that would work for agentic usage, and the 15B is really slow running on a DDR4 CPU. I think agents are cool, though, so I would like to try Aider, which is supposedly a good open-source agent and, unlike Open Interpreter, is not abandonware.

Thanks for coming to my blabbering talk; hope this might be useful for someone.

mitexleo@buddyverse.one on 14 Jun 00:08 next collapse

You should try cherry-ai.com … It’s the most advanced client out there. I personally use Ollama for running the models and the Mistral API for advanced tasks.

mitexleo@buddyverse.one on 14 Jun 00:09 next collapse

It’s fully open source and free (as in beer).

catty@lemmy.world on 14 Jun 07:42 collapse

But its website is in Chinese. Also, what’s the GitHub?

happinessattack@lemmy.world on 14 Jun 16:27 collapse