Things I couldn’t find elsewhere


Create your own locally hosted family AI assistant

What you’re seeing in this picture is a screenshot from our “family chat”. It’s a locally hosted Matrix server, with Element clients on all the computers, phones and tablets in the family. Fully end-to-end encrypted, of course - why should our family discussions end up with some external party? You’re also seeing “Karen”, our family AI, taking part in the discussion with some helpful input when prompted. Karen is based on the LLaMa 13B 4-bit GPTQ locally hosted LLM (Large Language Model) I mentioned in a previous post.

AI · English · LLaMa · LLM · Matrix · Uncategorized

3 minutes

The delta between an LLM and consciousness

With Facebook’s release of LLaMa, and the subsequent work done on its models by the open community, it’s now possible to run a state-of-the-art “GPT-3 class” LLM on regular consumer hardware. The 13B model, quantized to 4-bit, runs fine on a GPU with ~9GB of free VRAM. I spent 40 minutes chatting with one yesterday, and the experience was almost flawless. So why is Troed playing around with locally hosted LLM “chatbots”?
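The ~9GB figure checks out with some back-of-the-envelope arithmetic: 13 billion parameters at 4 bits each is roughly 6 GiB for the weights alone, with the rest consumed by activations, the KV cache and framework overhead. A minimal sketch (the overhead split is my assumption, not a measurement from the post):

```python
# Rough VRAM estimate for a quantized LLM.
# Only the weight term is exact arithmetic; everything on top of it
# (activations, KV cache, CUDA context) varies by framework and context length.

def quantized_weight_gib(params_billion: float, bits: int) -> float:
    """GiB needed just to hold the weights at the given bit width."""
    total_bytes = params_billion * 1e9 * bits / 8
    return total_bytes / 2**30

weights = quantized_weight_gib(13, 4)
print(f"13B @ 4-bit weights: {weights:.1f} GiB")  # ~6.1 GiB, leaving headroom under ~9GB
```

The same function explains why the full-precision model is out of reach for consumer cards: at 16 bits the weights alone need ~24 GiB.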

AGI · AI · English · LLM · Transhumanism · Uncategorized

3 minutes