Things I couldn’t find elsewhere


Create your own locally hosted family AI assistant

What you’re seeing in this picture is a screenshot from our “family chat”. It’s a locally hosted Matrix server, with Element clients on all the computers, phones and tablets in the family. Fully end-to-end encrypted, of course: why should our family discussions end up with some external party? You’re also seeing “Karen”, our family AI, taking part in the discussions with helpful input when prompted. Karen is based on the locally hosted LLaMA 13B 4-bit GPTQ LLM (Large Language Model) I mentioned in a previous post.

AI · English · LLaMa · LLM · Matrix
