I broadly agree with Elliot.
Local LLMs are great if you can afford the hardware to run them. Anything you can run on a CPU is either dumb as rocks or has super limited context. Apparently some of the higher end Macs work well (like mac studio level) but assuming you’re on a budget, I haven’t found them to be that usable. Other local AI models (not LLMs) like whisper for transcription are great. you can also run decent TTS models locally, or image gen.
OpenClaw is… I think OpenClaw is bad. The idea isn’t necessarily, but I tried OpenClaw specifically and it is slow, unresponsive, buggy, and all the magic stuff people were posting about at the end of Jan didn’t happen for me. I needed to direct it a lot more (when I was running an actual OpenClaw instance), which seemed pretty common. It seems like a lot of people have now realized that the prompting was more the magic of openclaw than the software itself (so stuff like moltbook was more like a weird game of chinese whispers).
Agents are useful enough for stuff like openclaw to be useful – though use a better harness; every vibe coder and their dog has one, like here’s mine that i haven’t released. you can look up nanoclaw and picoclaw and nullcalw and coleslaw… (there’s so many but I made coleslaw up, idk if anyone made one with that name yet).
WRT security, personal AI assistants sit in the middle of a bunch of red flags: unrestricted access to personal data, processing unsanitized data, internet access, plaintext credentials, bad inspection/telemetry (it’s meant to mostly hide that from you), etc.
Did you see this story? This kinda sums up how well they work if you let them run wild.