Hacker News | johntash's comments

> You need all those speakers? One for each room?

Not op, but one reason I have a speaker in _almost_ every room is for notifications. Simple stuff like "the garage door is still open" or weather alerts, etc. I rarely actually play music on all of them.
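As a sketch: in Home Assistant (assuming that's the setup; the entity IDs and speaker names below are made up for illustration) that kind of announcement is a small automation:

```yaml
# Hypothetical example: announce on the kitchen speaker if the garage
# door has been open for 15 minutes. Entity IDs are placeholders.
automation:
  - alias: "Garage door still open"
    trigger:
      - platform: state
        entity_id: cover.garage_door
        to: "open"
        for: "00:15:00"
    action:
      - service: tts.google_translate_say
        data:
          entity_id: media_player.kitchen_speaker
          message: "The garage door is still open."
```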


> what is it with C level executives and AI?

My assumption is that they are extremely excited about AI, because they are also extremely excited about being able to reduce their workforce while expecting more output from smaller teams.


I think that most of them are fundamentally bullshitters. Not all, but bullshitting about things you don't know is what allows you to raise money and look confident.

LLMs are bullshitters too. These executives assume everybody else does the same, so to them an LLM appears to do everything they think everyone else does.


If you haven't used it before, give it a shot. Worst case you waste a few years of your life.

Doom emacs and Spacemacs are both good starter kits to give you an idea of what you could do.


> Worst case you waste a few years of your life.

Yeah, come on. NBD yo.

;-)


Matter more to who though? Probably not the people making most of the profits :)

I sorta get that reasoning, but is a 24 hour cooldown really going to stop scammers? They're already used to multi-day scams, so wouldn't they just say they'll call back in a day to finish the process?

Yup. The specific scam here is built upon preventing the victim from talking to trusted individuals. A cooldown breaks the spell.

Complex, multi-day pig butchering stuff is not what Google is going after here, nor would they have any hope of defeating it. But they can deal with banking malware.


I like that this is an irc->matrix bridge instead of a matrix->irc bridge. I honestly don't use either much anymore, but I'll probably give it a try.

I didn't know irssi was still maintained in 2026. There hasn't been a release in a while, but there are at least a few recent-ish commits. Has everyone moved on from irssi to weechat?


> apply my database migrations and populate the database with static datasets before I deploy my application

You could a) have the app acquire a lock in the db and do its own migrations, or b) create a k8s job that runs the migration tool, but make sure the app waits for the schema to be updated or at least won't do anything bad.
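A rough sketch of option b (the image name, migration tool, and version check are all hypothetical):

```yaml
# Hypothetical sketch: a Job applies migrations before the app rolls out.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp:1.2.3            # placeholder image
          command: ["./migrate", "up"]  # placeholder migration tool
---
# And in the app's Deployment, an init container blocks startup until
# the schema is current (again, the wait command is made up):
#
#   initContainers:
#     - name: wait-for-schema
#       image: myapp:1.2.3
#       command: ["./migrate", "wait", "--min-version=42"]
```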


> https://depsguard.com if anyone is interested.

I really appreciate that you didn't include a "curl | bash" command to paste for installing, but at the same time it's what I was expecting when I clicked.

I'm pretty sure I saw a comment on HN where the user wrapped all of their npm/pip/etc commands with bubblewrap. I've been thinking of doing something similar and basically just seeing how many of my daily commands I can sandbox. My hunch is that _most_ of them don't need to operate outside the current directory and don't need internet access.
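For anyone curious, the bubblewrap version of that is pretty short. This is just a sketch of the idea, not a hardened sandbox, and the rootfs binds will vary by distro:

```shell
# Run a command with no network access and only the current directory
# writable, using standard bwrap flags.
sandboxed() {
  bwrap \
    --ro-bind /usr /usr \
    --ro-bind /etc /etc \
    --symlink usr/bin /bin \
    --symlink usr/lib /lib \
    --proc /proc \
    --dev /dev \
    --unshare-net \
    --bind "$PWD" "$PWD" \
    --chdir "$PWD" \
    "$@"
}

# Usage (hypothetical): sandboxed npm install
#                       sandboxed pip install -r requirements.txt
```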


Yeah, I really wanted to avoid anything "| bash"... appreciate you noticed! (although downloading any binary is also risky, I think just making this a standalone python script would be a better idea for the next version)

How do you keep llms from writing _too much_? I've built a few similar tools and systems, and it's way too easy for the llm to just keep documenting things until the whole system is a mess and becomes less useful the bigger it gets.

One example experiment I had was seeing if I could get a llm to build its own knowledge wiki where I would paste a few links and have it go do some research on whatever the subject was, then distill what it finds into specific wiki pages with links to other pages or the source refs. It looked good until you read the actual data.

This was a few years ago though, so maybe it's worth trying again with something like opus 4.7.


Do you use local models for these, or are you okay with giving private details to anthropic/openai?

(that's one of my biggest hurdles for really adopting any useful assistant type of agent)


I want to use local LLMs, and in fact I have enough VRAM (12GB) and RAM (96GB) to do it but I gave up because it was pretty buggy with the Gemma 4 26B (A4B?) Q4 models. It also meant I had to give up local voice transcription because I needed all my VRAM dedicated to the LLM.

The other thing is I will ask an agent via Telegram to code stuff, so I want an agent that is smart enough to do it all. I prefer brute forcing with money right now. I hate it when LLMs make bizarre mistakes; I end up spending way too much time figuring out the issue.

I use Openrouter, so hopefully no one has built a perfect replica of me in their storage. I flip between models too.

But to be clear, I am living dangerously with agentic workflows in general. Haven't been burnt yet (other than accidentally running up a huge Gemini bill which made me switch to Codex Oauth and Openrouter for cheap Minimax 2.7)

I am moving to a commander/orchestrator model to use both frontier and cheap models and eventually a better local LLM once I buy a 5070 Ti, 3090, 64GB Mac M1 Max, 128GB Strix Halo (probably missed that train) or the AMD R9700.

