Expose the API to the outside world using Tailscale. Run your Telegram bot on n8n or windmill.dev. You can absolutely use an LLM; both n8n and windmill.dev support agentic AI workflows. Google "n8n LLM telegram bot" and you'll find tons of examples.
> Very honest question: One of the use cases I had with OpenClaw that I'm missing now that I don't use it: I could tell it (via Telegram) to add something to my TODO list at home while I'm in the office. It would call a custom API I had set up that adds items to my TODO list. How can I replicate this without the hassle of setting up OpenClaw? How would you do it?
What you are looking for is an orchestration platform such as n8n or windmill.dev. You can still have a Telegram bot and still use an LLM for natural-language interaction, but it's much more controlled than OpenClaw. I do exactly what you describe: adding todos to my Todoist account from Telegram.
When I meet people in VR who are ESL, I can tell from their accent and mannerisms whether they learned English by playing video games with Westerners or by watching a lot of YouTube.
Do we really want to dilute the uniqueness of language by making everyone sound like they came out of a lab in California?
>Do we really want to dilute the uniqueness of language
I can't speak to whether it's desirable or not, but this has been happening since the advent of radio, movies, and television, for over a century. So, are we worse off now, linguistically speaking, than then? Do we really even notice missing accents if we never grew up with them?
Language learning also works fine without faked emotionality. It depends more on authentic speech recognition (you want the model to notice when you mispronounce important words, not gloss over it and keep babbling, because otherwise that will bite you in the ass in the real world) as well as on the system's ability to generate a personal learning curriculum.
Microsoft has come up with some objectively good products for both consumers and developers in the past decade. For consumers, Xbox would be the biggest one; for developers, VS Code, WSL/WSL2, and Azure.
IPv6 adoption will happen overnight when Google Chrome, Android, or iOS starts showing a warning on IPv4-only networks. ISPs and tech companies will get flooded with support calls asking about it and will roll out IPv6 just to make the problem go away.
Chrome forced the web to go 100% HTTPS; the same thing will eventually happen with IPv6.
In practice, tech giants such as Google, Apple, and Microsoft dictate technology adoption. When Chrome starts mandating or heavily recommending IPv6, adoption will reach 99% overnight. That's what happened with HTTPS: https://www.znetlive.com/blog/google-chrome-68-mandates-http...
I went through this around a year ago. I wanted Postgres for Django apps, and I didn't want to pay the insane prices cloud providers charge for a replicated setup. I wanted a replicated setup on Hetzner VMs with full control over the backup process, deployment done with Ansible, and database servers that are effectively stateless (fully reproducible from the playbooks plus backups). If you vaporize both my Hetzner Postgres VMs simultaneously, I lose one minute of data. (If I lose just the primary, I probably lose less than a second thanks to real-time replication.)
I'll be honest: it's not documented as well as it could be. Some concepts, like the archive process and the replication setup, took me a while to understand, and I also had trouble working out what roles the various tools play. Initially I thought I could roll my own backups, but later I deployed pgBackRest instead. I deployed and destroyed VMs countless times (my Ansible playbook does everything, from VM creation via the Proxmox / Hetzner API to installing Postgres and setting up replication).
What is critical is testing your backup and recovery. Write some data, blow up your database infrastructure, and see if you can recover. You need a high degree of automation in your deployment to gain confidence that you won't lose data.
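A cheap way to make that drill repeatable (a hand-rolled sketch, not part of pgBackRest; the marker-row approach and both helper names are my own invention) is to insert numbered marker rows before blowing things up, then diff them against what comes back after the restore:

```python
def missing_markers(written: set[int], recovered: set[int]) -> set[int]:
    """Marker IDs that were written before the 'disaster' but did not
    survive the restore — the actual data loss for this drill."""
    return written - recovered

def within_rpo(written: set[int], recovered: set[int],
               writes_per_minute: int, rpo_minutes: float) -> bool:
    """True if the loss fits inside the recovery point objective,
    e.g. 'at most one minute of writes' with per-minute archiving."""
    lost = len(missing_markers(written, recovered))
    return lost <= writes_per_minute * rpo_minutes

# In the real drill, `written` comes from the client that inserted the
# markers and `recovered` from a SELECT against the restored database.
```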
My deployment looks like this:
* two Postgres 16 instances, one primary, one replica (realtime replication)
* both on Debian 12 (most stable platform for Postgres according to my research)
* ansible playbooks for initial deployment as well as failover
* WAL archive backups to rsync.net storage space (with ZFS snapshots) every minute
* full backups using pgBackRest every 24 hours, stored on rsync.net, Wasabi, and a Hetzner storage box
As you can guess, it was a massive investment and forced me to become a sysadmin / DBA for a while (though I went the DevOps route, with full Ansible automation and automated testing). I gained quite a bit of knowledge, which is great. But I'll probably have to redesign and seriously retest at the next Postgres major release. Sometimes I wonder whether I should have just accepted the cost of a managed cloud Postgres.
I've got a less robust version of this (also Ansible -> Hetzner) that I've toyed with. I'm often tempted to take it further, but I've realized it's a distraction. I say that about myself, not as a knock on the approach: I know I want to get some apps done, and the sysadmin-y stuff is fun and alluring, but it can chew up a lot of time.
Currently I'm viewing the $19 plan from Neon as acceptable (I just look in my Costco cart for comparison). Plus, I'm getting something for my money beyond not having to build it myself: branching. That has proved way handier than I expected as a solo dev, and I use it all the time. A DIY Postgres wouldn't have that, at least not as cleanly.
If charges go much beyond $19 while it's still just me faffing about, I'll probably look harder at the DIY Postgres. OTOH, if there's good real-world usage and/or money coming in, it's easier to view Neon as just a cost of doing business (within reason).
The server-side storage of a full plaintext archive with government access is by design.
The weak encryption is NOT by design; it's due to incompetent programmers.