I think this is sanewashing their idea in the modern context of it having failed. I think at the time, they thought VR would be the next big thing and wanted to become the dominant player via first-mover advantage.
The headsets don’t really make sense to me in the way you’re describing. Phones are omnipresent because it’s a thing you always just have on you. Headsets are large enough that it’s a conscious choice to bring it; they’re closer to a laptop than a phone.
Also, the web interface is like right there staring at them. Any device with a browser can access Facebook like that. Google/Apple/Microsoft can’t mess with that much without causing a huge scene and probably massive antitrust backlash.
I think headsets might work, but I think Meta trying to use their first mover advantage so hard so early backfired. Oculus, as a device, became less desirable after it required Facebook integration.
It's kind of like Microsoft with Copilot: the idea of having an AI assistant that can help you use the computer is great. But it can't be from Microsoft, because people don't trust them with that.
I think it’s reasonably easy to design a system that’s documentable and documented. It’s very, very hard to maintain and iterate on a system while maintaining those properties.
Hacky things will make their way in because it takes a month to do the documentable thing and a week to ship the hacky thing.
It takes a lot of skilled people from varying disciplines to figure out what things are going to survive long enough and be important enough to spend the resources doing the right thing instead of the hacks.
It bites both ways. I’ve seen core business products crippled by years of digital duct tape, but I’ve also seen internal tooling that never really becomes useful because they insist on doing the “correct” thing and it’s constantly a year behind what we need it to do.
Let me give you an example of what I'm referring to. I used to work on a big enterprise desktop app: it had a complex, dockable user interface and a quite sophisticated data management layer that you were supposed to integrate with to display and query data.
These core services were extremely well designed and documented, and if you wrote a component using them, you could be reasonably certain the UI displayed correctly, behaved consistently etc.
But imo even more importantly, due to the patterns these components enforced, if you wrote a component like this, chances were somebody could go in and read your code and understand what it did, and if you depended on external stuff, even then it was very clear on what and how.
A lot of the code was using this correctly, but some wasn't, for whatever reason. And the components that didn't were utter hell to work with: they could change state at any time, did things their own way, depended on implementation details, etc.
To work with that code, you essentially had to understand the whole code base: which services call where, in what order things happen, and so on.
Behavior of those components was essentially 'what the code did' and therefore undocumentable.
For various reasons, the amount of code belonging to the latter category grew and grew, and eventually it infected the core systems: the invariants described by the documentation could basically no longer be expected to hold in any scenario, and the whole thing became an undocumentable ball of twine, where the code did things not because they made sense, but to work around bugs and quirks of other pieces of code.
There are legit reasons, at least for two nested sessions: a production network that’s airgapped except for a bastion host that acts as a gateway. It’s better than port forwarding because you have to authenticate to the bastion host before the RDP chaining, and the second RDP session often requires separate credentials.
It’s a semi-common setup for higher-security environments, and for when you have a network of stuff with known vulnerabilities you can’t patch for whatever reason. Traffic in and out is super carefully firewalled. It’s not great, but it’s better than a 25-year-old MySQL server with a direct public IP.
> airgapped except for a bastion host that acts as a gateway
First time I've heard of an airgapped system you could access remotely. Doesn't that kind of defeat the label "airgapped"? I think I'd just call that "isolated" at that point instead.
This concept is related to PAM.
You often have to do ops on infra, and you need some DMZ to do the ops from. In regulated industries you have to record every operation a person performs and follow the principle of least privilege. That's what should happen in an ideal world.
> You often have to do ops on infra and need some DMZ to do the ops.
This makes sense; "bastion" hosts and similar things are fairly common too. What's not common is calling those "airgapped", because they're clearly not.
Air gapped means... there is nothing except air in the gap between systems.
A physical tether would defeat it.
Now, a pedant could start talking about wifi, but air-gapping is a concept older than the internet. (It stems from plumbing, where an air gap prevents back-leakage of contamination.)
This is not the first time I’ve seen this, and it’s misleading. Your brain needs glucose, as does the rest of your body. You do not need to eat glucose, your body can synthesize it from non-glucose sources. You can absolutely survive on a diet with 0 glucose.
I don’t have an issue with people eating sugar, but it is not a necessary nutrient.
I think you could work around send(). Not a Ruby person, but in most languages you could store functions in a hashmap, and write an implementation of send that does a lookup and invokes the method (passing the instance pointer through if need be).
Won’t work with actual class methods, but if you know ahead of time that all the functions it will call are dynamic, then it’s not a big deal.
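A minimal sketch of that lookup-table approach, in Python as a stand-in since the comment says "most languages"; the class and method names here are made up for illustration:

```python
# Hypothetical example: replacing reflective dispatch (like Ruby's send)
# with an explicit lookup table built ahead of time.

class Greeter:
    def hello(self, name):
        return f"hello, {name}"

    def goodbye(self, name):
        return f"goodbye, {name}"

# Every dynamically-callable method is registered up front, so no
# runtime reflection is needed.
DISPATCH = {
    "hello": Greeter.hello,
    "goodbye": Greeter.goodbye,
}

def send(obj, method_name, *args):
    # Look the function up and pass the instance pointer through explicitly.
    fn = DISPATCH[method_name]
    return fn(obj, *args)

g = Greeter()
print(send(g, "hello", "world"))  # hello, world
```

As the comment notes, this only covers methods you registered in advance; anything resolved truly dynamically at runtime still needs real reflection.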
I have the voice assistant on Mike hooked up to Claude and it does most of the things I’d want OpenClaw to do.
I’m not generally interested in having it read my email or calendar. I have a digital calendar in the kitchen, and I rarely get important email. I do really enjoy being able to control my house by voice in natural language. I had it set all my lights to Easter colors a while back in a single instruction.
This is a messaging issue on their part, which I think is partially intentional.
It’s not unreasonable for people to expect the most expensive subscription plan to be “premium”. That’s how it works everywhere else. They typically have better margins on the premium plans, and the monthly payment gives them reliable cash flow at that higher margin.
You’re right that that’s not true at Anthropic (or really most AI providers). You’re not even really buying tokens: you get billed whether you use it or not, the tokens don’t carry over the way purchased API credits do, and they get to dictate what an acceptable way to use those tokens is. They are cheaper, though, assuming you actually use them. Which Anthropic et al. would really prefer you didn’t.
I think any of the webhook-based providers are better, because you can isolate your secrets. PRs go to a PR webhook that runs in an environment that just doesn’t have access to any secrets.
Releases go to the release webhook, which should output nothing and ideally should be a separate machine/VM with firewall rules and DNS blocks that prevent traffic to anywhere not strictly required.
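A minimal sketch of that per-webhook secret isolation, with hypothetical event and variable names. In a real setup you'd enforce this at the VM/firewall level too, not just in the handler process:

```python
# Hypothetical sketch: each webhook handler only ever sees the environment
# variables it is explicitly whitelisted for.
import os
import subprocess

# The PR handler gets no secrets at all; the release handler gets only
# what it strictly needs. (Names are made up for illustration.)
ALLOWED_ENV = {
    "pr": [],
    "release": ["DEPLOY_TOKEN"],
}

def build_env(event):
    """Return only the variables whitelisted for this event type."""
    return {name: os.environ[name]
            for name in ALLOWED_ENV[event]
            if name in os.environ}

def run_handler(event, cmd):
    # The child process sees only the whitelisted variables, so a
    # compromised PR build has no credentials to exfiltrate.
    subprocess.run(cmd, env=build_env(event), check=True)
```

The point is the same as in the comment: the blast radius of an untrusted PR build is bounded by construction, because the secrets were never in its environment to begin with.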
Things are a lot harder to secure with modern dynamic infrastructure, though. Makes me feel old, but things were simpler when you could say service X has IP Y and add firewall rules around it. Nowadays that service probably has 15 IP addresses that change once a week.
The information asymmetry is sort of wild to me. I can't really figure out an angle where this isn't bad for the same reasons insider trading is bad. I'm even okay with them having the info; I just think companies should be required to publish everyone's salaries. Redacting names is fine, but I should be able to look up the team I would join and see titles and salaries for everyone on it.
> They want actual season-long fans, so now if you transfer too many games they can track it and ban you. This is essentially anti-scalping. There's a legit justification.
This doesn't track for me. I can send someone else my QR code to use without actually transferring the tickets to them, unless they're checking ID; and if they're checking ID, then it doesn't matter whether the tickets are paper or digital.
I can't really see a way that digital tickets prevent something paper ones don't.
> At some point, you have to cut off previous technologies because virtually everyone's moved to something better. You also can't buy tickets any more by snail mail with an enclosed check.
That happens way less often than you'd think. I can still ride a horse on the road, I can still heat and cook with wood, I can still call customer support on a landline, I can still use email over a landline (dial-up). There are tons of things that were superseded decades ago that we still support.
It's certainly their choice to make (unless someone can make an ADA complaint or some kind of age discrimination case) but it seems like a shitty thing to do. If he can't use a computer or cellphone, they're clearly willing to _sell_ him tickets non-digitally like at a ticketing counter. Throw a cheap printer behind that counter and have the employee print them off. With the amount this guy is spending for tickets he'd probably buy the printer for them.