AWDL is such an amazing technology that it's understandable Apple wants to keep it exclusive to their devices: it gives them a noticeable advantage for quick file sharing.
Google's QuickShare contains a reverse-engineered AWDL implementation that works on Pixel and a few other phones.
As for WiFi NAN: support for it seems rather limited outside of iOS and Android. From what I can tell the feature is barely tested on Linux and I can't find any generic Windows APIs for it either.
I've definitely used STA and AP modes concurrently on my Windows laptop with the operating system's built-in internet connection sharing function to help troubleshoot a problem in the field.
That was around a decade ago. It didn't take any extra effort on my part; I just told it to do the thing, and then it did that thing.
Researched this for a bit: there is some PCI hardware that supports it, but Windows 10/11 don't, and Linux support is still a work in progress. So there's no real OS-level support; it only works on some iOS/Android devices.
It might be interesting to use unused or extra WiFi cards to support this. My PC motherboard has both WiFi and Ethernet, and I only use the Ethernet; the WiFi card does absolutely nothing at all.
We have been using AWDL intensively (not via AirDrop but Network.framework) for 6+ years and it fails less than 5% of the time. That's pretty impressive for a non-connected link between devices. The most common problems we face are very high device density (100+ devices in a 30m² space) and devices wandering out of reach quickly (sometimes at as little as 5m).
The law is not about you, but about everyone:
1) Apple doesn't have service centers everywhere: some countries/cities/small towns don't have them
2) Apple doesn't provide service for older devices
3) making it easier doesn't mean you'll be able to swap them live as we did in the 90s, but it means you could do it at home with a reasonable set of tools instead of sending the device to some shop that would need to unglue, unsolder, ...
I'm assuming there were transaction IDs provided that can be given to the processor. If they can't do anything with the IDs, then that's a pretty broken system.
You are missing one important part: maintenance. On a managed service, dozens of hours of maintenance are done by someone else; when you are self-hosting, you'll be doing three times that, because you can't know all the details of making so many tools work, because each tool will have to be upgraded at some point and the upgrade will fail, because you have to test your backups, and many many more things in the long run.
So yeah, it's fun. But don't underestimate that time: it could easily be time spent with friends or family instead.
Keeping services running is fairly trivial. Getting to parity with the operationalization you get from a cloud platform takes more ongoing work.
I have a homelab that supports a number of services for my family. I have offsite backups (rsync.net for most data, a server sitting at our cottage for our media library), alerting, and some redundancy for hardware failures.
Right now, I have a few things I need to fix:
- one of the nodes didn't boot back up after a power outage last fall; need to hook up a KVM to troubleshoot
- cottage internet has been down since a power outage, so those backups are behind (I'm assuming it's something stupid, like I forgot to change the BIOS to power on automatically on the new router I just put in)
- various services occasionally throw alerts at me
I have a much more complex setup than necessary (k8s in a homelab is overkill), but even the simplest system still needs backups if you care at all about your data. To be fair, cloud services aren't immune to this, either (the failure mode is more likely to be something like your account getting compromised, rather than a hardware failure).
You're spending that much time on it because you're doing too much. Your use of the term "homelab" is telling. I have:
* A rented VPS that's been running for ~15 years without any major issues, only a couple hours a month of maintenance.
* A small NUC-like device connected to the TV for media. Requires near-zero maintenance.
* A self-built 5-drive NAS based around a Raspberry Pi CM4 with a carrier board built for NAS/networking uses. Requires near-zero maintenance.
* A Raspberry Pi running some home automation stuff. This one requires a little more effort because the hardware it talks to is flaky, as is some of the software, so maybe 2-3 hours a month.
The basics (internet access itself) are just a commodity cable modem, a commodity router running a manufacturer-maintained OpenWRT derivative, a pair of consumer-grade APs reflashed with OpenWRT, and a few consumer-grade switches. There's no reason for me to roll my own here, and I don't want to be on the hook for it when it breaks. And if any of the stuff in the bulleted list breaks, it can sit for days or weeks if I don't feel like touching it, because it's not essential.
And yes, I've had hardware failures and botched software upgrades. They take time to resolve. But it's not a big burden, and I don't spend much time on this stuff.
> I have a much more complex setup than necessary
Yup.
> Getting to parity with the operationalization you get from a cloud platform takes more ongoing work.
You don't need this. Trying to get even remotely there will eat up your time, and that time is better spent doing something else. Unless you enjoy doing that, which is fine, but say that, and don't try to claim that self-hosting necessarily takes up a lot of time.
It's definitely mostly a hobby, but I also want to get something close to the dependability of a cloud offering.
I started small, with just a Raspberry Pi running Home Assistant, then Proxmox on an old laptop... growing to what I have now. Each iteration has added complexity, but it's also added capability and resiliency.
I love self-hosting and run tons of services that I use daily. The thought of random hardware failures scares me, though. Troubleshooting hardware failure is hard and time consuming. Having spare minipcs is expensive. My NAS server failing would have the biggest impact, however.
Other than the firewall (itself a minipc), I only have one server where a failure would cause issues: it's connected to the HDDs I use for high-capacity storage, and has a GPU that Jellyfin uses for transcoding. That would only cause Jellyfin to stop working—the other services that have lower storage needs would continue working, since their storage is replicated across multiple nodes using Longhorn.
Kubernetes adds a lot of complexity initially, but it does make it easier to add fault tolerance for hardware failures, especially in conjunction with a replicating filesystem provider like Longhorn. I only knew that I had a failed node because some services didn't come back up until I cordoned and drained the node (looks like there are various projects to automate this—I should look into those).
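For reference, the manual version of that recovery is only a couple of commands. A sketch, assuming the failed node is called `node-2` (the name is made up; these need a live cluster to run against):

```sh
# Mark the node unschedulable so nothing new lands on it.
kubectl cordon node-2

# Evict its pods so they get rescheduled onto healthy nodes.
# DaemonSet pods can't be evicted, hence the flag; emptyDir data
# lives on the node's local disk and will be lost.
kubectl drain node-2 --ignore-daemonsets --delete-emptydir-data

# Once the hardware is fixed, allow workloads back:
kubectl uncordon node-2
```

The automation projects mentioned essentially watch node health and run this sequence for you.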
Sure - self hosting takes a bit more work. It usually pays for itself in saved costs (e.g. if you weren't doing this work yourself, you'd be paying for it with money you had to do other work to earn).
Cloud costs haven't actually gotten much cheaper (even though the base hardware HAS - even now, at today's inflated prices), and now every bit of software tries to bill you monthly.
Further, if you're not putting services open on the web - you actually don't need to update all that often. Especially not the services themselves.
Honestly - part of the benefit of self-hosting is that I can choose whether I really want to make that update to latest, and whether the features matter to me. Often... they don't.
---
Consider: most people are running outdated ISP-provided routers with known vulnerabilities that haven't been updated in literally years. They do ok.
Much easier with AI. I went from an all-in-one web hosting package + NAS to Hetzner Storage Share and a separate email provider (Runbox). After a short time I dumped the Nextcloud instance and moved on to a Hetzner VPS with five Docker containers, Caddy, proper authentication and all, plus a Storage Box. Blog/homepage on Cloudflare Pages, fed by GitHub; domains from CF and Porkbun; Tailscale, etc., etc. ad nauseam; the NAS is still there.
Most of this I didn't do for many years because it is not my core competence (in particular the security aspects). Properly fleshed-out explanations from any decent AI will catapult you to this point in no time. Maintenance? Almost zero.
p.s. Admittedly, it's not a true self-hosting solution, but the approach is similar and ultimately leads there as well.
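For anyone curious, the Caddy-in-front-of-containers pattern described above can be as small as this compose file; the service names, images, and domain are all placeholders, not the actual setup:

```yaml
# docker-compose.yml sketch: Caddy reverse-proxying one app container.
# "app" and the image names are hypothetical stand-ins.
services:
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data   # persists TLS certificates across restarts
  app:
    image: nginx:alpine    # stand-in for whatever you self-host

volumes:
  caddy_data:
```

The matching Caddyfile is a single stanza like `app.example.com { reverse_proxy app:80 }`; Caddy obtains and renews the TLS certificates on its own, which is a big part of why this stack is so low-maintenance.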
Same for me. I’m not an engineer (but worked with them for 2 decades) and AI has been amazing for me self hosting.
For example I could never setup Traefik correctly because I just found it too complicated. Now I have Claude I finally got it setup just the way I want it - the ROI on my Claude subscription has been off the scale!
The obvious downside is that I might not really know what exactly I’m implementing and why. I do read all the explanations that Claude gives but it’s hard to retain this information. So there are pros and cons to relying on AI for this kind of stuff I suppose
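For anyone in the same boat: the part of Traefik that tends to trip people up is the label-based routing, and a minimal compose sketch makes it less mysterious. The domain and service names below are made up; this is an illustration, not a recommended production config:

```yaml
# Traefik discovers containers via the Docker socket and builds
# routes from their labels. "whoami.example.com" is a placeholder.
services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports: ["80:80"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami   # tiny demo service that echoes request info
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=web
```

The key idea: Traefik itself is mostly static flags, and each routed service carries its own routing rules as labels, so adding a service never means editing the proxy's config.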
Agreed. NixOS + Tailscale is 99% there for me. Using Claude Code to deal with whatever other package I need built with nix while I'm working on $day_job things helps get me to a fully working system. Besides the fact that running containers via podman or docker (your choice) is super easy via a NixOS config.
Combine that with deploy-rs or similar and you have a very very stable way to deploy software with solid rollback support and easy to debug config issues (it's just files in the ./result symlink!)
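As a concrete illustration of how small the container part is on NixOS, here is a configuration fragment using the `virtualisation.oci-containers` options; the container name, image, and paths are hypothetical:

```nix
# configuration.nix fragment: a declaratively managed OCI container.
{
  virtualisation.oci-containers = {
    backend = "podman";
    containers.jellyfin = {
      image = "jellyfin/jellyfin:latest";
      ports = [ "8096:8096" ];
      volumes = [ "/srv/media:/media:ro" ];
    };
  };
}
```

NixOS turns each entry into a systemd unit, so the container participates in the same atomic rebuild/rollback story as the rest of the system.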
There are a lot of people that have made a lot of money and careers because developers in particular don't want to know or don't care to know how to manage this stuff.
They need to get over it.
Pick up some Ansible and/or Terraform/OpenTofu and automate away. It can be as easy or as involved as you want it to be.
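To give a flavor of the "automate away" part, a minimal Ansible playbook for routine maintenance might look like the sketch below; the inventory group and service name are hypothetical:

```yaml
# playbook.yml: apply pending updates and make sure a service is up.
- hosts: homelab            # hypothetical inventory group
  become: true
  tasks:
    - name: Upgrade packages (Debian/Ubuntu hosts)
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Ensure the web server is running and enabled
      ansible.builtin.systemd:
        name: caddy         # hypothetical service name
        state: started
        enabled: true
```

Because the tasks are idempotent, rerunning the playbook is always safe, which is exactly the property that makes the monthly maintenance burden small.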
Yes, I do agree with that sentiment. There are times when I'm spending way too much time restarting a service that went down, but it doesn't take as long as it used to, especially with AI assistance nowadays. And if I'm spending too much time on it, I'm probably also learning something along the way, so I don't mind spending that time.
Self hosting doesn't take as much effort as the cloud providers want us to believe.
I've been renting a VPS for 15-20 years from a small provider. It runs a webserver, gitea instance, matrix homeserver, and a bunch of other things, and I spend maybe an hour or two per month maintaining it. Add a few non-recurring hours if I want to set up something new or need to change something big.
Self hosting is not hard. It's not scary. It's not a security nightmare. All of that is just FUD.
Interestingly, Kotlin has a pretty solid cross-platform story.
I'd pick it over Swift if targeting Android since it can build and run on the JVM as well as natively -- and it has Swift/ObjC interop. It's also very usable on the server if you want, since you can use it in place of Java and tap into the very mature JVM ecosystem. If that's what you're into.
And I have a lot more faith in JetBrains being good stewards of the language rather than Apple, who have a weird collection of priorities.
Kotlin is practically a no-brainer when you have the JVM at your fingertips, versus something like Swift, which is comparatively young.
I tried to use Vapor with Swift recently and struggled to get something working because the documentation looked comprehensive, but had a lot of gaps. I ended up throwing it out because I didn't have the time to dig through the source to understand how to do something, when I could use a mature framework in any other language instead.
The promise is there but I'm just not ready to invest. My youthful days of unbounded curiosity are coming to an end and these days I just want to get something done without much faff.
Mind you, Kotlin/Native (which is what gets used when you're compiling for iOS) doesn't have access to the JVM.
However, the Kotlin community is fundamentally all about open source, whereas Apple and iOS devs have an allergy to it. The quality and quantity are already miles above the vast majority of what's in the Swift ecosystem. https://klibs.io has all the native-compatible libs. And if you're targeting a platform where the JVM is available, then yeah, it's massive. Compose makes UI tolerable compared to AWT too. Even large projects like Spring are Kotlin-first nowadays.
JetBrains has a monetary interest in promoting Kotlin beyond Android; Apple has zero incentive to promote Swift as a language outside of iOS and Mac. They don't need to capture developers' minds to get them developing for Apple devices.
The language doesn’t really matter. The underlying SDK/framework is where the action is at.
However, I suspect that we may not be too far off from LLMs being the true cross-platform system: you feed in the same requirements with different targets, and it generates full native apps.
Fully agree. I have zero Swift knowledge and am currently using an LLM to write a native app. I'm well aware of the SDKs and concepts in iOS development, so even if something's wrong I have an intuition for where to look and how to make the LLM fix it.
This is going to be used much more than Swift for servers. Swift is a primarily client-side mobile language. It makes sense that you tap into reusing the logic.
By excellent, you mean excellent at not being able to talk to someone about your real world problem and need to rely on your linkedin contacts to find someone to talk to?
In any serious business, you don't want people to use their personal Apple IDs: that could lock their company-provided devices forever when they leave, and you also don't want to buy them apps that you won't be able to re-use when they leave, ...
The poster has built something that, while technically interesting, is profoundly annoying as a user and deserves the backlash, to prevent more of this kind of stuff from being built.