Hacker News: monocasa's comments

For Voyager 1, Jupiter's gravity assist was the only one that increased velocity; the other flybys ultimately sapped it.


There's also SECCOMP_RET_USER_NOTIF, which is typically used by container runtimes for their sandboxing.

SECCOMP_RET_USER_NOTIF seems to involve sending a struct over an fd on each syscall. Do they really use it? Performance ought to suffer.

gVisor (aka runsc) is a container runtime as well, and it doesn't gatekeep syscalls but re-implements them in userland.


SECCOMP_RET_USER_NOTIF appears to switch between the tracee and tracer processes for each syscall. Using SECCOMP_RET_TRAP to trigger a SIGSYS for every syscall in IO-intensive apps introduces 5% overhead (and avoids a separate tracer).

I wonder if there's any mechanism that works for intercepting static ELFs like Go programs and such.


They use a seccomp filter to decide which syscalls get sent to the other process for processing.

35B, but your point stands I think.

Totally agreed. For instance that's why seL4 just throws the whole binary at the proof engine. That takes any runtime (minimal in seL4's case) and compiler (not nearly as minimal) out of the TCB.

Has anyone tried to horizontally scale Jellyfin by running it on a multi-node cluster?

I want to set it up for around 20 households to share, and with transcoding that exceeds what a single (cheap) node can handle.


For hardware acceleration you might be interested in the remote hardware acceleration strategy...

https://jellyfin.org/docs/general/post-install/transcoding/h...

The Jellyfin DB itself is unfortunately SQLite instead of being DB agnostic. Maybe you could hack together something such that only one node handles writes and everyone else handles reads... if getting multiple cheap nodes gets you more bandwidth. I have to imagine that Jellyfin fairly quickly stops being in charge of the media stream directly.

But yeah, I think the transcoding and the size of your data pipe are the only "hard" parts. The DB reads/writes themselves aren't going to be an issue (I think).


The database changes late last year laid the groundwork for other database engines [1].

[1] https://jellyfin.org/posts/jellyfin-release-10.11.0/#the-lib...


That's assuming Jellyfin ever fixes the mountain of bugs from the "upgrade". They aren't even acknowledging major bugs that make Jellyfin unusable for something like 20% of users.

Do not upgrade Jellyfin if you have a sizeable library. Back up first if you do.


FWIW, I have managed 10 simultaneous live transcoded streams on an Arc B580, and it could have managed a few more. With a couple of them you could be fine.

The other aspect is you could share the media storage over NFS and have multiple Jellyfin instances running for different household groups.

With 2 or 3 nodes like that I think you could make it work.


Jellyfin isn't designed to be clustered.

For your use case, deploying multiple instances would be the way to go.


I only have a CPU with hardware acceleration for transcoding, and even that can handle a couple of simultaneous transcoding streams. The biggest thing is getting people to use clients that support direct play.

Yes, it was annoying; SQLite sucks as a single source of truth for clusters. And it cost less than $100 to just buy hardware that can handle multiple high-res transcoding sessions at once, though not 20 households' worth.

And that Plex really pissed off the community by changing its pricing model.

I mean, as someone who was in that situation as a customer, we couldn't find a great cloud option for our needs, and we ended up building our first hardware lab with a bunch of macs.

It definitely caused us to buy macs we would have rented and shared.


Correct, us as well, but we're mainly harvesting refurbished Mac Minis.

My biggest problem is the lack of a good CI/CD flow when you can't work with images and virtual machines. We're using Ansible now to manage the fleet, and I'm not a fan.

If they would allow more than 2 VMs, we'd still buy the hardware; we'd just buy larger machines and run more virtual machines on them. Very likely we'd also use Linux as the host.

I hope one day Apple sees the light like Microsoft also did, but I’m not hopeful.


A big part of the reason is that Orion (and Apollo) reentry speeds are way higher due to the orbital mechanics involved in going to the moon and back. Today's was actually the fastest manned reentry ever attempted.

For reference the shuttle generally reentered at ~17.5K mph, and today's was 24K-25K mph.

It's not clear that we could build a craft with wings that could survive that. So then you're looking at adding fuel just to slow down, plus fuel for the weight of the wings themselves, plus fuel to carry all this extra fuel to the right place, etc.


Since these speeds are hard to grasp: That's about 416 miles a minute, or 7 miles a second. It traveled a mile in about the time it takes for a person to blink.

What would prevent them from entering into an orbit around Earth for a day or so and use that to slow down? Is that possible and would that make the reentry less risky?

Possible, but far too expensive due to all of the fuel that would have to be carried the entire way there and back. Expensive in a monetary sense, absolutely, but also in the sense that much less mass would be available for every other component of the mission.

A flash drive with a port on each side (one RO and the other RW) would be neat.

Why not a simple switch, not unlike on SD cards (but implemented on the device, not host/reader, and enforced by said device)?

Though yes, two USB ports would definitely work; it's just that the concept might be better served by providing two different connectors (e.g. USB-A & USB-C), as is common nowadays.


Yes.
