Hacker News | techcode's comments

Where did "AI for inference" and "semantic tagging" come from in this discussion? For code repositories, AIs/LLMs are typically doing reviews/tests/etc - I'm not sure where semantic tagging fits, even if it were done manually by humans.

And besides that - have you actually tried/tested whether "the amount of inference required for semantic grouping is small enough to run locally"?

While you can definitely run local inference on GPUs [even ~6 year old GPUs, and it wouldn't be slow], on ordinary CPUs it's annoyingly slow (and takes up 100% of all CPU cores). Supposedly unified memory (Strix Halo and such) makes it faster than an ordinary CPU - but it's still (much) slower than a GPU.

I don't have Strix Halo or that type of unified memory Mac to test that specifically, so that part is an inference I got from an LLM, and what the Internet/benchmarks are saying.
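A rough illustration of why CPU inference lags: single-stream token generation is roughly memory-bandwidth bound, since each generated token streams the full weight set from memory. A back-of-envelope sketch (all numbers below are illustrative assumptions, not measurements of any specific hardware):

```python
# Back-of-envelope: decode speed ~ memory bandwidth / bytes read per token.
# For single-stream generation, "bytes per token" is roughly the model size.

def rough_tokens_per_sec(model_gb: float, bandwidth_gbs: float) -> float:
    """Each generated token streams all weights once from memory."""
    return bandwidth_gbs / model_gb

# Illustrative bandwidth classes (GB/s) for a ~4 GB quantized model:
cpu = rough_tokens_per_sec(model_gb=4.0, bandwidth_gbs=60)      # dual-channel DDR5-ish
unified = rough_tokens_per_sec(model_gb=4.0, bandwidth_gbs=250) # Strix Halo / Mac class
gpu = rough_tokens_per_sec(model_gb=4.0, bandwidth_gbs=900)     # discrete GPU class

print(f"CPU ~{cpu:.0f} tok/s, unified ~{unified:.0f} tok/s, GPU ~{gpu:.0f} tok/s")
```

This matches the ordering in the comment: unified memory sits well above ordinary CPUs but still well below a discrete GPU.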


Perhaps I'm missing something... If your commits are not all independent, I don't see how they could ever be pulled/merged independently.


The way Gerrit handles this is to make a series of PR-like things that are each dependent on the previous one. The concept of "PR that depends on another PR" is a really useful one, and I wish forges supported it better.
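As an aside, a stacked-changes workflow can be approximated in plain git as branches based on one another - a minimal sketch (repo, file, and branch names are made up for illustration, not anything Gerrit-specific):

```shell
# Sketch: a stack of dependent changes as branches, each based on the previous.
set -e
git init -q stack-demo && cd stack-demo
git config user.name demo && git config user.email demo@example.com
echo base > app.txt
git add app.txt && git commit -qm "base"       # stand-in for main's tip
git branch -M main

git checkout -q -b refactor-db main            # change 1
echo refactor >> app.txt
git commit -qam "refactor db layer"

git checkout -q -b add-cache refactor-db       # change 2, depends on change 1
echo cache > cache.txt
git add cache.txt && git commit -qm "add cache on top of the refactor"

# Review feedback lands on change 1; replay change 2 onto the new tip:
git checkout -q refactor-db
echo feedback >> app.txt
git commit -qam "address review feedback"
git checkout -q add-cache
git rebase -q refactor-db
git log --oneline main..HEAD                   # the stack: three commits
```

The pain point is exactly the manual `rebase` step: forges that model "PR depends on PR" do that replay (and the per-change review state) for you.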


TL;DR: school/tests/exams don't allow phones.

Here in NL - Casio FX-82NL is allowed during test/exams for middle/high school, and actually for Radio Amateur/HAM licence exam - they even hand you one of their FX-82NLs.

Other, more advanced calculators (graphing, with memory/Python/etc) are also allowed in some places, but they need to be set to an exam mode that disables the memory/Python/etc.


The thing that Ukraine and the Arab Spring have in common is that the same folks who managed to bring Milošević down in Serbia (known as Resistance/Otpor) later went on to talk to/train protesters in Ukraine, Egypt ...etc.

Check out #Post Milošević; and #Legacy; sections on https://en.wikipedia.org/wiki/Otpor (couldn't figure out how to get deeplinks on mobile).

TL;DR: Besides Ukraine and Egypt, they went to a few more places, in some it worked, in others it didn't. And there were revelations of foreign (e.g. USAID) funding.


Of course you can fake a small/large crowd in a protest.

Off the top of my head I can think of news reporting both "a few (tens of) thousands" vs "hundreds of thousands" (different outlets reporting different numbers/estimates/etc) for the 2025 protests in Serbia/Belgrade, as well as those comparisons of Obama vs Trump inauguration news/photos.

Meanwhile to you as an individual there on the spot - both crowds of say 50K-100K and 1M+ look basically the same = "huge amounts of people in every direction that you look".


Counting large crowds is hard, but the tools continue to improve: we have increasingly advanced drone photography and access to better AI tools to generate more reliable estimates.

If crowd sizes become a significant point of contention it'll become increasingly commonplace for multiple parties to take lots of aerial video and photos that serve as independent verification. You could probably get a pretty accurate estimate of how many people show up to an event by sending drones to take photos every 15 minutes.
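For what it's worth, the classic manual version of this is Jacobs' method: estimate the occupied area from aerial photos, then multiply by an assumed crowd density. A toy sketch (the densities are standard rules of thumb in people per square metre; the area is an invented example, not a real measurement):

```python
# Jacobs' method sketch: crowd size ~ occupied area x assumed density.
DENSITY = {"loose": 1.0, "dense": 2.5, "packed": 4.0}  # people per m^2

def estimate_crowd(area_m2: float, packing: str) -> int:
    return round(area_m2 * DENSITY[packing])

# Example: a 200 m x 100 m square. Even with the area fixed,
# the density assumption alone spans a factor of 4.
area = 200 * 100
low = estimate_crowd(area, "loose")
high = estimate_crowd(area, "packed")
print(f"between {low:,} and {high:,} people")
```

This is also why drone photos taken at intervals help so much: they pin down both the area and the density per zone, shrinking exactly the interval that gets spun.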

In any case, I think the problem you highlight is more focused towards the upper-end, while I was thinking about the lower end of the spectrum. Where some people might be very vocal online, but they're unable to gather more than a dozen or two people for any given protest. If a protest is gathering an unknown number of people that ranges between 100k and 1 million that sounds like a really good problem to have.

Your criticism of inconsistent crowd estimates is valid. I'm not sure if newspapers have published the set of tools and criteria that they use when generating these estimates, so that's an area where it would be great to see increased transparency.


While 100K itself is indeed impressive - the order of magnitude difference between 100K and 1M makes a lot of room for interpretations, rationalizations, spins ...etc.

The "publishing the set of tools and criteria used to generate estimates" is happening, and so far it seems that usually doesn't matter.

It doesn't matter because of course those sources/news that report wildly wrong (be it larger or smaller numbers) are usually (not always, but very commonly) controlled by the governments.

So despite the students who organized the biggest protests in Belgrade giving their own estimates (based on a combo of RSVPs and how many people from other cities were accommodated), and despite those being close to independent research (using drone footage, VR/AR crowd simulations, AI) with loads of posts/videos providing detailed explanations ...

Most "ordinary people" saw (and keep seeing) just the "official version".


In the age of centralized broadcasting where everyone watched the same TV channels ...

Those TV channels were virtually always (and to this day still are) controlled by "the government".

Meanwhile other TV channels - if there even were any, and if enough people even had a chance to watch them (because of limited frequency/transmission allocations, artificial limits on cable distribution ...etc) - were and still are labeled as "funded by foreign (state) actors trying to destabilize our independence/values/etc".

And it's more of the same online.

---

This reminds me of an old website that's an absolute gold mine.

Knock yourself out https://changingminds.org/explanations/theories/minority_inf...


I haven't gotten around to experimenting with https://wiki.calculate-linux.org/templates and https://old.calculate-linux.org/main/en/calculate-assemble

TL;DR: you can pre-configure and keep updating/building new versions of your own live-boot image of Gentoo/Calculate. Which kind of gets you "previous known-good builds", just the other way around.

Oh and the other thing I also never needed to use is update/rescue of a Gentoo/Calculate installation through its flip-flopping between two root partitions.

Calculate installer by default creates two root partitions, but I've only ever used one. And so far `cl-update` never broke the system - even when I was so far behind that my version of python and glibc got masked (or maybe even removed).

Back on vanilla Gentoo - being that far behind usually meant it was easier to reinstall Gentoo from stage3 :D


When you say "There are precompiled versions of big popular binaries" - were you thinking of "firefox-bin" and such?

I think that for some years already - Gentoo has been providing binaries for "normal" packages - as long as your config/use-flags match (and if you turned on the option/flag to use binary packages).

And of course places with more than just a few Gentoo boxes were usually already running their own BINHOST setups long time ago.
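For context, opting into Gentoo's official binary packages looks roughly like this - a sketch based on the Gentoo binhost setup; the exact sync-uri depends on your arch and profile version, so treat that line as a placeholder:

```
# /etc/portage/make.conf -- opt in to fetching binary packages
FEATURES="${FEATURES} getbinpkg binpkg-request-signature"

# /etc/portage/binrepos.conf -- official binhost; check the Gentoo
# docs for the exact path for your arch/profile
[gentoobinhost]
priority = 1
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/<profile-version>/x86-64/
```

With that in place, emerge falls back to compiling from source whenever your USE flags diverge from what the binhost built - which is the "as long as your config/use-flags match" caveat above.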


Download image from https://www.calculate-linux.org/ - put it on USB flashdrive.

And it's literally yes, yes, next, next - the defaults are pretty good.

1) Calculate Linux is 100% Gentoo with more profiles (e.g. server, desktop-kde, desktop-gnome ...etc) and after switching from vanilla Gentoo to Calculate - I didn't need to tweak any use flags of any packages.

Profiles are so good that everything works nicely together.

2) There are prebuilt binaries for your profile/use-flag combo - I can't recall the last time I had to wait for something to compile.

3) Much less likely to happen since you get binaries for everything - but there's additional cl-xxx tooling that makes even that easier

4) I don't think that's a bad thing. Though sure I could agree that having option to automatically restart services would be nice.

5) Yes - and you can also archive and basically have git-log on conf changes.


> Calculate Linux is 100% Gentoo with more profiles (e.g. server, desktop-kde, desktop-gnome ...etc) and after switching from vanilla Gentoo to Calculate - I didn't need to tweak any use flags of any packages.

If that's your thing, sure. I find even Gentoo too automated for my preferences. I'm using the most basic from the available profiles and tweak everything manually in package.use. I stopped using openrc and switched to just sysvinit/inittab.

But then, if you want binary packages and such, why use Gentoo or a fork?


This "I'm using the most basic from the available profiles and tweak everything manually in package.use" sounds like me 10-15 years ago.

Then one day at work I wanted to print something and I think I needed to add LDAP and CUPS use flags ... Rebuilding world with those new flags was not finished by the time I was back from lunch break, or maybe it even failed.

Then I discovered Calculate, and its desktop (e.g. KDE) profile turned out to have all those useful use flags already set.

Anyway ...

IMHO main reason to choose/stay with Gentoo/Calculate is flexibility and choice (like not having to use systemd, but also being able to). Habit is a part too - though due to work I've got familiar with CentOS and Ubuntu.

I don't necessarily want binary packages. Sure they are handy/convenient for speed/ease/etc. And even though I can't recall last time I needed to tweak some package/feature use flag (maybe V4L2 virtual camera in OBS?) - I really don't want to give that flexibility up ... As without it - it would be back to manually figuring out compile/run-time dependencies when all you want is just slightly differently configured/built package.
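For comparison, the lunch-break rebuild above came from a global flag change along these lines (an illustrative sketch - `cups` and `ldap` are real global USE flags, but the config is made up):

```
# /etc/portage/make.conf -- the kind of global USE flags a desktop
# profile (like Calculate's) typically sets for you already
USE="${USE} cups ldap"

# After a global flag change, affected packages need rebuilding:
#   emerge --ask --update --deep --newuse @world
# ...which is the rebuild that can eat a whole lunch break (or fail).
```

With a profile that already carries those flags, the matching binaries exist on the binhost and there's nothing to rebuild.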


Try https://www.calculate-linux.org/ - it's 100% Gentoo, with prebuilt binaries ;)


Doesn't Gentoo itself offer binary installation now?


Yes it does.

AFAIK Calculate provides more profiles (predefined set of use flags) - instead of just Gnome or KDE/Plasma - it also has Cinnamon, LXQt, MATE and Xfce, as well as one for server(s).

And Calculate also provides binaries for those profiles.

