
They kinda are: "This issue is to fully eliminate LLVM, Clang, and LLD libraries from the Zig project." https://github.com/ziglang/zig/issues/16270

Yes, as a backend. Clang as the `zig cc` frontend will stay (and become optional) to my knowledge.

libraries, not processes.

I find that a very bold move. How will they reinvent the man-years of optimization work that went into LLVM in their own compiler infrastructure?

They're just removing the obligate dependency. I'm pretty sure they will keep it around as a first-class supported backend target for compilation.

All that will still be available, just not in the main Zig repo. Someone might have asked the same question about LLVM back when the GNU compiler already existed.

Proebsting's Law: Compiler Advances Double Computing Power Every 18 Years

You need to implement very few optimizations to get the vast majority of compiler improvements.

Many of the papers about this suggest that we would be better off focusing on making quality of life improvements for the programmer (like better debugger integration) rather than abstruse and esoteric compiler optimizations that make understanding the generated code increasingly difficult.


As a comment about a particular project and its goals and timelines, this is fine. As a general statement that we should never revisit things, it's pretty offensive. LLVM makes a lot of assumptions about the structure of your code and the way it's manipulated. If I were working on a language today I would try my best to avoid it. The backends are where most of the value is, and why I might be tempted to use it.

We should be really happy that language evolution has started again. Language monoculture was really dreary and unproductive.

20 years ago you would have been called insane for throwing away all the man-years of optimization baked into Oracle (and, I guess, Postgres or MySQL if you were being low rent). And look where we are today: thousands of people can build databases.


I'm confused.

The IndexedDB UUID is "shared across all origins", so why not use the contents of the database to identify browsers, rather than the ordering?


There's an instructive example on the page. Suppose a page creates the databases `a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p`, then queries their order. They might get, for example `g,c,p,a,l,f,n,d,j,b,o,h,e,m,i,k`, based on the global mapping of database names to UUIDs.

The key vulnerability here is that, for the lifetime of that Firefox process, any website that makes that set of databases is going to see the exact same output ordering, no matter what the contents of those databases are. That makes this a fingerprint: it's a stable, high-entropy identifier that persists across time, even if the contents of those databases are not preserved. It is shared even across origins (where the contents would not be), and preserved after website data is deleted -- all a website has to do to re-acquire the fingerprint is recreate the databases with the same names and observe their ordering.
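As a rough simulation of the behavior described above (Python; `list_databases` is a hypothetical stand-in for `indexedDB.databases()`, and the single global name-to-UUID map models the shared Firefox state), the ordering comes out as the same permutation no matter which origin asks:

```python
import uuid

# Hypothetical model: one process-global map from database name to a
# random UUID, shared across ALL origins for the browser's lifetime.
global_uuid_for_name = {}

def list_databases(names):
    """Return the given database names sorted by their global UUID,
    mimicking the reported ordering of indexedDB.databases()."""
    for n in names:
        global_uuid_for_name.setdefault(n, uuid.uuid4())
    return sorted(names, key=lambda n: global_uuid_for_name[n].hex)

names = list("abcdefghijklmnop")
order_origin_a = list_databases(names)  # origin A creates a..p
order_origin_b = list_databases(names)  # origin B creates the same names
# Same permutation for both origins => a cross-origin fingerprint,
# carrying ~log2(16!) ≈ 44 bits of entropy for 16 databases.
```

Deleting site data and recreating the databases with the same names would, in this model, reproduce the same ordering, which is exactly why it works as an evercookie.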


As I understood it, not ANY website can see it. But the same website can see it even after you reset your identity in Tor Browser.

So it persists between anonymous sessions. You could connect User A, who logged out and reset their identity, to User B, who believed they were using a fresh anonymous session and logged in afterwards.


No, it does allow identification across different websites (the article says "both cross-origin and same-origin tracking"). Both websites just need to create some databases with the same names. Since the databases are origin-scoped, these aren't the same databases, so you can't just write some data into one and read it on another website. But it turns out that if two websites use the same names for all these databases, the order the list of databases is returned in is random-per-user but the same regardless of website.

OK, that's even worse. Thanks.

The content is obviously scoped to an origin, or IndexedDB would be a trivial evercookie.

It's the mapping of UUIDs to databases that is shared across origins in the browser. Only the subset of databases associated with an origin are exposed to that origin.

NSA never cared about rules.

If I recall correctly, the NSA was created specifically with the idea that Congress would not be aware of it.

That's incredibly stupid. Every part of the NSA, including its progenitor organization, went through congressional review and lawmaking. Incredibly stupid.

Maybe the NSA they are aware of is just the facade. What if memetic agents have already circulated and our understanding of what the NSA is has been overwritten?

I wonder what the new one is, now that everyone knows about the NSA.

"No Such Agency"

Text of the post has been [removed]. Original saved here: https://web.archive.org/web/20260403163241/https://old.reddi...


Maybe the moderators removed it for being AI spam. The user's entire post history besides this post is generated ads for their AI projects.


Thanks, we'll put that link in the toptext as well.


"solve security" - that's an April Fools joke if I ever heard one.


Given how shitty it looks and behaves, I was 100% sure this was an April Fools' joke. But after reading the serious comments here on HN, I'm not sure anymore...


You can certainly solve WordPress's well-known security issues by dropping WordPress; hard to argue with that.


Me too, main reason I switched to Debian.


It seems the commits aren't in proper date order. Here are some newer changes, placed before the latest commits: https://github.com/EnriqueLop/legalize-es/commits/master/?af...


It's related to commits actually having a parent-child structure (forming a graph) and timestamps (commit/author) being metadata. So commits 1->2->3->4 could be modified to have timestamps 1->3->2->4. I know GitHub prefers sorting with author over commit date, but don't know how topology is handled.


> It's related to commits actually having a parent-child structure (forming a graph) and timestamps (commit/author) being metadata.

Yeah, I think everyone is aware. It's just that the last couple dozen commits looked, to me, like they had been created in chronological order, so that topological order == chronological order.

> I know GitHub prefers sorting with author over commit date, but don't know how topology is handled.

Commits are usually sorted topologically.
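As a minimal sketch of the disagreement being discussed (made-up commit IDs and integer timestamps, linear history only), here is how topological order can differ from date order once timestamps are rewritten:

```python
# Hypothetical linear history c1 -> c2 -> c3 -> c4 (parent pointers),
# with the author dates of c2 and c3 swapped, as in the repo above.
commits = {
    "c1": {"parent": None, "date": 1},
    "c2": {"parent": "c1", "date": 3},
    "c3": {"parent": "c2", "date": 2},
    "c4": {"parent": "c3", "date": 4},
}

def topo_order(commits):
    """Walk from the tip back through parents (linear history), then
    reverse: parents-before-children, as `git log --topo-order` does."""
    tips = set(commits) - {c["parent"] for c in commits.values()}
    order, node = [], tips.pop()
    while node is not None:
        order.append(node)
        node = commits[node]["parent"]
    return order[::-1]

topo = topo_order(commits)  # follows the graph: c1, c2, c3, c4
by_date = sorted(commits, key=lambda c: commits[c]["date"])  # c1, c3, c2, c4
```

A listing that sorts by date instead of topology would show c3 and c2 out of graph order, which matches the "newer changes placed before the latest commits" observation above.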


To fight a thing, you must think about it.

The best way to avoid an -ism is to forget about it.

The fighters cannot forget, so they fall into a trap of their own making.


That is still a very good template for how a simple website should be written.


> A much more effective counter to this would be to rebalance the information asymmetry by giving citizens the tools to coordinate against state sponsored influence.

Which tools, specifically? I know none.


I mean that we are in dire need of such tools!

I also am not aware of any existing tools.

