I find that a very bold move. How will they reinvent the wheel, reproducing the man-years of optimization work that went into LLVM in their own compiler infrastructure?
Proebsting's Law: Compiler Advances Double Computing Power Every 18 Years
You need to implement very few optimizations to get the vast majority of compiler improvements.
Many of the papers about this suggest that we would be better off focusing on quality-of-life improvements for the programmer (like better debugger integration) rather than on abstruse and esoteric compiler optimizations that make the generated code increasingly difficult to understand.
As a comment about a particular project and its goals and timelines, this is fine. As a general statement that we should never revisit things, it's pretty offensive. LLVM makes a lot of assumptions about the structure of your code and the way it's manipulated. If I were working on a language today, I would try my best to avoid it. The back ends are where most of the value is, and they're the only reason I might be tempted to use it.
We should be really happy that language evolution has started again. Language monoculture was really dreary and unproductive.
Twenty years ago you would have been called insane for throwing away all the man-years of optimization baked into Oracle (or, I guess, Postgres or MySQL if you were being low-rent). And look where we are today: thousands of people can build databases.
There's an instructive example on the page. Suppose a page creates the databases `a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p`, then queries their order. It might get back, for example, `g,c,p,a,l,f,n,d,j,b,o,h,e,m,i,k`, based on the global mapping of database names to UUIDs.
The key vulnerability here is that, for the lifetime of that Firefox process, any website that creates that set of databases is going to see the exact same output ordering, no matter what the contents of those databases are. That makes this a fingerprint: a stable, high-entropy identifier that persists across time, even if the contents of those databases are not preserved. It is shared even across origins (where the contents would not be), and it survives deletion of website data -- all a website has to do to re-acquire the fingerprint is recreate the databases with the same names and observe their ordering.
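For concreteness, here's a minimal sketch of that pattern, assuming a browser that implements the `indexedDB.databases()` enumeration API; the database names come from the example above, everything else is illustrative:

```typescript
// Sketch of the fingerprinting pattern described above (illustrative,
// not the article's exact code). Runs in a browser page context.
const NAMES = "abcdefghijklmnop".split(""); // a..p, as in the example

// Open (and thereby create) a database, then close it right away.
function createDb(name: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(name);
    req.onsuccess = () => { req.result.close(); resolve(); };
    req.onerror = () => reject(req.error);
  });
}

async function readOrdering(): Promise<string> {
  for (const name of NAMES) await createDb(name);
  // The order of this list is the leak: it's stable for the lifetime
  // of the browser process, and it's neither alphabetical nor
  // creation order.
  const infos = await indexedDB.databases();
  return infos.map(info => info.name ?? "").join(",");
}

readOrdering().then(order => console.log(order));
// e.g. "g,c,p,a,l,f,n,d,j,b,o,h,e,m,i,k"
```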
As I understood it, not just ANY website can see it. But the same website can see it regardless of whether you reset your identity in Tor Browser.
So it persists between anonymous sessions.
So you could connect User A, who logged out and reset their identity, to User B, who believed they were using a fresh anonymous session and logged in afterwards.
No, it does allow identification across different websites (the article says "both cross-origin and same-origin tracking"). Both websites just need to create some databases with the same names. Since the databases are origin-scoped, these aren't the same databases, so you can't just write some data into one and read it on another website. But it turns out that if two websites use the same names for all these databases, the order the list of databases is returned in is random-per-user but the same regardless of website.
It's the mapping of UUIDs to databases that is shared across origins in the browser. Only the subset of databases associated with an origin is exposed to that origin.
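A hypothetical follow-on to the earlier sketch: because the name-to-UUID mapping is global, two different origins that create the same set of names see the same order, so each can hash it into the same identifier. The function name and the hashing choice here are mine, not from the article:

```typescript
// Hypothetical: condensing the leaked ordering into a compact ID.
// Two origins that create the same database names get the same order
// back, and therefore derive the same hash -- the cross-origin link.
async function fingerprintFromOrder(order: string): Promise<string> {
  const bytes = new TextEncoder().encode(order);
  const digest = await crypto.subtle.digest("SHA-256", bytes); // secure contexts only
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}

// Usage, building on readOrdering() from the previous sketch:
// readOrdering().then(fingerprintFromOrder).then(id => console.log(id));
```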
That's incredibly stupid. Every part of the NSA, including its progenitor organization, went through congressional review and lawmaking. Incredibly stupid.
Maybe the NSA they are aware of is just the facade. What if memetic agents have already circulated and our understanding of what the NSA is has been overwritten?
Given how shitty it looks and behaves, I was 100% sure this was an April Fools' joke. But after reading the serious comments here on HN, I'm not sure anymore...
It's related to commits actually having a parent-child structure (forming a graph), while the timestamps (commit/author) are just metadata. So commits 1->2->3->4 could be modified to have timestamps 1->3->2->4. I know GitHub prefers sorting by author date over commit date, but I don't know how topology is handled.
> It's related to commits actually having a parent-child structure (forming a graph), while the timestamps (commit/author) are just metadata.
Yeah, I think everyone is aware. It's just that the last couple dozen commits looked to me like they had been created in chronological order, so that topological order == chronological order.
> I know GitHub prefers sorting by author date over commit date, but I don't know how topology is handled.
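To make the graph-vs-metadata distinction concrete, here is a toy sketch (invented data; explicitly not GitHub's actual sorting code) showing how date order and topological order diverge once timestamps are rewritten:

```typescript
// Toy model: topology says 1 -> 2 -> 3 -> 4, but the timestamps
// (plain metadata, freely rewritable) have been edited to 1,3,2,4.
interface Commit { id: number; parent: number | null; date: number; }

const commits: Commit[] = [
  { id: 1, parent: null, date: 1 },
  { id: 2, parent: 1,    date: 3 },
  { id: 3, parent: 2,    date: 2 },
  { id: 4, parent: 3,    date: 4 },
];

// A naive date sort interleaves the history:
const byDate = [...commits].sort((a, b) => a.date - b.date).map(c => c.id);
console.log(byDate); // [1, 3, 2, 4]

// Walking parent links back from the tip recovers the real ancestry:
function topoFromTip(tip: number): number[] {
  const byId = new Map(commits.map((c): [number, Commit] => [c.id, c]));
  const out: number[] = [];
  for (let c = byId.get(tip); c !== undefined;
       c = c.parent !== null ? byId.get(c.parent) : undefined) {
    out.unshift(c.id);
  }
  return out;
}
console.log(topoFromTip(4)); // [1, 2, 3, 4]
```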
> A much more effective counter to this would be to rebalance the information asymmetry by giving citizens the tools to coordinate against state-sponsored influence.