Hacker News: biophysboy's comments

pulling back as in setting more realistic token budgets, or something more drastic? I'm curious

Stopped using them altogether in the context of productivity - in essence they’re useless.

I can believe that. Gambler’s Ruin gets costly when you’ve actually got money on the line.

Entry ticket, basically. I went to Olympic National Park recently and had the pass added to my Apple Wallet.

What's the difference here between adding the "custom pass" and just taking a photo of QR code? Just the fact that it is stored in Wallet instead of photos folder?

It's stored in Wallet so you can access it through the Wallet shortcut (double-press the power button); when you open it, the screen automatically brightens, and it's a perfectly clear QR code rather than a photo, so it's easier to scan.

I don't really think of it as an app; I think of it as the "double-tap side button to do tap to pay or present my ticket" iPhone feature

Coming from a bio background, I’ve always been confused why auto fatality stats are normalized per mile driven. Epidemiological metrics like incidence or prevalence seem like they would work fine? Town A would be “safer” than Town B if people’s commutes are 20% shorter, even if accidents occur with the same frequency.

Because it yields a simple corollary: to make travelling safer, you can reduce the number of miles driven. Mostly by giving people viable alternatives to driving, be it long-distance rail or bike lanes to get around the city more quickly and safely.

Pretty sure I've seen exposure-adjusted incidence rates used in clinical trials.

Miles is simply a proxy for exposure.

Given that risk here varies with exposure time and trip lengths vary so much, it seems reasonable to use - at least in combination with crude rates.


Fair point - a combination might be the best approach. I understand the idea of accidents correlating with miles driven, but it seems to optimize for driving safety rather than human life? Does that make sense?

What are some other better ways to normalize?

Other common approaches:

1. Per capita

2. Per registered vehicle

3. Per trip

All of these have upsides and downsides (as does “per vehicle km”), and all will paint different pictures with different distortions.
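To make the trade-offs concrete, here is a minimal sketch comparing the denominators above side by side. All figures are invented purely for illustration; the function and variable names are my own, not from any standard library:

```python
# Illustrative comparison of fatality-rate denominators.
# Every number below is made up for demonstration, not a real statistic.

def rate(fatalities: float, denominator: float, per: float) -> float:
    """Generic incidence rate: events per `per` units of exposure."""
    return fatalities / denominator * per

# Hypothetical town: 100 fatalities in one year.
fatalities = 100
population = 1_000_000           # residents
vehicle_miles = 900_000_000      # vehicle-miles driven
registered_vehicles = 600_000
trips = 400_000_000

per_100k_capita = rate(fatalities, population, 100_000)
per_100m_miles = rate(fatalities, vehicle_miles, 100_000_000)
per_10k_vehicles = rate(fatalities, registered_vehicles, 10_000)
per_100m_trips = rate(fatalities, trips, 100_000_000)

print(f"per 100k residents:          {per_100k_capita:.2f}")
print(f"per 100M vehicle-miles:      {per_100m_miles:.2f}")
print(f"per 10k registered vehicles: {per_10k_vehicles:.2f}")
print(f"per 100M trips:              {per_100m_trips:.2f}")
```

The same 100 fatalities produce very different-looking rates depending on the denominator, which is the distortion being discussed: a town where people drive fewer miles can look worse per mile yet better per capita.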


Honestly, just per 1M people per year. If this crude incidence went up while the exposure-adjusted rate went down over 20 years, I'd want to know.

Per trip?

A better time range would be the average species lifespan of the plants and animals we eat. Too short a range highlights noise; too long a range highlights unrelated data.

I finished a PhD. My concrete advice is to focus on methods you know are feasible given your lab's resources (time, money, etc.).

Along with parsing various file formats, you can create DuckDB files to store tables and make related views, schemas, etc. They also have a newer DuckLake tool.

I think a good corollary to "vibe coding" is the "vibe product". There is so much stuff popping in and out of existence that my excitement has declined.


Is this book just riffing about embedding space? I thought about reading it eventually, but the quoted passage is kind of annoying/tedious


No, it really just gets like that at the end, which is what this thread has been going over.


Tech still broadly respects edgy, hot take contrarianism, even if they think Andreessen is stupid in this instance.

