Hacker News | new | past | comments | ask | show | jobs | submit | ragnese's comments

Yep. I've learned that lesson more than once. Maybe one of these days it'll stick... :p

Specifically, I'm not a "designer", but I regularly end up making/changing UIs (mobile apps, web apps/pages, etc). When it comes to design, it really matters who the target audience is.

If you're creating a UI for "mass market", you have to design to a lowest-common-denominator that balances what your average user expects, generally, from UI/UX, and the more you ask them to "invest", the worse you're going to do. On the other hand, if you're making a tool for a B2B (business-to-business) product, you have more freedom to set baseline expectations of what the end user needs to be able to do and understand. You can also expose more powerful options, etc. You can sometimes end up going in very different directions. Even error handling and logging can sometimes be handled differently, depending on the context.


> you have to design to a lowest-common-denominator that balances

Something that's always stuck with me is a bit from the book "Don't Make Me Think" about cost vs. value in attention and complexity: when you add a feature used by only a small percentage of users, you're "taxing" 100% of users for the benefit of a few, so you should optimize for the common path and not the edge cases. Over two decades later, I still find this an exceedingly difficult balance to strike, and one that isn't solved just by hiding advanced features behind extra menus.


Menus, gestures, keyboard shortcuts, advanced versions of the interface locked behind preferences, and fully customizable menus (including user-defined macro buttons). There are a ton of ways to hide the complexity for common users without frustrating power users. The challenge for the designer is the taste/judgement to know which features to show/hide and where (as well as how to organize all the menus logically).

Yes! That's exactly how you should do it when working with a language whose compiler won't aggressively analyze, rewrite, and optimize your code for you. (So, most languages with "heavy runtimes" that support a bunch of dynamic features and rely on JITs.)

There are basically two reasons to program with immutable-first data. One, it eliminates certain classes of data-race concurrency bugs. Two, less mutable state in a given context makes the code easier to reason about.

So, if you're inside a function scope and you aren't launching any concurrent operations from inside that function, you don't have to worry about benefit #1. If you're inside a function (and you're not reaching out for global mutable state), then the context you need to keep in your working memory is likely fairly small, so a few local mutable variables don't significantly harm the "understandability" of the implementation (in most cases). So, you really don't have to worry about #2, either. Make your functions black boxes with solid "APIs" (type signatures), and let the inside do whatever it needs to make it work the best.
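As an illustrative sketch of that "black box with a solid API" idea (Python here as a neutral analog; the function name and shapes are made up), the signature is immutable-in/immutable-out while the body uses local mutation freely:

```python
def running_totals(values: tuple[int, ...]) -> tuple[int, ...]:
    """Pure from the caller's perspective: immutable tuple in, immutable tuple out."""
    acc = 0
    out = []  # local mutable state; it never escapes this scope
    for v in values:
        acc += v
        out.append(acc)
    return tuple(out)  # freeze before handing the result back
```

No caller can observe the mutation, so none of the usual immutability arguments are weakened by it.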

Just because premature optimization is the root of all evil, it doesn't mean we need to jump right to premature pessimization...


Yeah, and even if you need concurrency/parallelism within the function, it can be forgivable to use ConcurrentDictionary or ConcurrentBag or one of the many, many other thread safe mutable data structures built directly into .NET.

I will personally almost always prefer the pretty functional versions of things, and that's almost always what I start with. I like immutable data structures, and they are usually more than fast enough. Occasionally, though, you hit a bottleneck of some kind (usually in some form of loop), and you have to avoid all the beautiful functional stuff and go back to sad imperative stuff. When I do that, I usually try and keep it scoped to one function. Even within one function, I do find the persistent structures easier to reason about, but as you stated it's a small enough surface area to not be too irritating.

There are exceptions to this, of course. Sometimes for caching/memoizing I will make a global ConcurrentDictionary, and sometimes I'll use Interlocked for global counters.


I find algo performance is a consideration, but so is overall system performance, especially in the face of concurrency, staleness, update rate, data-processing size, consistency of data, etc. I think persistent collections are just another tool that is sometimes appropriate, and they have saved me over the standard Concurrent collections in some interesting cases. There are significantly faster immutable collection libraries available online than the standard F# Map class, if I recall from a while back, though still not quite mutable perf. They tend to be appropriate for almost the opposite case from a single thread in a tight loop, which is the usual benchmark, I guess. As usual, YMMV / depends on the problem at hand.

I haven't been able to fully justify it (and sadly I don't get paid for F# anymore :( ), but there was a competent port of the Scala CTrie structure available [1].

My local benchmarks got pretty decent performance, and often a bit better than the regular built-in concurrent structures, and any excuse to get rid of locks is generally a good excuse in my mind, but it was hard for me to push it when ConcurrentDictionary was fast enough and built in and maintained by a trillion dollar company.

[1] https://github.com/chrisvanderpennen/ctrie


Searched online for it - there is this one https://github.com/fsprojects/fsharp-hashcollections. YMMV.

Since it's obviously written in a casual, conversational tone, we should not expect the language to be perfectly precise. So, given that, and the fact that the author felt the need to call out "vibe coding" or AI at all, and then doubled down by adding the almost-redundant "classic development style", I would be willing to bet they did not use any AI for anything at all related to this project.

Yes, what Postel's Law is about. That's the whole point of contrasting it with Hyrum's Law, no?

Hyrum's Law points out that sometimes a new field is a breaking change in the liberal scenario as well: if clients were already sending the field and you used to ignore it, and now you don't, those clients will see a change in behavior. At least by being strict (not accepting empty arrays, extra fields, empty strings, coercible-but-incorrect types, etc.), you know that expanding the domain of valid inputs won't conflict with some unexpected-but-previously-papered-over stuff that current clients are sending.
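To make the "be strict" side concrete, here's a hypothetical sketch (Python; the field names and schema are invented for illustration) that rejects anything outside the documented input domain, so later widening can't collide with what clients already send:

```python
def parse_request(payload: dict) -> dict:
    """Strict parse: reject unknown fields, missing fields, and wrong types up front."""
    allowed = {"name": str, "count": int}  # hypothetical documented schema

    extra = set(payload) - set(allowed)
    if extra:
        raise ValueError(f"unexpected fields: {sorted(extra)}")

    out = {}
    for field, expected_type in allowed.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        if not isinstance(payload[field], expected_type):
            raise ValueError(f"{field} must be {expected_type.__name__}")
        out[field] = payload[field]
    return out
```

Because extra fields are errors today, adding a new optional field tomorrow is a genuine widening: no current client can be silently relying on it being ignored.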


I don't think that interpretation makes that much sense. Isn't it a bit too... obvious that you shouldn't just crash and/or corrupt data on invalid input? If the law were essentially "Don't crash or corrupt data on invalid input", it would seem to me that an even better law would be: "Don't crash or corrupt data." Surely there aren't too many situations where we'd want to avoid crashing because of bad input, but we'd be totally fine crashing or corrupting data for some other (expected) reason.

So, I think not crashing because of invalid input is probably too obvious to be a "law" bearing someone's name. IMO, it must be asserting that we should try our best to do what the user/client means so that they aren't frustrated by having to be perfect.


I actually don't think it's that obvious at all (unless you are a senior engineer). It's like the classic joke:

A QA engineer walks into a bar and orders a beer. She orders 2 beers.

She orders 0 beers.

She orders -1 beers.

She orders a lizard.

She orders a NULLPTR.

She tries to leave without paying.

Satisfied, she declares the bar ready for business. The first customer comes in and orders a beer. They finish their drink, and then ask where the bathroom is.

The bar explodes.

It's usually not obvious when starting to write an API just how malformed the data could be. It's kind of a subconscious bias to sort of assume that the input is going to be well-formed, or at least malformed in predictable ways.

I think the cure for this is another "law"/maxim: "Parse, don't validate." The first step in handling external input is to try to squeeze it into as strict a structure as possible, with as many invariants as possible, and to return an error if that fails.

It's not about perfection, but it is predictable.
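Riffing on the bar joke above, a "parse, don't validate" boundary might look like this (a Python sketch; the `BeerOrder` type is invented for illustration): the parser either produces a value whose invariants are guaranteed, or fails right at the edge.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BeerOrder:
    quantity: int  # invariant: quantity >= 1, enforced at construction time only

def parse_order(raw: object) -> BeerOrder:
    """Parse, don't validate: downstream code never sees -1, a lizard, or a null."""
    # bool is a subclass of int in Python, so exclude it explicitly
    if not isinstance(raw, int) or isinstance(raw, bool):
        raise ValueError("quantity must be an integer")
    if raw < 1:
        raise ValueError("quantity must be at least 1")
    return BeerOrder(quantity=raw)
```

Everything past this point can assume a well-formed `BeerOrder`, which is exactly the predictability the maxim buys you.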


Hmm. Fair point. It's entirely possible that it's not obvious and that the "law" is almost a "reminder" of sorts to not assume you're getting well-formed inputs.

I'm still skeptical that this is the case with Postel's Law, but I do see that it's possible to read it that way. I guess I could always go do some research to prove it one way or the other, but... nah.

And yes, "Parse, don't validate." is one of my absolute favorite maxims (good word choice, by the way; I would've struggled to choose a word for it here).


Right, even for senior engineers this can be hard to get right in practice. "Parse, don't validate" is certainly one approach to the problem. Choosing languages that force you to get it right is another.


This is also, IMO, a problem with having this optimization in PHP. Anonymous functions are instances of the `Closure` class, which means the `===` operator should return false for `foo() === foo()`, just like it would for `new MyClass() === new MyClass()`.

But, since when has PHP ever prioritized correctness or consistency over trivial convenience? (I know it's anti-cool these days to hate on PHP, but I work with PHP all the time and it's still a terrible language even in 2026)
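The identity-comparison point translates to other languages too. Here's a Python analog (using `is`, Python's rough counterpart to PHP's `===` on objects): a factory that builds a closure returns a distinct object on every call, even when the closures behave identically.

```python
def make_adder(n):
    # A fresh closure object is constructed on every call,
    # just like `function () use ($n) { ... }` in PHP constructs a new Closure.
    return lambda x: x + n

a = make_adder(1)
b = make_adder(1)

distinct = a is not b            # two separate objects, like two `new MyClass()`
equivalent = a(2) == b(2)        # yet behaviorally identical
```

The caching optimization being discussed would make the two results be the *same* object, which is exactly the observable semantic change the comment objects to.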


I never understood why people somehow think PHP is fine now, and I've had that opinion expressed to me several times on HN. The best I can make out is that people's expectations are so dismal now that they're like "Well, new versions fixed 2 of the 5 worst problems I noticed, so that's good, right?"


Because PHP is an amazing backend language for making CRUD apps. Always has been.

It has great web frameworks, a good gradual typing story and is the easiest language to deploy.

You can start with simple shared hosting, copy your files into the server and you are done. No docker, nothing.

Sure, it has warts, but so do all mainstream programming languages. I find it more pleasant than TypeScript, which suffers from long compile times and a crazy complex type system.

The only downside is that PHP as a job means lots of legacy code. It's a solid career, but you will rarely if ever have interesting programming projects.


It’s a “terrible” language? That’s news to me. What’s “terrible” about it?

> `new MyClass() === new MyClass()`

Does that look like the code you’re writing for some reason? Because I’ve seen 100k loc enterprise PHP apps that not once ran into that as an issue. The closest would be entities in an ORM for which there were other features for this anyway.


It's bad indeed. It's unfixable at this point. We just get bolt-on features.


We could do something like `#function() {}` or `#() => {}` which makes a function static.


I'm especially angry that if you go to reddit.com in a mobile browser, it will sometimes fully block you from certain subreddits (not just NSFW ones) and tell you that you can only access it from the app. Meanwhile, you can easily visit the exact same subreddit by typing old.reddit.com/r/whatever. The outright lying bothers me so much. I refuse to be desensitized to lying just because everyone is lying all the time; it's still really wrong, and they really should be ashamed of themselves.


Reddit's browser behavior got me into using frontends for various sites, such as redlib dot privacyredirect dot com.

There are surprisingly many of them, for pretty much every social media website.


When you say "meme", it sounds like it might not be true. But, a few years ago I handed my stepson a USB flash drive with some files on it, he plugged it into his laptop and the very first thing he did was launch Google Chrome and then not have any clue what to do to access the files (it was a Windows laptop).


One of the most enraging things about life since 2005-ish is that no matter how private and careful I am, it doesn't even matter because every other inconsiderate fool I know and interact with will HAPPILY let some random company have access to THEIR contacts--which includes me--in order to play Farmville for a month until they get bored of that and offer up my private information to the next bullshit ad company that asks for their contacts.

It used to frustrate me that people didn't care about their own privacy, because I genuinely didn't want evil people to hurt them. But, it's even more angering that people don't have the common decency to consider whether their friends and family would want them sharing their phone numbers, email addresses, photos of them, etc.


Famously, that's how shadow profiles got created for Facebook and LinkedIn and many others.


Or add your real name to photos of you stored in Google Photos.


Yep. If someone is trying to make you do something, or stop doing something, or buy something, your first question should always be "Why?".

Why would someone try to force me off of my browser (that has ad-blocking and tracker-blocking mitigations) and on to a locked-down app that may want permission to run in the background, display notifications, access my files or camera, etc?

Maybe it really is to "improve my experience"... yeah, right.

