> In the enterprise space, if you mention globally reachable address space, the discussion tends to end pretty fast because "it's not secure".
Topic drift, but for younger people who didn't live it, that's how it used to be!
For most of the 90s my workstation in the office (at several employers) was directly on the Internet. There were no firewalls, no filtering of any kind. I ran my email server on my desktop workstation to receive all emails, both from "internal" (but there was no "internal" really, since every host was on the Internet) people and anyone in the world. I ran my web server on that same workstation, accessible to the whole Internet.
That was the norm, the Internet was completely peer to peer. Good times.
Very early in my career I'd take these vulnerability reports as a personal challenge and spend my day/evening proving it wasn't actually exploitable in our environment. And I was often totally correct, it wasn't.
But... I spent a bunch of hours on that. For each one.
These days we just fix every reported vulnerable library, turns out that is far less work. And at some point we'd upgrade anyway so might as well.
Only when it causes problems (incompatibilities, regressions) do we look at it, analyze exploitability, and make judgement calls. Over the last several years we've only had to do that for about 0.12% of the vulnerabilities we've handled.
JVM is fast for certain use cases but not for all use cases. It loads slowly, takes a while to warm up, generally needs a lot of memory and the runtime is large and idiosyncratic. You don't see lots of shared libraries, terminal applications or embedded programs written in Java, even though they are all technically possible.
The JVM has been extremely fast for a long, long time now. Even JavaScript is really fast, and if you really need performance there are also others in the same performance class, like C#, Rust, and Go.
Hot take, but: Performance hasn’t been a major factor in choosing C or C++ for almost two decades now.
I think it is the perception of performance rather than the actual performance, and also that C/C++ encroaches on "close to the metal" assembly for many applications. (E.g., when I think how much C moves the stack pointer around meaninglessly in my AVR-8 programs it drives me nuts, but AVR-8 has a hard limit and C programs are portable to the much faster ESP32 and ARM.)
A while back, when my son was playing chess, I wrote a chess engine in Python and then tried to make a better one in Java which could respect time control. It was not hard to make the main search routine work without allocating memory, but when I tried to do transposition tables with Java objects it made the engine slower, not faster. I could have implemented them with off-heap memory, but around that time my son switched from chess to guitar, so I started thinking about audio processing instead.
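For anyone curious, the usual way around the per-object overhead (without going fully off-heap) is to back the transposition table with primitive arrays so the GC never sees individual entries. This is a minimal sketch of that technique, not the engine described above; the class, method names, and packing scheme are illustrative:

```java
// Transposition table backed by primitive long arrays instead of
// per-entry objects: zero allocation per probe/store, nothing for
// the GC to trace. Size must be a power of two so we can mask the
// Zobrist hash into an index.
final class TranspositionTable {
    private final long[] keys;   // Zobrist hash of the stored position
    private final long[] values; // packed entry: score, depth, flags
    private final int mask;

    TranspositionTable(int sizePowerOfTwo) {
        keys = new long[sizePowerOfTwo];
        values = new long[sizePowerOfTwo];
        mask = sizePowerOfTwo - 1;
    }

    void put(long zobristKey, long packedEntry) {
        int idx = (int) (zobristKey & mask); // always-replace scheme
        keys[idx] = zobristKey;
        values[idx] = packedEntry;
    }

    /** Returns the packed entry, or 0 if the slot holds a different position. */
    long get(long zobristKey) {
        int idx = (int) (zobristKey & mask);
        return keys[idx] == zobristKey ? values[idx] : 0L;
    }
}
```

The same layout can be moved off-heap with `ByteBuffer.allocateDirect` if you also want the table excluded from heap sizing, but flat primitive arrays already remove the allocation and pointer-chasing costs that object-per-entry tables pay.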
The Rust vs Java comparison is also pointed. I was excited about Rust the same way I was excited about Cyclone when it came out, but seeing people struggle with async is painful for me to watch and makes it look like the whole idea doesn't really work once you get away from what you can do with stack allocation. People think they can't live with Java's GC pauses.
> The problem is that there's no overt way to tell whether the "car" (code) you're looking at is someone's experimental go-kart made by lashing a motor to a few boards, or a well tested and security analyzed commercial product, without explicitly doing those processes on your own.
Yes you can, companies just don't like the answer.
To run with that analogy, if you are setting up that taxi company, will you build your fleet by picking up free gokarts around the neighborhood, or by purchasing cars from a known manufacturer who has gone through crash testing etc?
Not particularly different for software. If you need certified quality, you need to pay the providers fairly substantial amounts of money for that.
You would note that known manufacturers only sell non-repairable insecure spyware on wheels and instead pick up the libre gokart designs the groups of neighborhood kids made, build a few of them, try them out, figure out all the safety/repairability/design flaws, fix those, publish your fixes (either back to the kids or in forks), hire some of the kids, start selling services, share some of your profits to the kids whose designs you chose, and otherwise help the community around the original designs etc.
> That means being honest about when a pet project is just a pet project rather than talking about every POC as if it’s production ready.
And who isn't honest about it? Read the contract you have with the provider.
There is a way to legitimately expect production-ready libraries: You sign a purchase order for the right to use that code for a year (typically, or multi-year) and pay a quite substantial amount of money for that. Then you have purchased the right to expect a certain level of quality (details can be in the contract and reflected in the price).
If you're using something for free without having agreed to such a contract and paid the vendor accordingly, then you can expect exactly as much as you paid for it.
You’re twisting my argument. I’m not saying maintainers are obligated to make their code production ready. I said their READMEs should accurately represent the state of the project.
If you, or anyone else, thinks that is an unfair assessment, or that I should have to pay for a README not to claim to be production ready when it's a POC, then you have a very weird view of how much effort it takes to write the line "this is an untested beta".
> The state of expectations is usually in the LICENSE file, not in the README.
No it’s not. LICENSE tells you what you can do with the code. It doesn’t tell you the state of code.
Again, I need to reiterate my point that I’m talking about whether the code is beta, tested, etc. It costs nothing for a maintainer to specify how complete a code base is.
It’s then up to the consumers of that package to decide if they want to use it, contribute back or just fork it.
All I'm saying is too many GitHub repos are written for CVs, as if they're meant to be consumed by Google. If something is a weekend project then just be honest and say "this is a weekend project written to solve my specific use case, but PRs are welcome". That's better than having long blurbs that refer to the solo developer as "we" and all the other BS people stick into their READMEs to try and make their project sound better than it actually is.
All I’m asking for is a little more pragmatism and honesty in READMEs. It’s no extra effort. It’s no extra cost. And I shouldn’t have to donate to projects just to ensure they don’t lie to me.
It is interesting how every time this topic comes up, people end up for the most part in one of two camps: those who believe the license is meant to be taken literally, and those who believe otherwise.
Of all the files in a repo, the LICENSE file is the one most important to take literally since it is a legal document.
So when it says something like (this from MIT license):
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
That is actually exactly what it means. There is specifically not even an implied suggestion that the software is suitable for your use.
If you want any kind of guarantee of the code being of any use to you, sign a contract and pay for it.
Unless you pay for that contract, nobody owes you any kind of hint as to whether the code is useful for anything. It very clearly says so in the LICENSE file.
It really isn’t much to ask people not to bullshit in their own README. That’s literally all I’m asking for. If you don’t want to offer software guarantees then don’t write your README like you offer it. It’s really that simple.
And your comment that I should pay every…single…maintainer of every…single…project on GitHub, just for them to disclose whether or not their project is experimental… well that’s just insane and completely misses the point of open source.
If we are talking about businesses relying on open source libraries then that would be a different matter. But not every fscking thing being built is a VC-backed startup.
I say all of this as an open source maintainer. Just be honest in your READMEs.
A little honesty in the README costs nothing and we should be expecting more of it. And suggesting we lock that honesty behind a paywall is possibly the worst idea for open source imaginable. That simply isn’t the right way to monetise open source.
Edit: just to add, even if we were talking about software quality (which we weren't), paying for software doesn't guarantee you a better product. I could name a multitude of commercial solutions that I gave up on because the open source alternative was at least equivalent, and often even superior. And that's before we talk about the enshittification phenomenon.
Edit 2: sorry for all the edits. I should have just waited until I had proper time to reply calmly rather than commenting while doing chores and stuff around the house. My bad.
I'm not suggesting honesty should be behind a paywall. Honesty should be upfront, and it already is, in the license.
The license (most licenses, anyway) clearly states the software is not necessarily suitable for any purpose, and that's all you get.
If you require a higher level of confidence than none, then you should be paying for a support contract.
All I can say without being repetitive is that if you expect more than nothing when the license specifically says all you can expect is nothing, you might be disappointed more often than not.
This mindset of yours is a relatively recent development. Open source never used to be like that. And in fact, if it had operated the way you claim, the open source ecosystem would never have grown into even remotely the size it is today.
And what you describe is absolutely not how any of the more reputable libraries or maintainers approach open source software.
So you might feel like you are legally correct. But you’re still completely missing the point.
> This mindset of yours is a relatively recent development.
I published my first open source (we didn't call it that) program in the 1980s.
I didn't know anything about licenses back then (and the open source licenses we use today didn't yet exist), so I just put a header comment on it saying something along the lines of "This code is public domain, feel free to try it but I can't guarantee it'll work for you." (although I can't remember exact wording). This is how it's always been.
> But you’re still completely missing the point.
I suggest considering that your disappointment that open source libraries aren't providing the guarantees you feel they owe you (which they don't) is an indicator that perhaps your expectations are not in line with what open source is all about.
> Hopefully, this will finally lead to a shift in thinking so that security practices like those used in GrapheneOS become more widespread in the future. Most software developers simply patch security vulnerabilities as soon as they become aware of them rather than taking preventive measures where possible.
Working in this space, I'm worried the future is even more reactive than today. Today I can get teams to review for security the architecture before implementing, and to review the implementation for security before shipping.
As teams move (more specifically, are forced to move) to vibe coding the whole thing, nobody knows what the design is or what the implementation looks like. Vibe all the way because the CEO says so or you're fired. This means the only place to catch vulnerabilities becomes after the fact, which is usually too late.