>what I think has drastically changed over the past 40 years or so is the ability of a solopreneur to make real money. Just look at all the posts on HN asking about how much people make on their side gigs.
Is someone making money on a side gig really a solopreneur? By definition a side gig is not something you expect to make significant money on. If a side gig starts generating significant money, you would probably make it your primary gig.
Since the suggestion is that new security-bug-finding LLMs will increase protection because they will have access to the full source code, the dark forest fear would be that, if an attacker can get all the source, the attacker will be in a better position.
This seems wrong, however, as it ignores the arrow of time. The full source code has been scanned, and anything LLMs can find fixed, before hitting production; anyone exfiltrating your codebase can only use their models to find holes in what is exposed in production and that your models for some reason did not catch.
I don't think there is any reason to suppose non-nation-state actors will have better models available to them, so it is not a dark forest. Nation states will probably limit their attacks to specific targets, so most companies that secure their codebase with LLMs built for the purpose will probably be in a significantly more secure position than today, and, I would think, the golden age of criminal hacking is drawing to a close. This assumes companies are smart enough to do this, however.
Furthermore, the worry about nation-state attackers still assumes that they will have better models, and I'm not sure that is likely either.
Any single company might be able to proactively defend themselves from attackers, but will companies invest the tokens in this? Most people simply don't care until it's too late.
And in a world where companies begin to suffer from attacks as a result: can the ones who are willing to invest in security defend themselves, not just against cyberattackers, but against a broader investor and customer backlash from those who believe startups that build their own technology stacks are riskier because of perceptions about cybersecurity?
An angel investor or LP who sees news articles and media coverage about cyberattacks, then has a portfolio company get hacked in a material way, may simply decide the space has become too risky for further investment, no matter how much prospects improve their security footing.
The dark forest hypothesis, at its core, is about the decision of whether to put your neck out in the universe; if the weapons and countermeasures in use are too horrifying to fathom, the risks unquantifiable, one chooses not to extend one's neck. And that is how an industry begins to dry up.
The pressure from internal auditors and cyber insurance providers to implement these programs will be strong. I have been at organizations where EDR was added only because the board of directors followed the recommendation of third parties. Of course, there will be new companies that haven't reached the maturity to face these pressures. But new companies being thoroughly compromised is hardly a recent phenomenon.
I guess the connection would be human history: a dark forest is a scene of lawlessness, violence, and danger in much of that history - at least where stories are concerned.
The use of the phrase Dark Forest to explain the Fermi paradox suggests that alien civilizations have kept themselves dark out of fear that the rest of the forest is actually lawless and violent.
In this case, though, we are entering a dark forest, like Hansel and Gretel, supposedly defenseless against the monsters that lurk in there. But really - they weren't that defenseless, were they? I don't think the phrase is that apt.
> The use of the phrase Dark Forest to explain the Fermi paradox suggests that alien civilizations have kept themselves dark out of fear that the rest of the forest is actually lawless and violent.
It's more complicated.
For the Fermi paradox version of the 'Dark Forest' to work, you need civilisations to actively go out and destroy any other form of life they find announcing themselves:
> The "dark forest" hypothesis presumes that any space-faring civilization would view any other intelligent life such as theirs as an inevitable threat and thus destroy any nascent life that makes itself known. As a result, the electromagnetic radiation surveys would not find evidence of intelligent alien life.
Wikipedia has a section on game theory etc.
Without this additional element (basically the version you describe), the dark forest theory doesn't explain the Fermi Paradox: it's just another filter that might exclude, say, 90% of civilisations, but many civilisations would still be dumb enough to announce themselves. Humans certainly did and keep doing so: it only takes a few people to send a message, but near unanimity to send nothing.
(And that's completely ignoring that our very atmosphere with its chemical imbalance has been sending a strong message of "there's probably life here" for billions of years now. Even our own technology, still in its infancy, is increasingly able to pick up clues about the chemical composition of the atmosphere of exoplanets ever further away from us. And we are still getting better quickly.)
If you add the element that other civilisations are hiding, but come out of hiding just to strike, that breaks down as soon as you have more than two players. Or even just the faint possibility of more than two players.
When you know there are at most exactly two players, and you are the lurker and find someone else being 'noisy': yes, you have an incentive to strike. When there might be other third parties lurking, you had better stay quiet, lest you invite a strike by a third party against you.
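That two-versus-three-player asymmetry can be sketched as a toy expected-payoff calculation. To be clear, all the payoffs and probabilities below are illustrative numbers I made up, not anything from the actual game-theory analysis:

```python
# Toy model of the "strike or stay quiet" decision in a dark forest.
# Assumption: striking a noisy civilization yields a fixed gain (the
# known threat is removed) but reveals your position; if a third
# lurker exists, it then strikes you, costing far more than you gained.

def strike_payoff(p_third_lurker: float,
                  gain_from_strike: float = 1.0,
                  loss_if_struck: float = 10.0) -> float:
    """Expected payoff of striking, given the probability that an
    unseen third party is lurking and will retaliate once you reveal
    yourself by firing."""
    return gain_from_strike - p_third_lurker * loss_if_struck

# Exactly two players (no chance of a third): striking is free upside.
assert strike_payoff(0.0) > 0   # expected payoff 1.0 -> strike

# Even a modest chance of an unseen third party flips the decision.
assert strike_payoff(0.2) < 0   # expected payoff -1.0 -> stay quiet
```

The point the numbers illustrate: the strike-on-sight equilibrium only survives if revealing yourself by striking is costless, which is exactly what more than two players takes away.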
I'm not sure I understand the benefit of compiling to both JavaScript and WASM, since I would normally expect to use both in the same environment. Probably I haven't considered something, or I'm overly tired at the moment, so asking sincerely.
but they have already, rhetorically, dealt with anyone that might come with some sort of context that does not agree with their conclusion:
>There’s a particular kind of person who can’t accept that story at face value, and you’ve met them. I am absolutely sure of it. They show up in every comment section and reply thread where someone powerful does something that looks, on its face, like a mistake - and their argument always runs the same way: you don’t understand, this is actually part of a larger plan, there’s a strategy here that you and I can’t see because we’re not operating at that annointed and elevated level…
Which is, of course, one of the things you have to do when shooting some bullshit in order to get to your next level of argument: you have to deploy arguments as to why the people who might show up to say "hey, that's bullshit" are actually the stupid people talking the bullshit, and not you.
As an example of the genre it's pretty tepid. They manage the "I'm telling you the truth" part and the talking-down part of the message, but I personally find the best of this genre always includes a pithy little witticism that is just so bitchy and deliciously mean that nobody wants to make the bullshit accusation. At least that's my recommendation!
I give it a C+/B- for effort.
on edit: I of course mean what the original article did, in making its flippant comments, not what Arainach did.
The larger-plan people are not the historians; IMHO that's clearly a description of people who have spent a bit too much time reading about conspiracy theories (and are generally too partisan).