> The important piece of information is "is this brand good, right now, when I'm looking to make a purchase."
Right, which is the very thing that makes branding less than useful. You have to research everything before every purchase regardless of the brand precisely because the brand is no longer a good indicator of quality. That means that the brand doesn't mean much. Just because a brand signified high-quality goods yesterday doesn't mean it signifies the same today.
If this isn't a joke, I'd be very interested in the reasoning behind that statement, and whether or not there are some qualifications on when it applies.
humans are very good at overlooking edge cases, off-by-one errors, etc.
so if you generate test data randomly you have a higher chance of "accidentally" running into overlooked edge cases
you could say there is an "adding more randomness -> cost" ladder, like
- no randomness, no cost, nothing gained
- a bit of randomness, very small cost, very rarely beneficial (<- doable in unit tests)
- (limited) prop testing, high cost (the test runs multiple times with many random values), decent chance to find incorrectly handled edge cases (<- can be barely doable in unit tests, if limited enough; often feature-gated as too expensive)
- (full) prop testing/fuzzing, very very high cost, very high chance incorrectly handled edge cases are found IFF the domain isn't too large (<- a full test run might need days to complete)
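The "a bit of randomness, very small cost" rung of that ladder can be sketched as an ordinary unit test that draws a handful of random inputs per run. A minimal Python sketch, assuming a hypothetical `add()` function under test:

```python
import random

def add(x, y):
    """Hypothetical function under test."""
    return x + y

def test_add_commutative():
    # "A bit of randomness" rung: a few random samples per run is
    # cheap enough for a unit test, and over many CI runs it can
    # stumble onto edge cases a hand-picked fixture would miss.
    for _ in range(10):
        x = random.randint(-10**6, 10**6)
        y = random.randint(-10**6, 10**6)
        assert add(x, y) == add(y, x), f"failed for x={x}, y={y}"

test_add_commutative()
```

The loop count is the knob: raise it toward hundreds or thousands of cases and you climb toward the "(limited) prop testing" rung, with the cost growing accordingly.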
I've learnt that if a test only fails sometimes, it can take a long time for somebody to actually investigate the cause; in the meantime it's written off as just another flaky test. If there really is a bug, it will probably surface in production sooner than it gets fixed.
Flaky tests are a very strong signal of a bug, somewhere. Problem is it's not always easy to tell if the bug's in the test or in the code under test. The developer who would rather re-run the test to make it pass than investigate probably thinks it's the test which is buggy.
people often take flaky tests way less seriously than they should
I had multiple bigger production issues which had been caught by tests >1 month before they happened in production, but were written off as flaky tests (ironically this was also not related to any random test data but more to load/race-condition things, which failed when too many tests that created full separate tenants for isolation happened to run at the same time).
And in some CI environments flaky tests are too painful, so using "actual" random data isn't viable and a fixed seed has to be used on CI (that is, if you can, because too many libs/tools/etc. do not allow that). At least for "merge approval" runs. That many CI systems suck badly the moment your project and team size aren't around the size of a toy project doesn't help either.
Can't one get randomness and determinism at the same time? Randomly generate the data, but do so when building the test, not when running the test. This way something that fails will consistently fail, but you also have better chances of finding the missed edge cases that humans would overlook. Seeded randomness might also be great, as it is far cleaner to generate and expand/update/redo, but still deterministic when it comes time to debug an issue.
Most test frameworks I have seen that support non-determinism in some way print the random seed at the start of the run, and let you specify the seed when you run the tests yourself. It's a good practice for precisely the reasons you wrote.
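That seed-printing pattern is easy to replicate by hand if your framework doesn't do it for you. A minimal sketch, assuming a hypothetical `TEST_SEED` environment variable as the override mechanism:

```python
import os
import random

# Print the seed at the start of the run; reproduce a failure later
# by exporting TEST_SEED before re-running the suite.
seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
print(f"test seed: {seed}  (rerun with TEST_SEED={seed} to reproduce)")
rng = random.Random(seed)  # one seeded generator shared by the tests

def test_sort_is_idempotent():
    # Random data, but fully determined by the printed seed.
    data = [rng.randint(0, 100) for _ in range(20)]
    once = sorted(data)
    assert sorted(once) == once

test_sort_is_idempotent()
```

This gives you both halves of the trade-off the parent comment describes: genuinely random data on normal runs, exact replay when a run fails.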
Absolutely for things like (pseudo) random-number streams.
Some tests can be at the mercy of details that are hard to control, e.g. thread scheduling, thermal-based CPU throttling, or memory pressure from other activity on the system
There's another good reason that hasn't been detailed in the comments so far: expressing intent.
A test should communicate its reason for testing the subject, and when an input is generated or random, it clearly communicates that this test doesn't care about the specific _value_ of that input, it's focussed on something else.
This has other beneficial effects on test suites, especially as they change over the lifetime of their subjects:
* keeping test data isolated, avoiding coupling across tests
* avoiding magic strings
* and as mentioned in this thread, any "flakiness" is probably a signal of an edge-case that should be handled deterministically
and
* it's more fun [1]
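The intent-signaling point can be made concrete with a toy example. A sketch, with hypothetical `make_name()` and `greet()` helpers:

```python
import random
import string

def make_name():
    # Generated value: signals that the test does not care *which*
    # name is used, only that some name is present.
    return "user-" + "".join(random.choices(string.ascii_lowercase, k=8))

def greet(name):
    """Hypothetical function under test."""
    return f"Hello, {name}!"

def test_greet_echoes_name():
    name = make_name()          # any name will do; the value is irrelevant
    assert name in greet(name)  # the test's actual focus

test_greet_echoes_name()
```

A hard-coded `"alice"` here would invite the reader to wonder whether "alice" is special; the generated value says plainly that it isn't.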
If it was math_multiply(), then adding the jitter would fail - that would have to be multiplied in.
Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
Damn, must be why only white hair is growing on my head now.
>Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
So the concept of randomness is still there but expressed differently? (= Am I partially right?)
Yes, the randomness is still there but less manually specified by the developer. But I haven't actually used it myself, only seen stuff on it before, so I had the wrong term: it's "property-based testing" you want to look for.
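The "this relation must hold true" shape can be sketched without a framework. A minimal hand-rolled sketch (real libraries such as Hypothesis add input shrinking and much better failure reporting), assuming a hypothetical `add()` under test:

```python
import random

def add(x, y):
    """Hypothetical function under test."""
    return x + y

def check_property(prop, n_cases=100, seed=0):
    # Minimal property-based-testing loop: the framework, not the
    # developer, picks the concrete values; the developer states
    # only the relation that must hold.
    rng = random.Random(seed)  # seeded for reproducible failures
    for _ in range(n_cases):
        x, y = rng.randint(-1000, 1000), rng.randint(-1000, 1000)
        assert prop(x, y), f"property failed for x={x}, y={y}"

# Properties of add(): identity element and commutativity.
check_property(lambda x, y: add(x, 0) == x)
check_property(lambda x, y: add(x, y) == add(y, x))
```

For `math_multiply()` you would state different relations (e.g. `mul(x, 1) == x`), which is exactly why a jitter that has to be "added in" for one function and "multiplied in" for another falls out naturally from this style.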
Randomness is useful if you expect your code to do the correct thing with some probability. You test lots of different samples and if they fail more than you expect then you should review the code. You wouldn't test dynamic random samples of add(x, y) because you wouldn't expect it to always return 3, but in this case it wouldn't hurt.
* focus locally; getting involved in local politics by supporting local candidates with your time and effort - the State Department runs programs to talk to city and state officials concerning foreign policy matters, and cities and local governments can create pressure on federal representatives from those states.
* vote with your wallet; boycotts and divestments are tools ordinary people have to affect conglomerates. Ensure your retirement money is not invested with companies engaging in political ideas you do not agree with.
* protest; attending in person events shows leaders numbers and images that are harder to ignore than their consultants’ polling data.
I've done all of those, and while I think they are important, I believe it's most important to let politicians know, otherwise they rely too much on money.
Pointing out that whatever people think they are doing is not working does not mean we have to propose a solution. I'd suggest revolution, but that won't ever happen in the US.
It's one of those ideas that sounds nice in theory, but doesn't survive contact with the real world. In the same way that many people would say that you shouldn't negotiate with terrorists or kidnappers; but if it's their loved one who's being held and tortured they'll very quickly change their mind.
Getting to a world where no one pays ransoms and the ransomware groups give up and go away would be the ideal, and we'd all love to get there. But outlawing paying ransoms basically means sacrificing, for the greater good, everyone who gets ransomwared in the meantime until we get to that state.
And where companies get hit, they'll try hard to find ways around that, because the alternative may well be shutting down the business. But if something like a hospital gets hit, are governments really going to be able to stand behind the "you can't pay a ransom" policy when that could directly lead to deaths?
Financial penalties won't solve the problem for companies, because they're hard to enforce. You'd be weighing up the cost of dealing with the fallout of getting hacked against the cost of paying the ransom plus the chance that you might get caught and fined. If the former cost is existential for the business, then it'd always be worth paying and taking the risk.
The only real way around that would be personal consequences for the owners/directors of the company - "get caught paying a ransom and the whole board goes to jail" would certainly discourage people. And also provide a wonderful opportunity for blackmail when people did.
Not to mention all the problems of fining public sector organisations, and how counter-productive that usually is.
Right, make the penalty for paying a ransom catastrophic. Very few employees will risk a criminal conviction and years in federal prison just to protect their employer.
It's all fun and games until it's your livelihood at stake, and then it makes a lot more sense to acquiesce, lick your wounds, and keep your business alive.
Getting hacked is no fun, but companies don't deserve to die because something in their tech stack was vulnerable.
I respectfully disagree - I do agree that the natural financial death of a company probably shouldn't result in bailouts, but if I as a company get breached because my fully-updated, follows-best-practices Windows Domain got hacked because of a vulnerability in Microsoft's stuff? That's hardly fair.
Shouldn't I be able to sue Microsoft for financial relief?
That is an acceptable outcome. Life isn't fair. Companies fail all the time for a variety of unfair reasons. This will force customers to demand that Microsoft and other software vendors improve their own security practices and/or indemnify customers for damages from breaches. You can sue Microsoft for financial relief if they breach your contract.
I work in the state government space. Many targets/victims of ransomware are small/local government agencies and the ransom demands are greater than their annual budgets. Not every agency is big enough to have someone (bored) come in on Sunday, notice stuff getting encrypted and then run in to the server room and hit the big red button like Virginia's legislature in 2021[0].
Many ransoms are far more than the victim can actually pay. Not all ransom payments result in a decryption key that actually works.
Most local governments lack the scale and budget to competently maintain their own IT infrastructure. It's not just security but everything. They should outsource the infrastructure layer to a large contractor, or possibly to the state government.
Contracting IT services at that level overpays by a whole number multiple for worse results because the government doesn’t have the in-house expertise to tell when the contractor is doing something wrong. (This is one reason why many construction projects go over budget: someone saved by laying off the engineers, so they pay 2-3x more for contractor A to oversee contractor B, guaranteeing 3+ party disputes for every problem)
What does work better is outsourcing an entire function: if you pay Gmail for email services, you know exactly how much it will cost per user and have an SLA for problems which they can’t blame on you.
I think it's a good approach, but I don't think you can enforce such a rule.
Another issue is that not paying up and risking restore from underfunded ops dept. might be more expensive than paying up AND making a selected executive look bad. And we can't have that, can we.
It would make the ransomware statistic go down without actually stopping crime. Any company that considers paying the ransom would have a strong incentive to never report the security incident to avoid being punished for ransom payments
Plus it gives the ransomware gangs a whole new angle they can use.
So, remember how you illegally paid us a ransom a few months ago? Unless you want to go to prison, then you better...
We're already seeing this against companies who pay ransoms and fail to report the breaches when they're legally required to - but it would be much worse if it's against individuals who are criminally liable.
Make employees criminally liable for making ransom payments, along with whistleblower protections. Very few employees will risk going to prison to protect their employer. You can always get another job.
I don't think this helps anybody. There will always be some poor soul taking the blame for the crimes of the higher ups. And what exactly the crime would be? Using company money to pay an unspecified third party? Also pretty hard to enforce.
It should be a crime to knowingly transfer money to criminals for any reason. And it wouldn't be hard to enforce: offer bounties to whistleblowers who turn in their colleagues.
It likely is in many places, under laws relating to dealing with proceeds of crime, but I’m not aware of any prosecutions having ever been made on this basis.
Agreed - it’s not that it’s a bad point but it would be an ineffective rule which is usually an excuse to forgo other more effective (usually more expensive) options
Unfortunately the actual solution will probably have to mirror real world, which means balkanizing the Internet to clarify legal jurisdiction, maybe some international police task force to aid with cross-border investigation, but ultimately it all hinges on whether and how much the countries with most nuclear aircraft carriers are willing to pressure other countries to take this seriously.
All that does is make the problem more expensive by whatever cut the middle men who will pop up take and however much the overhead of the obfuscation is. It might reduce payments at the margin, but probably not enough to be worth the cost.
> The fact that humanity sent people back to the moon barely even registered.
Are you sure that people would have cared much even in better times?
Although I'm just as subject to the fatigue as everyone else, this just isn't a pursuit that I see as important.
TBH I think dealing with global warming, cancer, homelessness, AI impact on human cognitive development, and the loneliness epidemic are far higher priorities.
If I recall correctly, opinion polling on the original Apollo program wasn't universally positive either. Space missions don't impress people who want the money spent on the ground instead.
The famous spoken word poem Whitey on the Moon was on exactly this topic.
"Accompanied by conga drums, Scott-Heron's narrative tells of medical debt, high taxes and poverty experienced at the time of the Apollo Moon landings. The poem critiques the resources spent on the space program while Black Americans were experiencing social and economic disparities at home."
Related, I've been surprised that we haven't had more violence against corporations and/or their leadership in the vein of Luigi Mangione.
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
Litigation—the hope or fantasy to make a buck—soaks up a lot of the million-man animus I’d guess.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
Those unhinged people might be busy in social media bubbles, fighting endless pointless battles (or simply doom scrolling) until they're too exhausted to do anything.
I thought the whole trick was arbitrage on the delayed awareness of reduced quality.