Commercial infosec is deleting Firefox from developer machines because it's "not secure", and explaining to muggles why they shouldn't commit secret material to the code repository. That, and blocking my ssh access to my home router, of course.
I mean, often, yep. The real reason why they are unhappy with you having an unsupported browser is simply that it's much harder to reason about or enforce policies across bespoke environments. And in an enterprise of a sufficient scale, the probability that one of your employees is making a mistake today is basically 1. Someone is installing an infostealer browser extension, someone is typing in their password on a phishing site, etc. So, you really want to keep browsers on a tight leash and have robust monitoring and reporting around that.
Yeah, it sucks. But you're getting paid, among other things, to put up with some amount of corporate suckiness.
"The real reason why they are unhappy with you having an unsupported browser"
I tend to encourage Firefox over Cr-flavoured browsers because FF (for me) is the absolute last to dive in with fads and will boneheadedly argue against useful stuff until the cows come home ... Web Serial springs to mind (which should finally be rocking up real soon now).
Oh and they are not sponsored by Google errm ... 8)
I'm old enough to remember having to use telnet to access the www (when it finally rocked up and looked rather like Gopher and WAIS) (via an X.25 PAD), and I have seen the word "unsupported" bandied around way too often since to basically mean "walled garden".
I think that when you end up using the term "unsupported browser" you have lost any possible argument based on reason or common decency.
The thing that kills me every time is how IT treat development machines the same way as the rest of the corporate network.
Developers usually need elevated privileges, executing unverified arbitrary code is literally their job. Their machines are not trustworthy, and yet, they often have access to the entire company internal network. So you get a situation where they have both too much privilege (access to resources beyond the scope of their work) and too little (some dev tools being unavailable).
Oh they finally got that up and running? That's good, but extremely late. It released in 2021. That's half a decade. As long as running an upstream kernel means you have to use 5+ year old SoCs, running upstream Linux instead of a vendor kernel remains completely out of the question for most circumstances.
I used to run some weird 5.x kernel until I found out about Collabora, and even then I still had to cook my own fdt files and patch some weird stuff in the kernel, keeping my own local branch. But yeah, it's always the same story with upstreaming, sadly. Been there, done that since OpenEZX days.
But now, I just did the system update, rebooted, and got 7.0.0-1 from the package manager, which is newer than my x86 laptop's. I still have trust issues with this, expecting it to not boot or not come up without HDMI output, though.
I used to work on a project that interfaced with both SABRE and Amadeus, and "just works" isn't how I would describe it. The thing is also quite slow and annoying, as its interface is optimized for a trained operator using it in a terminal setting, not for us poor schmucks calling it through some weird API bolted on top.
Also, try to retrieve a PNR on an airline website or do like anything on the airline's own website -- the UX is usually pretty bad and the data loading takes forever. For that too the GDS is to blame.
That's two different problems really. You mostly use libusb when you own the application part too; think of a utility to burn firmware into phones or something like that. You can also make a userspace driver for some device classes, like input. If you combine the two, you can let the rest of the system use the device without knowing what its driver is.
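The "userspace driver" half of that can be sketched with libusb. A minimal sketch via Python's ctypes, assuming a hypothetical device with made-up IDs 0x1234:0x5678 (the VID/PID and the `probe` helper are illustrations, not a real device or API): it asks libusb to detach any kernel driver and claims interface 0 from userspace, degrading gracefully when libusb or the device is absent.

```python
import ctypes
import ctypes.util

# Hypothetical vendor/product IDs -- stand-ins, not a real device.
VID, PID = 0x1234, 0x5678

def probe(vid: int, pid: int) -> str:
    """Try to claim interface 0 of a USB device from userspace via libusb."""
    libname = ctypes.util.find_library("usb-1.0")
    if libname is None:
        return "libusb missing"
    usb = ctypes.CDLL(libname)
    ctx = ctypes.c_void_p()
    if usb.libusb_init(ctypes.byref(ctx)) != 0:
        return "init failed"
    try:
        usb.libusb_open_device_with_vid_pid.restype = ctypes.c_void_p
        handle = usb.libusb_open_device_with_vid_pid(ctx, vid, pid)
        if not handle:
            return "device not found"
        h = ctypes.c_void_p(handle)
        # Have libusb detach any kernel driver bound to the interface,
        # so our process -- not the kernel -- talks to the device.
        usb.libusb_set_auto_detach_kernel_driver(h, 1)
        claimed = usb.libusb_claim_interface(h, 0) == 0
        if claimed:
            # At this point bulk/interrupt/control transfers would go here.
            usb.libusb_release_interface(h, 0)
        usb.libusb_close(h)
        return "claimed" if claimed else "claim failed"
    finally:
        usb.libusb_exit(ctx)

print(probe(VID, PID))
```

On most machines this prints "device not found" (or "libusb missing"); with a real device you'd replace the IDs and drop your transfer logic in where the comment sits.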
If these were somehow actually ruled to be infringements, millions of separate cases would already be needed, so it is already past the point of practical enforcement.
These sorts of things are almost never tested legally and it seems even less likely now.
>If I and a company of 1000 people create the same product and compete for customers, the company's version will win. Every single time.
As a person who works for a company with 25k people, I would disagree. You, a single person, will often get to the basic product that a lot of people want much faster than a company with 1k, 5k, or 25k people.
Bigger companies are constrained by internal processes, piles of existing stuff, the inability to hire at the scale they need, and the larger context required. Also regulation and all that. Bigger companies are also really slow to adapt, so they would rather let you build the product and then buy out your company, with your product and the people who built it. They are at a temporary disadvantage every time the landscape shifts.
The point wasn't about the number of people; the point was that a company employing that many people has enough money to convert into leverage against you.
Besides that, your whole argument hinges on large companies being inflexible, inefficient, and poorly run. Isn't that exactly the kind of problem AI promises to solve? Complete AI surveillance of every employee, tasks and instructions tailored to each individual, and superhuman planning. Of course, at that point the only employees will be manual workers, because actual AI will be much better and cheaper at everything than every human, except where it needs to interact with the physical world. Even contract negotiations with both employees and customers will be done by AI instead of humans; the human will only sign off on it for legal requirements, just like today you technically enter a contract with a representative of the company who is not even there when you talk to a negotiator.
Large companies are often inflexible and inefficient as a matter of deliberate strategy. I've found myself in scenarios where we have a complete software artifact that a smaller company would launch and find successful, but we can't launch it, because we have to satisfy some expectation we've set or do a complex integration with some important other system of ours.
A lesson from gamedev is that players will deliberately restrict themselves - sometimes to make the game more fun or challenging, sometimes to appeal to their aesthetic principles.
If/when superhuman AI is achieved, those limitations will all go away. An owner will just give it money and control and tell it to optimize for more money or political power or whatever he wants.
That's a much scarier future than a paperclip maximizer, because it's much closer and it doesn't require complete takeover first. It'll be just business as usual, except somehow more sociopathic.