I agree that keeping things up to date is a good practice, and it would be nice if enterprise CISOs would get on board with that. One challenge we've seen is that other aspects of the business don't want things to be updated automatically, in the same way a fully-managed SaaS would be. This is especially true if the product sits in a revenue generation stream. We deal with "customer XYZ is going to update to version 23 next Tuesday at 6pm eastern" all the time.
This is true even with fully-managed SaaS though. There are always users who don't want the new UI, the changed workflow, the moved button. But the update mechanism isn't really the problem, IMO; feature flags and gradual rollouts solve this much better than version pinning does.
Sure. I'm just saying in the context where fully-managed SaaS was already decided not to be an option, and a customer is deploying vendor code in their environments, the update mechanism can in fact be a problem. It's not just poor CISO management.
> So you're stuck debugging a system you don't control, through screenshots and copy-pasted logs on a Zoom call.
This is very real.
I work with a deployment that operates in this fashion. Although unfortunately, we can't maintain _any_ connection back to our servers. Pull or push, doesn't matter.
The goal right now is to build out tooling to export logs and telemetry data from an environment, such that a customer could trigger that export on our request, or (ideally) as part of the support ticketing process. Then our engineers can analyze async. This can be a ton of data though, so we're trying to figure out what to compress and how. We also have the challenge of figuring out how to scrub logs of any potentially sensitive information. Even IDs, file names, etc that only matter to customers.
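A minimal sketch of what that scrub-then-export step could look like (the redaction patterns and the `scrub`/`export_bundle` names are hypothetical, not our actual tooling):

```python
import gzip
import json
import re

# Hypothetical patterns; a real deployment would load these from config
# and cover customer-specific IDs as well.
REDACT_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<email>"),  # email addresses
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<ip>"),     # IPv4 addresses
    (re.compile(r"(?:/[\w.-]+){2,}"), "<path>"),              # file paths
]

def scrub(line: str) -> str:
    """Replace potentially customer-identifying substrings with placeholders."""
    for pattern, placeholder in REDACT_PATTERNS:
        line = pattern.sub(placeholder, line)
    return line

def export_bundle(lines, out_path="support-bundle.json.gz"):
    """Scrub each log line, then write a compressed bundle for async analysis."""
    with gzip.open(out_path, "wt", encoding="utf-8") as f:
        for line in lines:
            f.write(json.dumps({"log": scrub(line)}) + "\n")
```

Regex-based scrubbing is lossy and easy to get wrong, which is exactly why this is hard — anything not matched by a pattern leaks by default.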
I also used to work with on-premises installs of Kubernetes, and their “security” postures prevented any in-bound access. It was a painful process of requesting access, getting on a Zoom call, and then controlling their screen via a Windows client and PuTTY. It was beyond painful and frustrating. I tried to pitch using a tool like Twingate, which doesn’t open any inbound ports and can be locked down very tight using SSO, 2FA, access control rules, and IP limiting, but to no avail. They were stuck in their Windows-based IT mentality.
For most enterprises there are too many jobs on the line to replace Windows.
The people who know where to click and which dialog will pop up and when to click next are never going to agree to replace their non-automatable windows servers with fully automatable linux servers.
I mean, we're talking about a demographic that can't use ssh, has never been on a platform with a system package manager, and has little to no ability to version system changes.
> This can be a ton of data though, so we're trying to figure out what to compress and how. We also have the challenge of figuring out how to scrub logs of any potentially sensitive information.
This is fundamentally a data modeling problem. Currently computer telemetry data are just little bags of utf-8 bytes, or at best something like list<map<bytes, bytes>>. IMO this needs to change from the ground up. Logging libraries should emit structured data, conforming to a user supplied schema. Not some open-ended schema that tries to be everything to everyone. Then it's easy to solve both problems--each field is a typed column which can be compressed optimally, and marking a field as "safe" is something encoded in its type. So upon export, only the safe fields make it off the box, or out of the VPC, or whatever--note you can have a richer ACL structure than just "safe yes/no".
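As a rough illustration of the idea — using Python dataclass metadata to stand in for a real user-supplied schema (`RequestLog` and `export_safe` are made-up names, not an existing library):

```python
from dataclasses import dataclass, field, fields

# Hypothetical user-supplied schema: each field is a typed column, and the
# export policy is encoded on the field itself rather than applied as a
# post-hoc scrubbing pass over unstructured bytes.
@dataclass
class RequestLog:
    status: int = field(metadata={"safe": True})
    latency_ms: float = field(metadata={"safe": True})
    customer_file: str = field(metadata={"safe": False})  # never leaves the box

def export_safe(record) -> dict:
    """Project a record down to its export-safe columns."""
    return {
        f.name: getattr(record, f.name)
        for f in fields(record)
        if f.metadata.get("safe", False)
    }
```

A real system would replace the boolean with a richer ACL label per column, and the columnar layout is what makes per-field compression cheap.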
I applaud the industry for trying so hard for so long to make everything backwards compatible with the unstructured bytes base case, but I'm not sure that's ever really been the right north star.
Yeah. There are good reasons things are bad. But there's also a foolish consistency. Like, you can just do things! If you decide monitoring is important you can decide not to outsource it. Most everyone doesn't, though. Probably because they don't think it's very important, and the existing tools get it done well enough, and it's the muscle memory of the subjectively familiar (if objectively fantastically overpriced).
Well, even in the early days of infrastructure growth, when designing bespoke monitoring systems and protocols would be relatively low-cost, it's still nowhere near the highest-ROI way to spend your tech team's time and energy.
And to do it right (i.e. low risk of having it blow up with negative effects on the larger business goals), you need someone fairly experienced or maybe even specialized in that area. If you have that person, they are on the team because of their other skills, which you need more urgently.
SaaS, COTS, and open source monitoring tools have to cater to the existing customers. The sales pitch is "easy to integrate". So even they are not incentivized to build something new.
It boils down to the fact that stream-of-bytes is extremely well-understood, and almost always good enough. Infinitely flexible, low-ceremony, no patents, and comes preinstalled on everything (emitters and consumers). It's like HTTP in that way.
And the evolution is similar too. It'll always be stream-of-bytes, but you can emit in JSON or protobuf etc, if it's worth the cognitive overhead to do so. All the hyperscalers do this, even when the original emitter (web servers, etc) is just blindly spewing atrocious CLF/quirky-SSV text.
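For instance, the transport can stay a plain byte stream while each line becomes a self-describing record — a toy sketch, where `log` is a hypothetical helper, not any particular library's API:

```python
import json
import sys
import time

def log(event: str, **extra) -> str:
    """Emit one structured log line. Still just bytes on stdout, but each
    line is self-describing JSON; consumers that don't care can keep
    treating it as opaque text."""
    record = {"ts": time.time(), "event": event, **extra}
    line = json.dumps(record)
    sys.stdout.write(line + "\n")
    return line
```

This is the "emit JSON over stream-of-bytes" compromise: no shared schema required, at the cost of every consumer re-parsing and re-inferring types downstream.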
> It'll always be stream-of-bytes, but you can emit in JSON or protobuf etc, if it's worth the cognitive overhead to do so.
This is the crux of it. That's great until you encounter a need for a schema, and then it's "schema-on-read" or some similar abomination. And the need might not manifest until you're pushing like 1TB/day or more of telemetry data with hundreds or thousands of engineers working on some >1MLoC monstrosity. Hard to dig out of that hole.
The situation is tragically optimal--we've achieved some kind of multiobjective local maximum on a rock in the sewer at the bottom of a picturesque alpine valley and declared victory. We should do better.
> The situation is tragically optimal--we've achieved some kind of multiobjective local maximum on a rock in the sewer at the bottom of a picturesque alpine valley and declared victory. We should do better.
But it's a very comfortable rock. Pointy in all the right places.
I agree to a point. Although, alcohol (when consumed responsibly) has a social element to it, so companies having a "beers on Friday after 4pm" just feels different than "here's nicotine so you can be more productive and make us more money." They are serving different functions.
> just feels different than "here's nicotine so you can be more productive and make us more money."
More likely these companies just offered to give them some free vending machines and some office manager said sure why not. Not everything is a careful corporate strategy.
Beers on Friday after 4pm is rarely done because management really cares about the employees. It's a type of team building, improves employee morale and humanizes management. All lead to improved productivity in the long term.
If my boss gives me a stimulant to be more productive, especially a relatively harmless one like nicotine, I would gladly take it, as I like stimulants and am an adult capable of making decisions for myself. If I didn't, I would just refuse, just as I might refuse the free coffee my boss offers me.
I doubt anyone is forcing the employees to take the stimulants. That would be bad, indeed.
> It's a type of team building, improves employee morale and humanizes management. All lead to improved productivity in the long term.
Yes, but the important distinction is that its intention is to bring people together face-to-face, not isolate them to their desks for continued work. Just because the end goal is "productivity" broadly speaking, doesn't mean the mechanisms are socially/morally equivalent.
> I doubt anyone is forcing the employees to take the stimulants.
I agree, and I hope my comment didn't imply I thought that was the case.
I don't see how offering nicotine (or caffeine, or amphetamine) is morally wrong if it's not mandatory. In fact, given two companies that only differ in what they offer - free beer on Fridays for one hour or free stimulants all the time - I would choose the second one in a heartbeat. Many people wouldn't, and that's their choice. I just don't see how one approach is better ethically than the other at all.
That's unnecessarily cynical. I've been in plenty of companies where the staff, managers included, enjoy going for a pint together. I'm in the UK, maybe it's cultural.
I agree, I made it too black and white. I should've said that some, probably most (in my opinion), of these decisions are made with productivity in mind, whether in the form of team cohesiveness or a favorable view of management, but some are made simply because managers have their subordinates' best interests at heart.
OTOH, if you've witnessed how most managers talk about their employees to one another, it's in cold, calculating language. Sure, a manager may feel bad for firing an employee, but first and foremost comes the business analysis of whether it makes sense to do so - just pure math and predictions.
Personally, if I was in a management position and an employee asked me for a cigarette, I would happily give it to them. In fact, a few times a week I give a few cigarettes to 1 person who is not my employee, but who I talk to from time to time. I don't gain anything from this and I give them cigarettes because they are on a tight budget.
Also, if I, as a hypothetical manager, realized a lot of my employees would take an offer for free coffee, cigarettes, pure nicotine, beer or another drug, I would give it to them as long as it didn't hurt productivity too much. Of course some drugs like alcohol could hurt short term productivity, but it would make them happier overall, which would have positive long term effects. If asked why I do this, I would say that I'm both giving them out of my good will and to increase productivity, which would be true.