The format is editable. The line chart always seems to be scaled so the minimum is at the bottom, but you can get the zero point by changing it to bars.
The options do seem a bit idiosyncratic, but I guess they are useful for the kind of data the site users usually look at.
Broken axes aren't the solution. Starting from 0 is, but nobody making graphs seems to understand that, or, in the case of journalists, they're trying to mislead their readers. I suppose readers enjoy being awed by dramatically changing graphs too.
It would also be nice to include a shaded area for the first standard deviation over a relevant period of time to get an idea of how far outside normal it is.
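A minimal matplotlib sketch of that idea, with a made-up series standing in for the site's data (the window, values, and filename are all illustrative assumptions):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Hypothetical monthly index values over a 10-year window
values = 100 + np.cumsum(rng.normal(0, 1, 120))
x = np.arange(len(values))

mean = values.mean()
std = values.std()

fig, ax = plt.subplots()
ax.plot(x, values, label="index")
# Shade one standard deviation around the long-run mean so you can
# see at a glance how far outside "normal" the current value is
ax.fill_between(x, mean - std, mean + std, alpha=0.2, label="±1 std dev")
ax.axhline(mean, linestyle="--", linewidth=1)
ax.set_ylim(bottom=0)  # and anchor the axis at zero while we're at it
ax.legend()
fig.savefig("index.png")
```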
In my unhinged pipedreams, we’d have some sort of standard for conveying the data directly so users could use browser settings to decide how to display it. There are like a dozen people in the world who would use it, but I bet they’d really, really enjoy it.
But it essentially shows the same thing: the COVID overhiring boom and the post-COVID layoff cycle are over, and jobs are rising again.
What’s absolutely mind blowing to me though…the idea AI isn’t causing software engineering jobs to collapse…which you would think would make people here happy…is something that makes software engineers upset??
It’s almost as if everyone here has married their identity to the idea they are victims of AI progress and any suggestion otherwise is ego destruction.
"What??? You mean the job market is expanding and the reason I can’t find a job is…me? That can’t possibly be true, I’m a genius, the data is clearly wrong!"
There was a change in US tax law that revoked the ability of software companies to classify engineer salaries as an R&D expense, which massively increased the tax liability for many software companies.
This is under-recognized by many folks. The full impact of that aspect of the 2017 TCJA was hard to predict when it was so far in the future, and when it hit, we were dealing with the lingering economic impact of COVID on top of these deduction changes.
This was reverted for US employees in the OBBB, and companies can refile taxes for what they couldn't expense in the intervening years. I think the impact of this is generally overstated.
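For a sense of scale, here's some back-of-the-envelope arithmetic on why the Section 174 change hurt. The figures are entirely made up for illustration; the mechanics (5-year amortization of domestic R&D with roughly 10% deductible in year one under the mid-year convention) are the commonly cited reading of the rule:

```python
# Illustrative only: a software company with $10M revenue,
# $8M of engineer salaries, 21% corporate tax rate.
revenue = 10_000_000
salaries = 8_000_000
rate = 0.21

# Before 2022: salaries fully deductible in the year incurred.
taxable_before = revenue - salaries
tax_before = taxable_before * rate

# After the TCJA change kicked in: amortize over 5 years with a
# mid-year convention, so only ~10% is deductible in year one.
year1_deduction = salaries / 5 / 2
taxable_after = revenue - year1_deduction
tax_after = taxable_after * rate

print(tax_before, tax_after)  # 420000.0 vs 1932000.0
```

Same cash out the door, but year-one taxable income jumps from $2M to $9.2M, which is why cash-poor companies felt it so acutely.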
Well, yes, but we're still sitting at ~80% of 2020 levels. Perhaps just hangover from 2022, perhaps the end of ZIRP, but it's still depressed relative to 2020.
There was a hiring bubble in 2022 just before the Fed raised interest rates. I'm not understanding what the mystery is.
The link you're responding to has the option to zoom out more to 2020. If you scroll down to view the other related graphs, you'll find that they also index 2020 as a starting point because they're all tracking this hiring bubble.
Interesting chart that confirms SE hiring dynamics have little to do with AI, despite all the PR: in 2023, model and agent capabilities were quite limited, and now that capabilities are increasing, hiring is picking up. I hope more journalists will start to challenge that narrative.
To be clear, I don't believe or endorse most of what that issue claims, just that I was reminded of it.
One of my new pastimes has been morbidly browsing Claude Code issues, as a few issues filed there seem to be from users exhibiting signs of AI psychosis.
Both weapons manufacturers like Lockheed Martin (defending freedom) and cigarette makers like Philip Morris ("Delivering a Smoke-Free Future") also claim to be for the public good. Maybe don't believe or rely on anything you hear from business people.
I'd agree, although only in those rare cases where the Russian soldier, his missile, and his motivation to chuck it at you manifested out of entirely nowhere a minute ago.
Otherwise there's an entire chain of causality that ends with this scenario, and the key idea here, you see, is to favor such courses of action as will prevent the formation of the chain rather than support it.
Else you quickly discover that missiles are not instant and killing your Russian does you little good if he kills you right back, although with any luck you'll have a few minutes to meditate on the words "failure mode".
I'm… not really sure what point you're trying to make.
The russian soldier's motivation is manufactured by the putin regime and its incredibly effective multi-generational propaganda machine.
The same propagandists who openly call for the rape, torture, and death of Ukrainian civilians today were not so long ago saying that invading Ukraine would be an insane idea.
You know russian propagandists used to love Zelensky, right?
Somehow I don’t get the impression that US soldiers killed in the Middle East are stoking American bloodlust.
Conversely, russian soldiers are here in Ukraine today, murdering Ukrainians every day. And then when I visit, for example, a tech conference in Berlin, there are somehow always several high-powered nerds with equal enthusiasm for both Rust and the hammer and sickle, who believe all defence tech is immoral, and that forcing Ukrainian men, women, and children to roll over and die is a relatively more moral path to peace.
It's an easy and convenient position. War is bad, maybe my government is bad, ergo they shouldn't have anything to do with it.
Too much of the western world has lived through a period of peace that goes back generations, so many probably think things/human nature have changed. The only thing that's really changed is nuclear weapons/MAD - and I'm sorry Ukraine was made to give them up without the protection it deserved.
Are you going to ask the russians to demilitarise?
As an aside, do you understand how offensive it is to sit and pontificate about ideals such as this while hundreds of thousands of people are dead, and millions are sitting in -15ºC cold without electricity, heating, or running water?
No, I'm simply disagreeing that military technology is a public good. Hundreds of thousands of people wouldn't be dead if Russia had no military technology. If the only reason something exists is to kill people, is it really a public good?
An alternative is to organize the world in a way that makes it not just unnecessary but actively detrimental to said soldier's interests to launch a missile towards your house in the first place.
The sentence you wrote isn't something you would write about (present-day) German or French soldiers. Why? Because there are cultural and economic ties to those countries and their people. Shared values. Mutual understanding. You wouldn't claim that the only way to prevent a Frenchman from killing you is to kill him first.
It's hard to achieve. It's much easier to just back the strong man, fantasize about a strong military with killing machines that defend the good against the evil. And those Hollywood-esque views are pushed by populists and military industries alike. But they ultimately make all our societies poorer, less safe, and arguably less moral.
Again, in the short run and if only Ukraine did that, sure. But that's too simplistic thinking.
If every country doubled its military, then the relative strengths wouldn't change and nobody would be more or less safe. But we'd all be poorer. If instead we work towards a world with more cooperation and less conflict, then the world can get safer without a single dollar more spent on military budgets. There is plenty of research into this.

But sadly there is also plenty of lobbying from the military-industrial complex. And simplistic fear mongering (with which I'm not attacking you personally, just stating it in general) doesn't help either. Tech folks especially tend to look for technical solutions, a category that "more tanks/bombs/drones/..." falls into. But building peace is not necessarily about more tanks. It's not a technical problem, so it can't be solved with technical means. In the long run.
Again, in the short run, of course you gotta defend yourself, and your country has my full support.
I can't think of anything scarier than a military planner making life or death decisions with a non-empathetic sycophantic AI. "You're absolutely right!"
I would disagree on the knowledge sharing. They're the only major AI company that's released zero open weight models. Nor do they share any research regarding safety training, even though that's supposedly the whole reason for their existence.
I agree with you on your examples, but would point out there are some places they have contributed excellent content.
In building my custom replacement for Copilot in VS Code, Anthropic's knowledge sharing on what they are doing to make Claude Code better has been invaluable.
I've been playing around with this in z-ai and I'm very impressed. For my math/research heavy applications it is up there with GPT-5.2 thinking and Gemini 3 Pro. And it's well ahead of K2 thinking and Opus 4.5.
> For my math/research heavy applications it is up there with GPT-5.2 thinking and Gemini 3 Pro. And it’s well ahead of K2 thinking and Opus 4.5.
I wouldn’t use the z-ai subscription for anything work-related/serious if I were you. From what I understand, they can train on prompts + output from paying subscribers, and I have yet to find an opt-out. Third-party hosting providers like synthetic.new are a better bet IMO.
"If you are enterprises or developers using the API Services (“API Services”) available on Z.ai, please refer to the Data Processing Addendum for API Services."
...
In the addendum:
"b) The Company do not store any of the content the Customer or its End Users provide or generate while using our Services. This includes any texts, or other data you input. This information is processed in real-time to provide the Customer and End Users with the API Service and is not saved on our servers.
c) For Customer Data other than those provided under Section 4(b), Company will temporarily store such data for the purposes of providing the API Services or in compliance with applicable laws. The Company will delete such data after the termination of the Terms unless otherwise required by applicable laws."
I stand corrected - it seems they have recently clarified their position on this page towards the very end: https://docs.z.ai/devpack/overview
> Data Privacy
> All Z.ai services are based in Singapore.
> We do not store any of the content you provide or generate while using our Services. This includes any text prompts, images, or other data you input.
From a quick read, this is cool but maybe a little overstated. From Figure 3, completely suppressing these neurons only reduces hallucinations by ~5% compared to their normal state.
Table 1 is even odder: H-neurons predict hallucination ~75% of the time, but a similar percentage of random neurons predicts hallucinations ~60% of the time, which doesn't seem like a huge difference to me.
Very cool. Any plans to add support for local models? This is what has prevented us from adopting Positron so far. We have sensitive data, and sending it to third-party APIs is not an option (regardless of their stated retention policies).
Yeah, we just added support for local models. As I mentioned in an earlier comment, if you have a local model with an OpenAI-compatible v1/chat/completions endpoint (most local models have this option), you can route Erdos to use it in the Erdos AI settings.
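For anyone unfamiliar, "OpenAI-compatible" just means the local server (Ollama, llama.cpp's server, vLLM, etc.) accepts the same JSON shape as OpenAI's chat API, so any client can talk to it. A minimal stdlib-only sketch of such a request; the URL, port, and model name are assumptions you'd swap for whatever your own server exposes:

```python
import json
import urllib.request

def chat(url, model, prompt):
    """POST one chat turn to an OpenAI-compatible v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # Response shape mirrors OpenAI's: choices[0].message.content
    return body["choices"][0]["message"]["content"]

# Example (requires a running local server; Ollama's default port shown):
# print(chat("http://localhost:11434/v1/chat/completions",
#            "llama3", "Say hello."))
```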
https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE