Hacker News | obeavs's comments

What an abysmal series of top comments. These guys created a phenomenal product using novel technology, which will only continue to improve. Great work to the Zed team.

FWIW, the top comments at the time of my comment (one hour after yours, two hours after the article was posted) are all complimentary. You commented one hour after the article was posted; it's worth waiting a bit for the comment voting to shake out.

Further discussion from dang on the "contrarian dynamic": https://news.ycombinator.com/item?id=24215601

This comment could easily be expanded into an essay on the sociology of social media; the wisdom-to-word ratio is insane.

> sociology of social media

Probably one reason why "rage bait marketing" actually works.


Wow I'm sad I've never seen that before!! From 6 years ago and it perfectly describes this entire comment section

What exactly is phenomenal and novel about Zed? I've tried it a couple of times for a week or so, didn't see the point, and moved on every time.

And I'm not a luddite swearing by vi or something: I use VSCode and IDEA, and have used Sublime for many years, Xcode on and off for some Obj-C/Swift dev, Eclipse for 5-6 years in the 2000s, and vim for everything CLI/lightweight since forever.

Is the GUI tech what's supposed to be novel? I couldn't care less about the rendering backend in my everyday editor use as long as the editor is fast enough. Which, on modern hardware, even IDEA is.

Don't get me wrong, it's a good editor still.


Currently on this machine: using 900MB of RAM, including all language servers, with nine open projects - that is pretty phenomenal. VSCode could barely keep one open with the same memory.

The perception of 'fast' is very subjective. To me having a smooth, jitter-free UI, low input latency, and instant startup, all matter a lot.


It's amazing that a gig of ram is considered lightweight for having 8 project dirs open in an editor, which normally means 8 tree views and a few open file tabs per project :)

Even more amazing that 10GB for the same purpose is considered acceptable. ±100MB for the window, project files, LSP servers, ASTs, etc. is something very few editors can achieve - I'm pretty sure Zed beats both Emacs and Neovim in memory consumption.

"Including all language servers" is a big part of that. I hope.

I’ll stick to my butterflies.

Good ol' C-x M-c M-butterfly

I understand wanting your software to be well optimized, but at no point in my years of using VSCode have I ever actually had to care about how much RAM it's using. I have 32GB, I'm going to use it.

I made the mistake of buying an 8 GB macbook air m3 a while ago, thinking it would be enough. I wasn't accounting for docker or vscode. It REALLY lags. The vim mode plugin will regularly lag on nearly every keystroke, until I kill everything and restart.

On the topic of vim, the built-in vim mode in zed is really good. The helix mode is great too!!


I, too, would like to use my RAM. And I would like to be able to use it on the things I deem important, not to subsidize the laziness of devs who reach for Electron.

>I, too, would like to use my RAM.

I'd like to BETTER use my RAM, and have faster programs to boot (as programs that overuse RAM also tend to be slower than better-optimized ones).


Maybe use it to run a small local LLM + Zed instead of just VS Code?

(I’m probably off on how much memory it takes to run a small LLM, but still.)


VS Code also offers significantly more functionality than Zed at the moment. If you want to sell RAM usage as a phenomenal benefit, then you should compare it with similar editors, like Sublime or (Neo)Vim.

But why have 9 open projects?

Like the vast majority of the time I have one. If I want to switch projects I close and then reopen.

On the other hand if it was smoother on one large project that would be an advantage.


My experience with Zed differed. On Linux I found it to be very memory hungry.

A side effect of the Electron crap: before Zed, many editors and IDEs on Atari, Amiga, Windows, OS/2, BeOS, Mac OS, and NeXTSTEP were written in fully native code.

I heard that Zed has very impressive collaboration features. I tried them a little and they look really good (like Discord, but directly in the editor). But that was a very superficial look.

VSCode extensions and the ecosystem is a security time-bomb. Zed looks to be doing things better.

Zed literally downloads random executables and runs them by default without asking

What?! Really?! Link? I'm not a Zed user. That comment was based on a few minutes of research, and I guess a small dose of hopium from a VSCode user who understands what a shit show the extension setup is and wants someone to do better.

Yep, it pulls stuff from at least npm, it’s not a secret - check the source code.

Actually, it pulls the latest versions (checking the registry, then installing that exact version; not sure why they sidestep the normal resolution algorithms) no matter what .npmrc may say, so min-release-age breaks almost everywhere it integrates with the JS/TS ecosystem (most visibly, Copilot). I probably should've filed an issue.

It also installs Go packages but I haven’t looked into that.
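For context on the mechanism being described, here is a hypothetical sketch (not Zed's actual code; the function name and shape are invented for illustration) of why resolving a package's "latest" dist-tag straight from the registry bypasses npm's resolver, and with it any .npmrc policy:

```typescript
// Hypothetical illustration: hitting the registry's "latest" dist-tag
// endpoint directly returns whatever version that tag points at right now.
// Nothing in this path ever consults npm's own resolution logic, so
// per-project .npmrc settings (registry overrides, min-release-age, etc.)
// never get a say.
type FetchJson = (url: string) => Promise<{ version: string }>;

async function latestVersion(pkg: string, fetchJson: FetchJson): Promise<string> {
  // GET https://registry.npmjs.org/<pkg>/latest returns the manifest of the
  // version the "latest" dist-tag currently resolves to.
  const manifest = await fetchJson(
    `https://registry.npmjs.org/${encodeURIComponent(pkg)}/latest`
  );
  return manifest.version;
}

export { latestVersion };
```

A plain `npm install <pkg>` would instead run through npm's own resolution, which reads .npmrc first; that difference is exactly why registry-direct installs ignore policies configured there.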


Recent example I looked at: https://github.com/nilskch/zed-jj-lsp, which downloads jj-lsp if not found in the system. I have seen other extensions doing similar for convenience, but can't remember names to give concrete links.

Copying my own comment below, with GH links and my (non-AI) summary after skimming:

> https://github.com/zed-industries/zed/issues/7054

> https://github.com/zed-industries/zed/issues/12589

> TL;DR: Mix of language tooling, unsigned proprietary blobs, corrupted and/or GLIBC-dependent files, redundant copies of already-installed executables. The Node packages especially are able to run scripts on install. Personal preference aside, might also create issues with security laws, certifications. All without user consent.

> Issues opened in January and June 2024. They've been rejected, closed, and opened a couple times since then. No changes directly improving this yet as of April 2026.

So... If you want broad language support via LSP servers, then you're going to have to bring in other ecosystems, and Node/Typescript is a big one that doesn't always have alternatives. [0] That's not a Zed-specific problem.

IMO the real issue with Zed is the "runs them by default without asking" part. Plus the questionable practices with binary blobs and the cavalier attitude in the discussions, when I can just use an editor that... Doesn't do any of that.

[0] https://microsoft.github.io/language-server-protocol/impleme...


What are they doing with proprietary binary blobs? I thought it's open source.


Yes, this is annoying. When doing editor testing, I always also have to open the activity monitor and force quit all extra processes started by Zed.

From what I can see, one of the top comments (at the time of this comment) is worried about legalese claiming to have "non-exclusive, worldwide, royalty-free, fully paid-up, non-sublicensable" access to your source code. I think it is very fair criticism to not want to give away your source code.

https://news.ycombinator.com/item?id=47953501


There is a big recurring misunderstanding with Zed's ToS that produces some extreme, misinformed reactions. Zed (the IDE) is open source ("GPL v3 with Apache 2.0 for certain components"). The ToS apply only to the services they provide when you subscribe for an account and use them [0]. Some people read the ToS, think they apply to just the editor, and conclude that Zed is stealing your source code and whatnot, because it would indeed be weird to have these ToS for just editing stuff locally, without any of the additional services provided. However, if one actually reads the ToS instead of nitpicking a paragraph, it is very clear what they are about.

Any ToS for a company that you send data to includes similar terms that simply allow it to process the data as you expect it to in order to provide those services. E.g. for the AI tab-completion, they process part of your source code on their servers and return a tab-completion suggestion ("derivative data"). Some people are evidently unfamiliar either with how these things work or with the data-related legalese used (barring any bad intentions assumed). If anything, paragraph 4.2 [1] makes it clear that any data output is owned by the user (and not by Zed). This whole discussion made me read the terms, and (apart from the arbitration thing imo, though not uncommon) I couldn't see any kind of dark pattern or issue.

I like zed a lot, it works great, I am definitely cautious about the fact that they have received VC money which holds me from getting "all-in", but criticism that is based on misunderstandings or on obviously factually wrong arguments is not very useful.

[0] https://github.com/zed-industries/zed/issues/50568#issuecomm... (could be some better source than a github comment but it has been repeated many times)

[1] https://zed.dev/terms#42-customers-ownership-of-output


yeah, all forms of criticism, all feature suggestions, any comparisons to other products/solutions, etc. should be outright banned by HN. if you aren't praising the thing, get out!

(do you comment this same type of thing on github, microsoft, apple, etc. posts? all of these comments seem absolutely tame compared to the vitriol in those threads. most top comments here are supportive. most of the negative ones are constructive.)


Did Zed ever answer their code of conduct violation?

https://github.com/zed-industries/zed/discussions/36604


Maybe this wasn't true an hour ago, but all the top 3 comments right now look supportive (if I am to count yours), and the next few are just mildly critical.

Maybe they'd be better if the title were informative.

Yep. The intentionally obscure titles on here are just inexcusable.

^^the #1 reason I limit my daily time allowance for HN

I think the Zed team's enthusiasm adds a lot of momentum to the product, on top of their indisputable engineering capabilities.

I agree, the toxic trolls focusing on minor nitpicks like a license agreement that allows your text editor to steal all your source codes really harshes the vibes.

Maybe they shouldn't be releasing it with anti-consumer terms of service? People's objections are legitimate. Where else should they be discussed?

I hope HN can appreciate what a game changer (and paradigm shift) Zed can be.

To the Zed developers: CONGRATULATIONS! I have been following your project with great interest since your speed demo years ago. And since it’s AI-first, I’m interested to see how we can integrate it with https://safebots.ai (Safebots, Safebox, and Safecloud).

I would love to see how we might be able to increase the safety of agents in Zed, use local models like Qwen/Deepseek and we also have Grokers which can turn any codebase into a graph with tree-sitter and help your agents far more than RAG and similarity search (https://grokers.ai/deck.pdf)

What’s the best way of getting in touch? (If you want, my profile has a way of emailing me).


https://graphify.net and Trail of Bits' TrailMark: https://github.com/trailofbits/trailmark

Both use tree-sitter and create knowledge graphs for LLM use. It results in way fewer tokens spent as well.


Yes, several projects have been going in the right direction.

But also - see https://safebots.ai/grokers.html


I haven't read them because they're now buried, but whatever those top comments said can't be bad enough to warrant your vitriol. Abysmal means bad, not pessimistic. It's inappropriate for the (currently) top comment to be casting such judgment in its preface.

I don't think overly-opinionated meta-comments are inherently bad, but I don't come to this site for them. I don't even think your comment is bad; I'm mad that this is what the people of HN have decided is the most important remark on the matter. It tells me something unfortunate.


So, we've been down this rabbithole at Phosphor (phosphor.co) and have explored/made a couple of really big technology bets on it.

The most unique/useful applications of it in production are based on combining dependent types with database/graph queries. This enables you to take something like RDF, which is neat in a lot of ways but has a lot of limitations, and add typing and logic to the queries, in order to generally reimagine how you think about querying databases.

For those interested in exploring this space from a "I'd like to build something real with this" angle, I'd strongly recommend checking out TypeDB (typedb.com). It's been in development for about a decade, is faster than MongoDB for vast swaths of things, and is one of the most ergonomic frameworks we've found for designing complex data applications (Phosphor's core is similar in many ways to Palantir's ontology concept). We went into it assuming that we were exploring a brand new technology, and have found it to work pretty comprehensively for all kinds of production settings.


Can you expand on

"We build non-dilutive growth engines for industrial and climate technology companies by creating high quality development pipelines for institutional capital."


Sure. I'd contextualize by saying that infrastructure is a financial product: climate/industrial projects are sited in the physical world and have a hard upfront cost to produce a long-term stream of cash flows, which, from a finance perspective, makes them look a lot like debt (e.g. I pay par value in order to achieve [x] cash flows with [y] risk).

When you drive past a solar project on the side of the road, you see the solar technology producing energy. But in order for a bank to fund $100M to construct the project, it has to be "developed" as if it were a long-term financial product, across 15 or so major agreements (power offtake, lease agreement, property tax negotiations, etc). The fragmentation of tools and context among all the various counterparties involved in pulling this sort of thing together into a creditworthy package for funding is enormously inefficient, and as a result, processes which should be parallelizable can't be parallelized, injecting large amounts of risk into the project development process.

While all kinds of asset-class-specific tools exist for solar or real estate or whatever, most of them are extremely limited in function, because almost all of those things abstract down into a narrative that you're communicating to a given party at any given time (including your own investment committee), plus a vast swath of factual information represented by deterministic financial calculations and hundreds if not thousands of pages of legal documentation.

We build technology to centralize/coordinate/version control these workflows in order to unlock an order of magnitude more efficiency across that entire process in its totality. But instead of selling software, we sell those development + financing outcomes (which is where _all_ of the value is in this space), because we're actually able to scale that work far more effectively than anyone else right now.


Reminds me a lot of AngelList, which was initially, nominally, just a mailing list that connected angels and early-stage startups, but eventually found that the bottleneck was in special purpose vehicles and automated the hard legal work of creating many individual funding vehicles; behind the scenes it was actually a legal services company, if you squint.


What made you choose TypeDB? What kind of performance are you getting out of it?


Just spotted this! We (I'm CTO at TypeDB) just released some early benchmarks: https://typedb.com/blog/first-look-at-typedb-3-benchmarks/


Phosphor | NYC/Remote | Full Time | Founding Engineer | $150-225k + Equity

Phosphor builds recursively self-improving systems (via malleable software and natural language interfaces) to enable the efficient development and financing of the physical world (infrastructure, energy, real estate).

Our product is built around a powerful object model which combines document editors with proprietary programming languages for financial modeling, computable contracts, etc, in order to build one of the more unique RLHF feedback loops that we've seen to date.

A primary technical bet is that if we can productize version control properly, we can capture annotated "diffs" of user information with user-validated annotations. By doing so, we create path-dependence, which enables us to specify system-level goals for agents to solve for.

Our product is similar to a combination of Linear and Wolfram, with components and objects that enable advanced financial modeling, legal/regulatory analysis, and geospatial analysis of infrastructure development opportunities.

We're venture-backed by one of the best deeptech funds in the market and are hiring for roles spanning product engineers, CRDT wizards, and compiler/calculation engine leads.

Job listings can be found at the bottom of phosphor.co.


You can try Moonbit. It's extraordinarily well designed and compiles to highly optimized JS, WASM or even native


Thank you for bringing this up. This is profoundly true for big projects (toll roads/transport) and small infra projects (e.g. community solar). The length of time that it takes to develop things like this, combined with the turnover and the sheer amount of context that single developer has to have to be successful with it, is one of the driving forces in why development is such a difficult/risky business.

It's one of the most consequential problems imaginable to solve, particularly as the US begins to realize that we need to compete with decades of China's subsidized energy and industrialization/manufacturing capacity.

Taking it a level deeper, what most don't realize is that infrastructure is an asset class: before someone funds the construction of $100M of solar technology, a developer will spend 2-5 years developing 15 or so major commercial agreements that let a lender/financier take comfort that when they deploy such a large amount of cash, they'll achieve a target yield over 20+ years. Orchestrating these negotiations (with multiple "adversaries") into a single, successfully bankable project is remarkably difficult, and relative to the talent needed, very few have the slightest clue how to do this successfully.

Our bet at Phosphor is that this is actually solvable by combining natural language interfaces with really sophisticated version control and programming languages that read like English for financial models and legal agreements, which enables program verification. This is a really hard technical challenge because version control like Git really doesn't work: you need to be able to synchronize multiple lenses of change sets, where each lens/branch is a dynamic document that gets negotiated. Dynamically composable change sets all the way down.

We are definitely solving this at Phosphor (phosphor.co), and we're actively hiring anyone interested in working at the intersection of HCI, program verification, natural language interfaces, and distributed systems.


Hey! This is really awesome. Do you intend to support analysis on redlining/tracked changes? That's where it would become very useful for my use cases.


Yes, this is the one that always gets me in the MS ecosystem. Would make a few of my workflows so much better.


Phosphor | NYC | Full Time | Founding Engineer (HCI focus) | $175-225k + Equity

Phosphor enables the development of the built world (e.g. from real estate to energy projects) to be managed agentically by building programming languages and observability primitives (like version control) on top of a hypergraph.

Our primary technical bet is that if you capture annotated "diffs" of user information with the appropriate annotation, you can create path-dependence to train AI models in an AlphaGo-like context. By doing so, we achieve agentic experiences for markets that have never even had the opportunity to imagine what life would be like with basic observability.

Our product is similar to a combination of Linear and Wolfram, with components and objects that enable advanced financial modeling, legal/regulatory analysis, and geospatial analysis of infrastructure development opportunities.

We're in stealth, but recently venture-backed by one of the best deeptech funds in the market. We're hiring for a few roles at the senior/staff levels:

1. HCI Engineer - You're a ProseMirror wizard who probably follows Ink & Switch or the UCSB HCI lab on Twitter; front-end/rich-text/TypeScript focused; lots of architectural/sync-engine work.

2. Systems/Compiler Engineer - We design and build dev tools for languages we create that compile into various graph representations. These range from financial modeling calcs (which need to go very fast to support a seamless UX) to computable representations of legal agreements. Extra points if you've worked with hypergraphs. This is in Rust, but we're exploring a few other languages.

Email resume/linkedin/twitter to oliver@phosphor.co if you might fit somewhere in here, or if you've taken a credible attempt at building end-user programming tools.


We're probably one of the first to make a real tech bet on Loro. We inched our way into it, and plus or minus some edge cases, it is going very well so far.

Even on the edge cases, most of it just relates to what primitives are exposed in the API, and we've found the library's author to be highly engaged in creating solutions.

We've found it to be an incredibly well designed library.


Congratulations to the team!! I've been following for some time and love a good DX story.

I'd love to get some commentary from any active users on tradeoffs re: adopting tech like LiveView vs the community size and scale of JS land.

For example, JS land benefits massively from libraries like ProseMirror or even any of the more advanced CRDT libraries like Loro or Automerge. How about the AI story?

Is there a clear path to adopting this JS-based tech? Is it not needed? Would love to get a better understanding of the tradeoffs from folks who live in both worlds.


It's tough because LiveView is really just the dessert at the Elixir dining hall-- you can't live off of dessert. It's (at least) an order of magnitude smaller an ecosystem than React and the like, and while average library quality is very high, you won't find ready-made solutions to all your use cases like you do in those big front end ecosystems. JS and LiveView do interoperate surprisingly well, so ProseMirror isn't off the table, but I still think there are important benefits in the big front-end ecosystems.

Never mind the front end, though: the main course is Erlang/Elixir's concise, functional concurrency paradigm that feels more discovered than invented. The default structures they provide for thinking about message-passing actors are so much easier than tangled webs of async functions. This means CRDTs, calling out to APIs, running jobs in other languages, and realtime comms all go very well in Elixir.

I think actors are a paradigm shift somewhat akin to garbage collection. Increasingly complex programs demanded we abstract away memory management to stay sane, knowing we'd drop down to manual memory management when needed. In this web-heavy world, we abstract into tiny stateful services (actors) to stay sane, knowing we'll drop down to sequential languages when needed.
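The actor idea described above can be illustrated with a toy mailbox in TypeScript (not Elixir; real BEAM actors are concurrent processes, and this sketch only shows the serialized-mailbox part, with invented names):

```typescript
// Toy actor: it owns its state privately and drains a mailbox one message
// at a time, in arrival order, so callers never touch the state directly
// and there is no shared mutable state to lock.
type Msg = { kind: "incr" } | { kind: "get"; reply: (n: number) => void };

class Counter {
  private count = 0;
  private mailbox: Msg[] = [];
  private draining = false;

  send(msg: Msg): void {
    this.mailbox.push(msg);
    if (!this.draining) this.drain();
  }

  private drain(): void {
    this.draining = true;
    // Messages are processed strictly sequentially, FIFO.
    while (this.mailbox.length > 0) {
      const msg = this.mailbox.shift()!;
      if (msg.kind === "incr") this.count += 1;
      else msg.reply(this.count);
    }
    this.draining = false;
  }
}

const counter = new Counter();
counter.send({ kind: "incr" });
counter.send({ kind: "incr" });
counter.send({ kind: "get", reply: (n) => console.log(n) }); // logs 2
```

The point of the pattern is that all state changes funnel through one serialized message loop, which is what makes it so much easier to reason about than a tangle of async functions sharing state.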


Integrating javascript with liveview and pushing and receiving events from client to server (and from server to client) is pretty simple using hooks: https://hexdocs.pm/phoenix_live_view/js-interop.html#client-...

The AI story is mostly centered around the Nx project: https://github.com/elixir-nx/
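For readers unfamiliar with the hooks mechanism linked above, here is a minimal hedged sketch (hook and event names are invented for illustration) of what a LiveView client hook looks like: an object with lifecycle callbacks, where `handleEvent` receives events pushed from the server and `pushEvent` sends events back to the LiveView process.

```typescript
// Minimal shape of the context LiveView binds a hook's callbacks to
// (simplified; the real object carries more than this).
type HookContext = {
  el: {
    dataset: Record<string, string>;
    addEventListener: (ev: string, fn: () => void) => void;
  };
  pushEvent: (event: string, payload: object) => void;
  handleEvent: (event: string, cb: (payload: any) => void) => void;
};

const CounterHook = {
  mounted(this: HookContext) {
    // Server -> client: react to an event pushed from the LiveView process.
    this.handleEvent("count-updated", ({ count }) => {
      this.el.dataset.count = String(count);
    });
    // Client -> server: forward DOM interaction to the LiveView.
    this.el.addEventListener("click", () => {
      this.pushEvent("clicked", {});
    });
  },
};

// Registered in app.js roughly like:
//   new LiveSocket("/live", Socket, { hooks: { CounterHook } });
export { CounterHook };
```

The hook attaches to an element via a `phx-hook` attribute in the template, which is how most JS interop (including wrapping libraries like ProseMirror) plugs into LiveView.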


Cons: you do not have access to the extensive scale of the JS ecosystem.

Pros: you do not need access to the extensive scale of the JS ecosystem. And you will not need to write as much JS, if at all.

If you do not have Stockholm Syndrome for JavaScript, just switch to LiveView. And Erlang/Elixir is a comfy yet secure platform for building serious apps.


Automerge-wise, there's a ton of effort behind ElectricSQL, which is written in Elixir and can also be run as part of an Elixir app, so you can get a lot of the same benefits of local-first clients, afaict.

There's a langchain implementation that's fairly mature and definitely in production use (I saw the author's handle above, actually :D). Langgraph-style libraries exist (there's one called Magus that I've used), but I think that's where there could be some more effort. Although it's important to note that building something comparable to langgraph isn't too hard in Elixir with its process model, and most Elixir devs could probably do it; unfortunately, that's not obvious to your average person searching "langgraph implementation in Elixir".

There's no langsmith integration, but the telemetry implementation in Erlang and Elixir is really nice, so once some patterns around running chains and graphs emerge publicly (there are a few companies that I'd bet have private repos implementing their own equivalents of langgraph), I imagine integrating with langsmith would go pretty quickly.


We've actually also implemented Phoenix sync using Electric: https://hexdocs.pm/electric_phoenix/Electric.Phoenix.html

So you can have local-first sync in a Phoenix app using Electric. And you can use Electric to sync data into a LiveView using Phoenix.Streams, which is a very natural fit.

We have a couple of example apps showing things in action:

- https://github.com/electric-sql/electric/tree/main/examples/... - https://github.com/electric-sql/electric/tree/main/examples/...


Worth nothing that the former lead dev of ReScript left to create a WASM-first language called Moonbit (https://www.moonbitlang.com/). The language design is awesome and corrects a lot of the corners that they just couldn't get around in OCaml. A touch early, but really well done.


Thanks, I didn't know about Moonbit.


Worth noting*

Not nothing ;-)

