With so many products like this coming out, most claim to do basically the same thing.
How is this better than Claude Code's built in agent orchestrator? Do I need 100 agent types? How do I know the trained agents here are somehow better? Specialization doesn't equate to "better" in every case.
I want to see the light, but at this point it feels like these kinds of projects need a better way to benchmark how they improve on the available state of the art.
It feels like a new browser state-management tool emerging in 2018. Why does this exist?
Aside from it being "instructions for agents", I'm not sure I understand how this isn't just a markdown file that more or less reads like a README aimed at more junior engineers.
I am curious how this compares to Dataview. As a Dataview user, I'm not immediately seeing something Bases does that Dataview doesn't, but I am not a power user.
Dataview can be used for queries that output tables, but its strength is letting you write essentially custom imperative JavaScript code that renders content in notes dynamically (dataviewjs mode). Since "queries to tables" is more or less what Bases does, the dataviewjs mode will probably always be unique.
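For illustration, here's a sketch of the kind of thing dataviewjs enables beyond plain query-to-table rendering. It assumes a hypothetical vault with notes tagged #book carrying `rating` and `finished` frontmatter fields, and it only runs inside an Obsidian note via the Dataview plugin, which supplies the `dv` object:

```javascript
// dataviewjs block inside an Obsidian note (hypothetical #book vault).
// Arbitrary JS transforms happen before rendering — here, filtering,
// sorting, and turning a numeric rating into a star string.
const books = dv.pages("#book")
    .where(b => b.rating >= 4)
    .sort(b => b.rating, "desc");

dv.table(
    ["Title", "Rating", "Finished"],
    books.map(b => [b.file.link, "★".repeat(b.rating), b.finished])
);
```

The star-rendering step is the point: a declarative table query can filter and sort, but computing a new display value per row is where the imperative mode earns its keep.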
Garner Health | Data Engineer II | Full-time | NYC Onsite / Hybrid (4 days/week) | https://job-boards.greenhouse.io/garnerhealth/jobs/552040000...
Garner Health (https://getgarner.com) is revolutionizing healthcare economics through advanced doctor performance analytics and innovative incentive models. Our platform is reshaping how organizations access high-quality, affordable care, powering decisions at leading healthcare systems and enterprise clients. We’ve doubled revenue annually for 5 years running, making us the fastest-growing company in our space.
We’re hiring a Data Engineer II to play a pivotal role in building our enterprise-grade data platform from the ground up, ensuring secure data access across our rapidly scaling organization.
Stack: AWS, Snowflake, Argo, dbt, Terraform, Airbyte, JetStream
Work style: Hybrid in NYC. In-office up to 3 days per week.
Target compensation: $120,000 - $160,000 + equity
I'll add to the conversation another interesting technique from Chris Voss, which is to use no-oriented questions.
People like to say no. (I'm not sure what this cognitive bias is called, but anecdotally I agree.)
So, if you can frame your requests in a way that "no is permission", you'll often get a green light a bit more easily.
Example: replace "Is this a good idea?" with "Is this a bad idea?"
Now, of course, "not a bad idea" is not the same thing as "good idea", but it's a lot more likely to get a yes-equivalent answer. Even reading that, I imagine most people would respond to the second framing more intuitively, because it lets us avoid a commitment we don't necessarily want to make.