Hacker News: zachrip's comments

I think the accessibility checks only take into account the text color, not the actual real-world readability of the given text, which in this case is impossible to read because of the font weight.

Where did they say the prompt cache is shortened?

from 1h to 5 minutes, was in the news recently

But not as a solid fact?

The HN thread in question is here (and had that info edited out of the title)

https://news.ycombinator.com/item?id=47736476


Fetch has also lacked support for features that XHR has had for over a decade now, such as upload progress. It's slowly catching up, though; upload progress is the only thing I'd still choose XHR for.


You can pipe through a TransformStream that counts how many bytes you've uploaded, right?


That would show how quickly the data is passing into the native fetch call, but it doesn't account for any internal buffering it might do, network latency, etc.


That is a way to approximate it, though I'd be curious to know the semantics compared to xhr - would they both show the same value at the same network lifecycle of a given byte?
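The TransformStream idea from this exchange can be sketched roughly as follows. This is a hedged sketch, not XHR-equivalent semantics: it counts bytes as they are handed to fetch, which (as noted above) can run ahead of what the network has actually acknowledged. The upload URL is a placeholder.

```javascript
// Count bytes flowing through a TransformStream before handing the
// readable side to fetch() as a streaming request body.
let sent = 0;
const counter = new TransformStream({
  transform(chunk, controller) {
    sent += chunk.byteLength; // tally each chunk as it flows through
    controller.enqueue(chunk); // forward it unchanged
  },
});

// Hypothetical usage; streaming request bodies also require
// `duplex: "half"` in environments that support them:
// await fetch("https://example.com/upload", {
//   method: "POST",
//   body: file.stream().pipeThrough(counter),
//   duplex: "half",
// });
```

Whether `sent` tracks the same point in a byte's network lifecycle as XHR's progress events is exactly the open question above; this only measures hand-off into fetch.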


This is a pretty widely known acronym


OAuth with MCP is more than just traditional OAuth. It allows dynamic client registration (DCR), among other things, so any MCP client can connect to any MCP server without the developers on either side having to issue client IDs, secrets, etc. Obviously a CLI could use DCR as well, but afaik nobody really does, and again, your CLI doesn't run in Claude or ChatGPT.


Can you clarify what you mean?


Title: "California’s New Bill Requires DOJ-Approved 3D Printers That Report on Themselves"

Actual fact: "California's New Bill Requires That 3D Printers Get DOJ Approval as Firearm-Blocking"

(The "report on themselves" is fiction invented by Adafruit.)


"to be able to get a 3D printer" is implied in the "requires" wording. There's no problem with that part.


I actually think the title is misleading. I'm not sure actual existing deployments are affected? Seemingly just new ones are not working?


Railway founder here. <3%.

That said, we take this extremely seriously!

Any downtime is unacceptable, and we'll have a post-mortem up in the next couple of hours.


It seemed to have been all deployments that had a browser-facing interface. I'd say it was some Cloudflare DNS config mess-up.


I've been using Railway a while now, and I've basically never paid them, but I would. It's even better than Heroku. Super easy to use.


Their $5 monthly plan has been far more than enough for me to host my demos.


Would like to see the eval version - the dialogue version just seems like normal code with extra steps?


yeah, the previous example was quite basic. I will write a complete example for that, but here is how you can run dynamic code:

   import { task } from "@capsule-run/sdk";

   export default task({
     name: "main",
     compute: "HIGH",
   }, async () => {
     const untrustedCode = "const x = 10; x * 2 + 5;";
     const result = eval(untrustedCode);
     return result;
   });
Hope that helps!


Is the code in the eval also turned into wasm first then? Does this work as a JIT for wasm?


It actually works a bit differently. The eval is executed by the interpreter running inside the isolated wasm sandbox (StarlingMonkey). You can think of it as each sandbox having its own dedicated JavaScript engine.


Can you give a real world example?


I think the examples here are pretty good: https://boringsql.com/posts/beyond-start-end-columns/


This is kind of a complicated example, but here goes:

Say we want to create a report that determines how long a machine has been down, but we only want to count time during normal operational hours (aka operational downtime).

Normally this would be as simple as counting the time from when the machine was first reported down to when it was reported back up. However, since we're only allowed to count certain time ranges within a day as operational downtime, we need a way to essentially "mask out" the non-operational hours. This can be done efficiently by finding the intersection of the various time ranges and summing the duration of each intersection.

In the case of PostgreSQL, I would start by creating a tsrange (timestamp range) that encompasses the entire time range that the machine was down. I would then create multiple tsranges (one for each day the machine was down), limited to each day's operational hours. For each of these operational-hour ranges I would then take its intersection with the entire downtime range, and sum the durations of these intersecting ranges to get the amount of operational downtime for the machine.

PostgreSQL has a number of range functions and operators that make this very easy and efficient. In this example I would make use of the '*' operator to determine where two time ranges intersect, and then subtract the lower bound of that intersection (using the lower() range function) from its upper bound (using the upper() range function) to get the duration of only the "overlapping" parts of the two time ranges.
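A hedged sketch of the approach described above, with a made-up outage window and made-up operational hours (08:00–17:00); all names and values here are illustrative, not from a real schema:

```sql
-- Sum the operational portions of one outage by intersecting (*) the
-- outage tsrange with a per-day "business hours" tsrange.
SELECT sum(upper(outage * op_hours) - lower(outage * op_hours)) AS op_downtime
FROM (VALUES (tsrange('2024-01-10 14:30', '2024-01-12 09:15'))) AS o(outage),
LATERAL (
  SELECT tsrange(d::date + time '08:00', d::date + time '17:00') AS op_hours
  FROM generate_series(lower(o.outage)::date,
                       upper(o.outage)::date, interval '1 day') AS g(d)
) AS h
WHERE outage && op_hours;  -- keep only days whose hours overlap the outage
-- For this example: 2:30 (day 1) + 9:00 (day 2) + 1:15 (day 3) = 12:45:00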

Here's a list of functions and operators that can be used on range types:

https://www.postgresql.org/docs/9.3/functions-range.html

Hope this helps.

