Hacker News | halter73's comments

> 6. People who realize the game theory optimal strategy is to announce you're pressing blue and convince everyone else to press blue, but privately press red.

A lot of this analysis depends on accurately guessing how people will react, so it's probably hard to say any strategy is game theory optimal without a lot of unrealistic simplifying assumptions.

In a world where you're able to convince a lot of people of anything, it might be better to convince everyone to press red. If it looks like 99.99% of people will press red without your influence, you're probably best off spending your time convincing the 0.01% who might press blue not to do so.

It also has the upside of not making you a dirty liar. I wonder, what would Kant think about this hypothetical?


I think this conflates two different eras/layers. NT 4 famously moved the window manager/GDI/graphics subsystem into kernel mode, so that’s probably the “opposite direction” history. But modern GPU-driver recovery is WDDM/TDR, and it very much still exists: WDDM splits the display driver into user-mode and kernel-mode components, and TDR resets/recovers a hung GPU/driver instead of requiring a reboot.

https://learn.microsoft.com/en-us/windows-hardware/drivers/d...

https://learn.microsoft.com/en-us/windows-hardware/drivers/d...

I also update NVIDIA drivers regularly on Windows 11 without rebooting, though that’s install-time driver reload rather than exactly the same thing as TDR.


And I don't recall any practical way to recover from a crashed NT 3.x GUI subsystem.

It would presumably still function as a server and could be gracefully shut down remotely, but in the absence of anything like Remote Desktop or EMS[1], you'd be hard pressed to get much local troubleshooting done without rebooting the system anyway.

Also, as an NT user since 3.1, and a daily user of 3.5 and 3.51, I don't recall the GUI ever actually hanging or crashing (other than as a side effect of a bugcheck, which, by definition, is a crash triggered by code running in kernel mode).

That's one of the main reasons I was an early and enthusiastic NT user: while I can't say its performance was any better than "good enough", and then only on hardware that was at least comfortably above average in terms of CPU speed and RAM capacity, it was remarkably stable compared to every other PC OS I had used at the time.

Which, to be fair, would have been limited to MS-DOS, 16-bit Windows 2.x and 3.x, and OS/2 2.0 at the time. It remained true throughout the lifespan of Windows 9x and OS/2 (at least through 3.0, the last version I used), and neither FreeBSD nor Linux was as reliable once you added at least a basic X11 environment to reach rough feature parity. And while X11 did allow recovery after crashing, insofar as it could be restarted without rebooting the system, it still took all your GUI applications and xterm windows down with it when it crashed.

[1] https://en.wikipedia.org/wiki/Emergency_Management_Services


No, but they chose to include it. Presumably there were a lot of less apt references they chose not to include.


There's a link at the top of the README to https://0x00eb.itch.io/tux-racer where you can play it without needing to host it yourself.


I'm not sure this is old enough, but could you be referencing https://neal.fun/infinite-craft/ from https://news.ycombinator.com/item?id=39205020?


Thanks, no it wasn't that, it was a basic HTML form.


> Each request will hit a new random worker, but your response is supposed to go on some other long-running SSE stream.

It seems your knowledge is a little out of date. The big difference between the older SSE transport and the new "Streamable HTTP" transport is that the JSON-RPC response is supposed to be in the HTTP response body for the POST request containing the JSON-RPC request, not "some other long-running SSE stream". The response to the POST can be a text/event-stream if you want to send things like progress notifications before the final JSON-RPC response, or it can be a plain application/json response with a single JSON-RPC response message.
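To make that concrete, here's a minimal client-side sketch in Python (a hypothetical `parse_jsonrpc_response` helper, not code from any SDK) of handling the POST response body under either content type:

```python
import json

def parse_jsonrpc_response(content_type: str, body: str, request_id: int):
    """Return the JSON-RPC response matching request_id from an HTTP
    response body, which per Streamable HTTP may be either plain JSON
    or an SSE (text/event-stream) stream. Hypothetical sketch only."""
    if content_type.startswith("application/json"):
        msg = json.loads(body)
        return msg if msg.get("id") == request_id else None
    if content_type.startswith("text/event-stream"):
        # SSE events are blocks of "data: <json>" lines separated by blank lines.
        for event in body.split("\n\n"):
            data_lines = [line[len("data:"):].strip()
                          for line in event.splitlines()
                          if line.startswith("data:")]
            if not data_lines:
                continue
            msg = json.loads("".join(data_lines))
            # Skip notifications (no "id"), e.g. progress updates,
            # until the final JSON-RPC response arrives on the same stream.
            if msg.get("id") == request_id and ("result" in msg or "error" in msg):
                return msg
    return None
```

Either way, the response to the JSON-RPC request comes back on the HTTP response to the same POST, not on a separate long-lived stream.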

If you search the web for "MCP Streamable HTTP Lambda", you'll find plenty of working examples. I'm a little sympathetic to the argument that MCP is currently underspecified in some ways. For example, the spec doesn't currently mandate that the server MUST include the JSON-RPC response directly in the HTTP response body to the initiating POST request. Instead, it's something the spec says the server SHOULD do.

Currently, for my client-side Streamable implementation in the MCP C# SDK, we consider it an error if the response body ends without a JSON-RPC response we're expecting, and we haven't gotten complaints yet, but it's still very early. For now, it seems better to raise what's likely to be an error rather than wait for a timeout. However, we might change the behavior if and when we add resumability/redelivery support.

I think a lot of people in the comments are complaining about the Streamable HTTP transport without reading it [1]. I'm not saying it's perfect. It's still undergoing active development. Just on the Streamable HTTP front, we've removed batching support [2], because it added a fair amount of additional complexity without much additional value, and I'm sure we'll make plenty more changes.

As someone who's implemented a production HTTP/1, HTTP/2, and HTTP/3 server that implements [3], and also helped implement automatic OpenAPI document generation [4], I can say no protocol is perfect. The HTTP spec misspells "referrer", and it has a race condition when a client tries to send a request over an idle "keep-alive" connection at the same time the server tries to close it. The HTTP/2 spec lets the client just open and RST streams without the server having any way to apply backpressure on new requests. I don't have big complaints about HTTP/3 yet (and I'm sure part of that is that a lot of the complexity in HTTP/2 was properly handled by the transport layer, which for Kestrel means msquic), but give it more time and usage and I'm sure I'll have some. That's okay though; real artists ship.

1: https://modelcontextprotocol.io/specification/2025-03-26/bas...

2: https://github.com/modelcontextprotocol/modelcontextprotocol...

3: https://learn.microsoft.com/aspnet/core/fundamentals/servers...

4: https://learn.microsoft.com/aspnet/core/fundamentals/openapi...


> Client is allowed to start a new request providing a sessionId, which should maintain the state from the previous request.

Where do you store this state?


That GitHub issue is closed because it's been mostly completed. As of https://github.com/modelcontextprotocol/modelcontextprotocol..., the latest draft specification does not require the resource server to act as or proxy to the IdP. It just hasn't made its way into a ratified spec yet, but SDKs are already implementing the draft.


It really depends on if you're dealing with an async stream or a single async result as the input to the next function. If a is an access token needed to access resource b, you cannot access a and b at the same time. You have to serialize your operations.
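A minimal asyncio sketch of that data dependency (hypothetical `get_token`/`get_resource` stand-ins for the real calls):

```python
import asyncio

# Hypothetical stand-ins: get_resource can't even start until get_token
# finishes, because its input is get_token's output.
async def get_token() -> str:
    await asyncio.sleep(0)  # stand-in for an HTTP call to the token endpoint
    return "token-123"

async def get_resource(token: str) -> str:
    await asyncio.sleep(0)  # stand-in for an authenticated HTTP call
    return f"resource guarded by {token}"

async def main() -> str:
    # The dependency forces sequential awaits; asyncio.gather() can't
    # run these concurrently because the second await needs the first's result.
    token = await get_token()
    return await get_resource(token)
```

Concurrency helps when the inputs are independent; here they form a chain, so the awaits serialize no matter how you compose them.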


It probably depends on your lists. My uBlock Origin config had a global CSS rule blocking any elements with a #mc_embed_signup id.
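A global cosmetic filter like that would look something like this in uBlock Origin's filter syntax (my guess at the rule's shape, not copied verbatim from my config):

```
! Generic cosmetic filter: hide any element with id="mc_embed_signup"
! (the Mailchimp embedded signup form) on every site.
###mc_embed_signup
```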


Can you provide a citation for this? I’ve read older RFCs that "recommend" recipients allow single LFs to terminate headers for robustness. I’ve also read newer RFCs that weaken that recommendation and merely say the recipient "MAY" allow single LFs. I’ve never noticed an HTTP RFC say you can send headers without the full CRLF sequence, but maybe I missed something.

https://datatracker.ietf.org/doc/html/rfc2616#section-19.3

https://datatracker.ietf.org/doc/html/rfc9112#section-2.2
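In other words, the leniency is a recipient-side option, not permission for senders. A recipient exercising the RFC 9112 "MAY" looks roughly like this sketch (Python, hypothetical helper, not a real HTTP parser):

```python
def parse_header_block(raw: bytes) -> dict[str, str]:
    """Lenient header-field parsing: accept CRLF or a bare LF as the
    line terminator (the recipient MAY do this per RFC 9112 section 2.2),
    even though senders MUST terminate lines with CRLF."""
    headers = {}
    for line in raw.replace(b"\r\n", b"\n").split(b"\n"):
        if not line:
            continue  # blank line: end of the header section
        name, _, value = line.partition(b":")
        headers[name.decode().strip().lower()] = value.decode().strip()
    return headers
```

The point stands either way: tolerating bare LF on receipt says nothing about whether it's valid to send headers without the full CRLF sequence.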

