
1. Worktrees

2. Multiple simultaneous projects

3. Orchestration that includes handling of CI workflow

4. Active work to further improve or refine tooling

5. Experimentation producing muscle memory as experience versus code output
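Point 1 can be sketched concretely. The idea is one checkout per task branch so parallel sessions never clobber each other; this is a minimal demo against a throwaway repo (the branch name `task1` and temp paths are made up for illustration):

```shell
# Minimal git-worktree sketch: throwaway repo, then one worktree per task.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=tmp@example.com -c user.name=tmp \
    commit --allow-empty -q -m "init"
# One checkout per task branch; agents/sessions edit in parallel,
# each in its own working directory, sharing one object store.
git -C "$repo" worktree add -q "$repo-task1" -b task1
git -C "$repo" worktree list
```

`git worktree list` should show two entries: the main checkout and the `task1` worktree.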


This line makes a valid point. People record strangers all the time, whether openly or while trying to be sneaky.

Just because you don’t notice it doesn’t mean it doesn’t happen.

However, this is still a different thing from smart glasses, which can be further segmented by who designed them.


One of the features is “no vibe coding, classic development style.”

I think that’s kind of interesting, especially when building a retro enablement.

But I wonder: does this mean no AI was used at all? Even for, say, code review?

No judgment either way just curious for clarification.


This is from the original authors of ZSNES. I think they know what they're doing.

Smartassery aside, LLMs are pretty shit at esoteric stuff like this. Especially retro stuff: in my experience they mainly get super excited about how awesome and retro it is, and reiterate misunderstood factoids that aren't that important or that you probably know already. Like showing it to a Reddit comment section.


> This is from the original authors of ZSNES. I think they know what they're doing.

ZSNES is popular because of legacy and nostalgia. It was very fast and came at just the right time, but the developers aren't quite coding gods. Their expertise on the SNES is no match for later developers like Near and Sour.

With Super ZSNES, being a Unity project, you can get a fairly clear decompile of the IR, and the code doesn't seem all that impressive. It's alpha-quality, but generally coded like the original ZSNES was. Optimization is completely missing and accuracy is still out of whack, but synchronization is improved and it's doing a little better job of counting cycles. "GPU-powered" is a big stretch. ~~They're only taking advantage of it for fixed-function perspective transform on mode 7~~ Scratch that. It borrows the line-based algorithm from bsnes-hd, including the trick to interpolate the transform variables between mins and maxes. So the only GPU feature it uses is blending for bump-mapping.


Full disclosure since it wasn't mentioned: BearOso is a developer for a competing emulator project, Snes9x.

Not competing, I just develop it occasionally for fun. I'm the first to suggest bsnes or another of Near's projects as a first choice. Mesen2 is pretty good, too, but the Avalonia UI kind of irks me. Definitely run that through Retroarch.

I don't see anything worthwhile in this Super ZSNES project yet. We were all ZSNES users back in the day.


It's no Super SNES emulator, but Claude has had a bit of trouble porting an old VB.NET application of mine from 15 years ago to a newer web framework.

Funny, we're now entering the era of "Made with Handcrafted Code" or "Handmade". The same way furniture, carpets, and other handcrafts are made now... or Lamborghinis.

It's doubly funny because once the tools are released to the public, I bet the majority of those high-res mods will be AI generated.

> But I wonder does this mean no AI was used at all? Even for say, code review?

Would that be surprising to you?


Why do you ask?

I'm just curious whether you are so dependent on LLMs that the idea of not using them at all seems extreme to you.

I see, good to be curious.

"No vibe coding" is an ambiguous claim. I was asking whether they meant no AI-generated implementation, no AI assistance whatsoever, or just "not primarily generated by prompts."


Since it's obviously written in a casual, conversational tone, we should not expect the language to be perfectly precise. Given that, plus the fact that the author felt the need to call out "vibe coding" or AI at all, and then doubled down by adding the almost-redundant "classic development style", I would be willing to bet they did not use any AI for anything related to this project.

I find the specific singling out of vibe coding interesting for a different reason. Thinking back to just last month, I recall one of the rationales behind the huge DLSS5 backlash was that it ruined the artists' original vision. And here we are a month later, being amazed at an emulator that literally lets any casual player do just that through a funky point-and-click interface!

I guess if they added in an MCP server there would probably be a riot.


It only makes sense for hobby projects where the outcome is just an excuse for the journey. I mean if the point is to have fun coding, you want to do it yourself.

There is no way to verify it; the source code isn't available. Which is better: "classic development style" but closed source, or "vibe coded" but open source?

"no vibe coding" is different from "no ai". I'm not sure where the authors are going with this. No autocomplete? What level of autocomplete? No "deep learning"?

The prompt did not specify advanced gameplay.

I do not see instructions that assist with task decomposition, or with agent ~"motivation" to stay aligned over long stretches, as cargo culting.

See up thread for anecdotes [1].

> Decompose the user's query into all required sub-requests and confirm that each one is completed. Do not stop after completing only part of the request. Only terminate your turn when you are sure the problem is solved.

I see this as a demonstration of the strength of 5.5, since it suggests the model can be handed this clearly important role and ~one-shot requests like this.

I've been using a CLI-AI-first task tool I wrote to break complex "parent" or "umbrella" tasks into decomposed subtasks and then execute on them.

This has allowed my workflows to float above the ups and downs of model performance.

That said, having the AI do the planning for a big request like this internally is not good outside a demo.

Because you want the AI's planning to be part of the historical context, available for forensics when there are stalls, unwound details, or other unexpected issues at any point along the way.
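A toy sketch of that idea: persist the decomposition plan to the history *before* executing, so the plan survives for forensics even if a later step stalls. All names here are invented for illustration, not the commenter's actual tool:

```python
import json
import time

def run_umbrella_task(title, subtasks, log):
    """Record the plan up front, then log each subtask as it runs.

    Hypothetical sketch: the agent/model hand-off is elided."""
    log.append({"ts": time.time(), "event": "plan",
                "task": title, "subtasks": subtasks})  # plan recorded first
    for sub in subtasks:
        log.append({"ts": time.time(), "event": "start", "subtask": sub})
        # ... hand `sub` to the model/agent here ...
        log.append({"ts": time.time(), "event": "done", "subtask": sub})
    return log

history = []
run_umbrella_task("port legacy app", ["inventory forms", "map data layer"], history)
print(json.dumps([e["event"] for e in history]))
```

Even if execution dies mid-list, `history` still contains the original plan, which is the forensic property the comment is after.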

[1] https://news.ycombinator.com/item?id=47879819


> ...this is equivalent to providing all of your employees with a flamethrower, and then saying they bear all responsibility for the fires they start.

This is essentially the policy of most SWE groups with respect to merged PRs already, though, right?

You can / should use AI to accelerate SWE workflows and assist in reviews but if you merge something that is bad or breaks production that is on you.

> "Hey, don't blame us for giving them flamethrowers, it's company policy not to burn everything to the ground!".

Flamethrowers are inherently dangerous to the operator and are ~intended to be used to burn things to the ground.

I'm no expert on arms, but there is probably an analogy with a better fit out there.


The way LLMs are being forced upon the workforce in tech is just as bad, actually, yes.

> Flamethrowers are inherently dangerous to the operator and are ~intended to be used to burn things to the ground.

I actually think bringing up this point reinforces the analogy rather than undercutting it. LLMs are ~intended to spread disinformation, e.g. Deepseek on 1989, Grok going full Mecha-Hitler, ChatGPT selling out prompts to advertisers. One of the biggest impacts LLMs will have on human society is as a propaganda tool with a reach of billions.


I am counting on physical, semi technical contract work to pay once SWE opportunities shrink to the point where it’s not worth it anymore.

Now is the time to get handy if you aren't already. Robotics/physical automation will lag information work by a good stretch.


It has shown surprising stickiness, occupying some middle ground between full adoption and still being ~in the code.

I am starting to see some potential in moving back away from the pure terminal, toward a mixed modality with AI. But it is not in the direction of an IDE in any traditional sense.


A secret backup test to the pelican? This is as noteworthy as 4.7 dropping.


That flamingo is hilarious. Is that his beak or a huge joint he's smoking?


With the sunglasses, the long flamingo neck and the "joint", I immediately thought of the poster for Fear And Loathing In Las Vegas:

https://www.imdb.com/title/tt0120669/mediaviewer/rm264790937...

EDIT: Actually, it must be a beak. If you zoom in, only one eye is visible and it's facing to the left. The sunglasses are actually on sideways!


I thought this would provide easy query access, but it does not seem to.

Is there a CLI that queries hn.algolia.com and returns structured data?
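Not a full CLI, but the Algolia HN Search API at `hn.algolia.com/api/v1/search` is public and returns JSON, so a thin wrapper is easy to sketch. The helper names below are mine, not part of any existing tool:

```python
import json
import urllib.parse
import urllib.request

# Public Algolia HN Search endpoint.
API = "https://hn.algolia.com/api/v1/search"

def search_url(query, tags="story"):
    """Build a search URL; `tags` narrows results (e.g. "story", "comment")."""
    return API + "?" + urllib.parse.urlencode({"query": query, "tags": tags})

def parse_hits(payload):
    """Reduce the JSON payload to structured (title, url, points) tuples."""
    return [(h.get("title"), h.get("url"), h.get("points"))
            for h in payload.get("hits", [])]

if __name__ == "__main__":
    # Network call kept behind the main guard so the helpers stay testable offline.
    with urllib.request.urlopen(search_url("zsnes")) as resp:
        for row in parse_hits(json.load(resp)):
            print(row)
```

Piping the tuples through `json.dumps` (or printing TSV) would make it pipeline-friendly in the usual Unix sense.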


This sounded pretty good, a ~Mullvad for LLMs. Then:

> Strongwall.ai is led by Andrew Northwall, CEO and Bryce Nyeggen, CTO. Andrew has 20+ years in tech, former COO of Trump Media & Technology Group, architect behind the relaunch of Parler, and senior technologist for large-scale infrastructure and AI systems.

