This is from the original authors of ZSNES. I think they know what they're doing.
Smartassery aside, LLMs are pretty shit at esoteric stuff like this, especially retro stuff. In my experience they mainly get super excited about how awesome and retro it is and reiterate misunderstood factoids that aren't that important or that you probably know already. It's like showing it to a Reddit comment section.
> This is from the original authors of ZSNES. I think they know what they're doing.
ZSNES is popular because of legacy and nostalgia. It was very fast and came at just the right time, but the developers aren't quite coding gods. Their expertise on the SNES is no match for later developers like Near and Sour.
Super ZSNES, being a Unity project, gives you a fairly clear decompile of the IR, and the code doesn't seem all that impressive. It's alpha-quality, but generally coded like the original ZSNES was. Optimization is completely missing and accuracy is still out of whack, but synchronization is improved and it does a somewhat better job of counting cycles. "GPU-powered" is a big stretch. ~~They're only taking advantage of it for the fixed-function perspective transform on mode 7~~ Scratch that. It borrows the line-based algorithm from bsnes-hd, including the trick of interpolating the transform variables between their mins and maxes. So the only GPU feature it uses is blending for bump-mapping.
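For the curious, the line-based idea looks roughly like this. A minimal sketch with assumed names (not the actual Super ZSNES or bsnes-hd code): the mode 7 matrix registers latched for a scanline get linearly interpolated between their min and max values for each upscaled sub-line, instead of being re-derived per pixel.

```python
def lerp(lo: float, hi: float, t: float) -> float:
    return lo + (hi - lo) * t

def mode7_subline(y: float, lo: dict, hi: dict, t: float, width: int = 256):
    """lo/hi hold the min/max latched register values for this scanline;
    t in [0, 1) picks the interpolated sub-line when upscaling."""
    a, b = lerp(lo["a"], hi["a"], t), lerp(lo["b"], hi["b"], t)
    c, d = lerp(lo["c"], hi["c"], t), lerp(lo["d"], hi["d"], t)
    x0, y0 = lerp(lo["x0"], hi["x0"], t), lerp(lo["y0"], hi["y0"], t)
    out = []
    for x in range(width):
        # simplified mode 7 affine transform, relative to the origin (x0, y0);
        # the real hardware also folds in the scroll registers
        u = a * (x - x0) + b * (y - y0) + x0
        v = c * (x - x0) + d * (y - y0) + y0
        out.append((u, v))  # texel coordinates to sample from the tilemap
    return out
```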
Not competing, I just develop it occasionally for fun. I'm the first to suggest bsnes or another of Near's projects as a first choice. Mesen2 is pretty good too, but the Avalonia UI kind of irks me. Definitely run that through RetroArch.
I don't see anything worthwhile in this Super ZSNES project yet. We were all ZSNES users back in the day.
Funny, we're now entering the era of "Made with Handcrafted Code" or "Handmade", the same way furniture, carpets, and other "handcrafts" are made now... or Lamborghinis.
"No vibe coding" is an ambiguous claim. I was asking whether they meant no AI-generated implementation, no AI assistance whatsoever, or just "not primarily generated by prompts."
Since it's obviously written in a casual, conversational tone, we should not expect the language to be perfectly precise. Given that, and the fact that the author felt the need to call out "vibe coding" or AI at all and then double down by adding the almost-redundant "classic development style", I would be willing to bet they did not use any AI for anything at all related to this project.
I find the specific singling out of vibe coding interesting for a different reason: thinking back to just last month, I recall one of the rationales behind the huge DLSS5 backlash was that it ruined the artist's original vision. And here we are a month later, amazed at an emulator that literally lets any casual player do just that through a funky point-and-click interface!
I guess if they added in an MCP server there would probably be a riot.
It only makes sense for hobby projects where the outcome is just an excuse for the journey. I mean if the point is to have fun coding, you want to do it yourself.
"no vibe coding" is different from "no ai". I'm not sure where the authors are going with this. No autocomplete? What level of autocomplete? No "deep learning"?
I do not see instructions that assist with task decomposition and agent ~"motivation" to stay aligned over long periods as cargo culting.
See upthread for anecdotes [1].
> Decompose the user's query into all required sub-requests and confirm that each one is completed. Do not stop after completing only part of the request. Only terminate your turn when you are sure the problem is solved.
I see this as a portrayal of the strength of 5.5, since it suggests the model can be assigned this clearly important role to ~one-shot requests like this.
I've been using a cli-ai-first task tool I wrote to process complex "parent" or "umbrella" tasks into decomposed subtasks and then execute on them.
This has allowed my workflows to float above the ups and downs of model performance.
That said, having the AI do the planning for a big request like this internally is not good outside a demo.
Because you want the AI's planning to be part of the historical context, available for forensics when there are stalls, unwound details, or other unexpected issues at any point along the way.
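Roughly the shape of that pattern, sketched with hypothetical names (not the actual tool): the plan is persisted before execution, so it stays in the record even if a later step stalls.

```python
import json
from pathlib import Path

PLAN_LOG = Path("plan_log.jsonl")  # append-only record for later forensics

def decompose(umbrella_task: str, llm) -> list[str]:
    """Ask the model for subtasks; `llm` is any callable prompt -> text."""
    prompt = (
        "Decompose the following task into an ordered JSON list of "
        f"self-contained subtasks:\n{umbrella_task}"
    )
    return json.loads(llm(prompt))

def run(umbrella_task: str, llm) -> None:
    subtasks = decompose(umbrella_task, llm)
    # Persist the plan *before* executing, so stalls or unwound details
    # can be traced back to what the model intended at the time.
    with PLAN_LOG.open("a") as f:
        f.write(json.dumps({"task": umbrella_task, "plan": subtasks}) + "\n")
    for i, sub in enumerate(subtasks, 1):
        result = llm(f"Complete this subtask and report the outcome:\n{sub}")
        with PLAN_LOG.open("a") as f:
            f.write(json.dumps({"step": i, "subtask": sub,
                                "result": result}) + "\n")
```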
The way LLMs are being forced upon the workforce in tech is just as bad, actually, yes.
> Flamethrowers are inherently dangerous to the operator and are ~intended to be used to burn things to the ground.
I actually think bringing up this point reinforces the analogy rather than undercutting it. LLMs are ~intended to spread disinformation, e.g. DeepSeek on 1989, Grok going full Mecha-Hitler, ChatGPT selling out prompts to advertisers. One of the biggest impacts LLMs will have on human society is as a propaganda tool with a reach of billions.
It has shown surprising stickiness, occupying some middle ground between full adoption and still being ~in the code.
I am starting to see some potential in moving back away from the pure terminal toward a mixed modality with AI. But it is not in the direction of an IDE in any traditional sense.
This sounded pretty good, a ~Mullvad for LLMs. Then:
> Strongwall.ai is led by Andrew Northwall, CEO and Bryce Nyeggen, CTO. Andrew has 20+ years in tech, former COO of Trump Media & Technology Group, architect behind the relaunch of Parler, and senior technologist for large-scale infrastructure and AI systems.
2. Multiple simultaneous projects
3. Orchestration that includes handling of CI workflow
4. Active work to further improve or refine tooling
5. Experimentation producing muscle memory as experience versus code output