Well, it's easy: you fabricate a complete horseshit business case, fudge all the numbers, create a nifty slide deck, and raise enough VC money to pay your early users in order to bootstrap your business.
Becoming an expert in one thing also narrows down the potential suitable work tremendously. And these days nobody wants to pay expert prices, since Claude can do the expert stuff with a non-expert (at least in their mind).
Usually experts are T-shaped. Acquiring expertise always means time spent away from learning something else.
The deeper and greater the expertise the more niche the topic usually becomes and the less demand there is.
The world might need X million web developers, but how many experts are there in browser technology? Or, within that domain, experts in something more niche like rendering, or a rendering niche like ANGLE and WebGL.. once you go this deep it boils down to a handful of individuals.
Also, I didn't say that there would be no demand, just that many businesses are not willing to pay for it anymore. Industry layoffs and AI are huge leverage that any potential employer can use to have all the advantage when negotiating compensation.
The T shape is important - but the base of the T doesn't have to be in tech. If you're an expert in a particular niche and a generalist in a particular business you'll find work.
E.g., a web developer who knows a lot about how lawyers run their business.
> Claude can do the expert stuff with a non-expert (at least in their mind)
Opus is far better at most surface-level tasks than it is at tasks that require deep knowledge and understanding of domains; someone who is a complete generalist (who thus has only surface level knowledge in many, many things) is far more replaceable with LLMs than someone who has deep knowledge in one.
Consider the way LLMs actually are created; they are not created from billions of repos with deep knowledge behind them. The majority of their knowledge comes from a massive amount of surface-level work that's been done and can be sampled from: React starter templates, starter templates + what little customization someone needed, blog-tutorial-level stuff.
It's indisputable (borderline tautological) that specialization trades breadth for depth. This (obviously?) implies the risk of targeting a narrower market, and the upside of being more attractive to that smaller population. It's a typical "quality over quantity" tradeoff.
To say there's no "sliver of truth" in pointing that out (let alone with an unwarranted jab about projecting fears) is... strange and maybe hypocritical. TL;DR: your response came across as emotional, passive-aggressive, and confusing.
> It's indisputable (borderline tautological) that specialization trades breadth for depth
I do not necessarily agree with this as stated. A specialist will have access to many roles within their speciality that are not open to a generalist. The market for generalists without deep expertise is also extremely crowded.
Even if it's true that AI can replace an expert, and I really don't think it is, except in the simplest minds, the AI training companies are aggressively hiring experts...
I tried that too, I called it "agents". (This was long before AI-mania.) An agent was an object that handled some aspect of behavior (like gravity and collision physics) "on behalf of" some entity, hence the name. The word I was actually searching for was probably "delegate", but I was a stupid 20-something.
ECS is to me still conceptually cleaner and easier to work with, if more tedious and boilerplate-y.
The other day I was working on some shaders with GLSL signed distance field functions. I asked Claude to review the code and it immediately offered to replace some functions with "known solutions". Turns out those functions were basically a verbatim copy of Inigo Quilez's work.
His work is available under a permissive license on the Internet, but somehow it doesn't seem right that a tool will just regurgitate someone else's work without any mention of copyright, license, or original authorship.
In the pre-LLM world one would at least have had to search for this information, find the site, understand the license, and acknowledge who the author is. Post-LLM, the tool will just blatantly plagiarize someone else's work, which you can then sign off on as your own. Disgusting.
> Turns out those functions were basically a verbatim copy of Inigo Quilez's work.
Are they? A lot of these were used by people 20+ years before Inigo wrote his blog posts. I wrote RenderMan shaders for VFX professionally in the 90's; you think about the problem, you "discover" (?) the math.
So they were known because they were known (a lot of them are also trivial).
Inigo's main credit is for cataloging them, especially the 3D ones, and making this knowledge available in one place, excellently presented.
And of course there's Shadertoy and the community, giving this knowledge a stage to play out on. I would say no one deserves more credit for getting people hooked on shader writing and proceduralism in rendering than this man.
But I would not feel bad about the math being regurgitated by an LLM.
There were very few people writing shaders (mostly for VFX, in RenderMan SL) in the 90's and shortly after.
So apart from the "Texturing and Modeling: A Procedural Approach" book, "The RenderMan Companion", and "Advanced RenderMan", there was no literature. The GPU Gems series closed some gaps in later years.
The RenderMan Repository website was what had shader source, and all pattern stuff was implicit (what we call 2D SDFs today) because of the REYES architecture of the renderers.
But knowledge about using SDFs in shaders mostly lived in people's heads. Whoever would write about it online would thus get quoted by an LLM.
Yeah, I find this super rude - in this example, the author distributed the code under a very permissive license, basically just wanting you to cite him as an author.
BAM, the LLM just strips all that out, basically pretending it just conjured an elegant solution out of thin air.
No wonder some people started calling the current generation of "AI" plagiarism machines - it really seems more fitting by the day.
LLMs have already told you these are "known solutions", which implicitly means they are established, non-original approaches. So the key point is really on the user side—if you simply ask one more question, like where these "known solutions" come from, the LLM will likely tell you that these formulas are attributed to Inigo Quilez.
So in my view, if you treat an LLM as a tool for retrieving knowledge or solutions, there isn't really a problem here. And honestly, the line between "knowledge" and "creation" can be quite blurry. For example, when you use Newton's Second Law (F = ma), you don't explicitly state that it comes from Isaac Newton every time—but that doesn't mean you're not respecting his contribution.
> Pre-LLM world one would at least have had to search for this information, find the site, understand the license and acknowledge who the author is. Post LLM the tool will just blatantly plagiarize someone else work which you can then sign off on as your own
These don't contradict each other though, you could "blatantly plagiarize someone else work" before as well. LLMs just add another layer in between.
Copyright violation would happen before LLMs yes, but it would have to be done by a person who either didn’t understand copyright (which is not a valid defence in court), or intentionally chose to ignore it.
With LLMs, future generations are growing up being handed code that may or may not be a verbatim copy of something that someone else originally wrote with specific licensing terms, but with no mention of any license terms or origin being provided by the LLM.
It remains to be seen if there will be any lawsuits in the future specifically about source code that is substantially copied from someone else indirectly via LLM use. In any case, I doubt that even if such lawsuits happen they will help small developers writing open source. It would probably be one of the big tech companies suing other companies or persons, and any money resulting from such a lawsuit would go to the big tech company doing the suing.
An assertion can be arbitrarily expensive to evaluate. This may be worth the cost in a debug build but not in a release build. If all of your assertions are cheap, they likely are not checking nearly as much as they could or should.
Possibly, but I've never in practice seen an assert evaluation be the first thing to optimize. Anyway, should that happen, consider removing just that one assert.
That said, slow or fast is kind of a moot point if the program is not correct. So my advice is to always leave all asserts in. Offensive programming.
Good luck to you. Having worked in this space for around 10 years I can say it's nearly impossible to arouse anyone's interest since the market is so totally saturated.
For a new engine to take off, it needs to do something nobody else is doing, so that it's got that elusive USP.
Getting visibility, SEO hacking, etc. is more important than the product itself.
To me this kind of "no need to change anything" implies stability, but there's a younger cohort of developers who are used to everything changing every week, and who think that something older than a week is "unmaintained" and thus buggy and broken.
One of the earliest security issues that I remember hitting Windows was that if you had a server running IIS, anyone could easily put a properly encoded string in the browser and run any command by causing IIS to shell out to cmd.
I mentioned in another reply the 12 different ways that you had to define a string depending on which API you had to call.
Can you imagine all of the vulnerabilities in Windows caused by the layers and layers of sediment built up over 30 years?
It would be as if the modern ARM Macs had emulators for 68K, PPC, 32-bit x86, and 64-bit x86 apps (which they do) and had 64-bit Carbon libraries (just to keep Adobe happy).
I think it's at least as much a working-environment preference.
Once I became experienced enough to have opinions about things like my editor and terminal emulator... suddenly the Visual Studio environment wasn't nearly as appealing. The Unix philosophy of things being just text that you can just edit in the editor you're already using made much more sense to me than digging through nested submenus to change configuration.
I certainly respect the unmatched Win32 backwards/forwards compatibility story. But as a developer in my younger years, particularly pre-WSL, I could get more modern tools that were less coupled to my OS or language choice, more money, and a company culture more relevant to me in my 20s by jumping into Ruby/Rails development rather than the Windows development ecosystem, despite the things it does really well.
Or to put it differently: it wasn't the stability of the API that made Windows development seem boring. It was the kind of companies that did it, the rest of the surrounding ecosystem of tools they did it with, and the way they paid for doing it. (But even when I was actually writing code full time, some corners of the JS ecosystem seemed to lean too hard into the wild-west mentality. Still do, I suspect, just now it's TypeScript in support of AI.)
Seems to me that really the simplest solution to the author's problem is to write C++ safely. I mean... this is a trivial utility app. If you can't get that right in modern C++, you should probably just not even pretend to be a C++ programmer.
C++ is hard to get safe in complex systems with hard performance requirements.
If the system is simple and you don't give a shit about performance, it's very very easy to make C++ safe. Just use shared_ptr everywhere. Or, throw everything in a vector and don't destroy it until the end of the program. Whatever, who cares.
No seriously, why would you need a graphics engine for procedurally generating content? In this particular case, for example, his "content" is the world map expressed in some units (a tile grid) across two axes. Then your generation algorithm produces that 2D data and that's that.
Fake it until you make it baby!